Oct  2 06:48:58 np0005466030 kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct  2 06:48:58 np0005466030 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct  2 06:48:58 np0005466030 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  2 06:48:58 np0005466030 kernel: BIOS-provided physical RAM map:
Oct  2 06:48:58 np0005466030 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct  2 06:48:58 np0005466030 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct  2 06:48:58 np0005466030 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct  2 06:48:58 np0005466030 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct  2 06:48:58 np0005466030 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct  2 06:48:58 np0005466030 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct  2 06:48:58 np0005466030 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct  2 06:48:58 np0005466030 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct  2 06:48:58 np0005466030 kernel: NX (Execute Disable) protection: active
Oct  2 06:48:58 np0005466030 kernel: APIC: Static calls initialized
Oct  2 06:48:58 np0005466030 kernel: SMBIOS 2.8 present.
Oct  2 06:48:58 np0005466030 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct  2 06:48:58 np0005466030 kernel: Hypervisor detected: KVM
Oct  2 06:48:58 np0005466030 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct  2 06:48:58 np0005466030 kernel: kvm-clock: using sched offset of 4117678470 cycles
Oct  2 06:48:58 np0005466030 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct  2 06:48:58 np0005466030 kernel: tsc: Detected 2799.886 MHz processor
Oct  2 06:48:58 np0005466030 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct  2 06:48:58 np0005466030 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct  2 06:48:58 np0005466030 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct  2 06:48:58 np0005466030 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct  2 06:48:58 np0005466030 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct  2 06:48:58 np0005466030 kernel: Using GB pages for direct mapping
Oct  2 06:48:58 np0005466030 kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct  2 06:48:58 np0005466030 kernel: ACPI: Early table checksum verification disabled
Oct  2 06:48:58 np0005466030 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct  2 06:48:58 np0005466030 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 06:48:58 np0005466030 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 06:48:58 np0005466030 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 06:48:58 np0005466030 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct  2 06:48:58 np0005466030 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 06:48:58 np0005466030 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  2 06:48:58 np0005466030 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct  2 06:48:58 np0005466030 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct  2 06:48:58 np0005466030 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct  2 06:48:58 np0005466030 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct  2 06:48:58 np0005466030 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct  2 06:48:58 np0005466030 kernel: No NUMA configuration found
Oct  2 06:48:58 np0005466030 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct  2 06:48:58 np0005466030 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct  2 06:48:58 np0005466030 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct  2 06:48:58 np0005466030 kernel: Zone ranges:
Oct  2 06:48:58 np0005466030 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct  2 06:48:58 np0005466030 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct  2 06:48:58 np0005466030 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct  2 06:48:58 np0005466030 kernel:  Device   empty
Oct  2 06:48:58 np0005466030 kernel: Movable zone start for each node
Oct  2 06:48:58 np0005466030 kernel: Early memory node ranges
Oct  2 06:48:58 np0005466030 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct  2 06:48:58 np0005466030 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct  2 06:48:58 np0005466030 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct  2 06:48:58 np0005466030 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct  2 06:48:58 np0005466030 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct  2 06:48:58 np0005466030 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct  2 06:48:58 np0005466030 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct  2 06:48:58 np0005466030 kernel: ACPI: PM-Timer IO Port: 0x608
Oct  2 06:48:58 np0005466030 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct  2 06:48:58 np0005466030 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct  2 06:48:58 np0005466030 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct  2 06:48:58 np0005466030 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct  2 06:48:58 np0005466030 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct  2 06:48:58 np0005466030 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct  2 06:48:58 np0005466030 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct  2 06:48:58 np0005466030 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct  2 06:48:58 np0005466030 kernel: TSC deadline timer available
Oct  2 06:48:58 np0005466030 kernel: CPU topo: Max. logical packages:   8
Oct  2 06:48:58 np0005466030 kernel: CPU topo: Max. logical dies:       8
Oct  2 06:48:58 np0005466030 kernel: CPU topo: Max. dies per package:   1
Oct  2 06:48:58 np0005466030 kernel: CPU topo: Max. threads per core:   1
Oct  2 06:48:58 np0005466030 kernel: CPU topo: Num. cores per package:     1
Oct  2 06:48:58 np0005466030 kernel: CPU topo: Num. threads per package:   1
Oct  2 06:48:58 np0005466030 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct  2 06:48:58 np0005466030 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct  2 06:48:58 np0005466030 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct  2 06:48:58 np0005466030 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct  2 06:48:58 np0005466030 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct  2 06:48:58 np0005466030 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct  2 06:48:58 np0005466030 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct  2 06:48:58 np0005466030 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct  2 06:48:58 np0005466030 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct  2 06:48:58 np0005466030 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct  2 06:48:58 np0005466030 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct  2 06:48:58 np0005466030 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct  2 06:48:58 np0005466030 kernel: Booting paravirtualized kernel on KVM
Oct  2 06:48:58 np0005466030 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct  2 06:48:58 np0005466030 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct  2 06:48:58 np0005466030 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct  2 06:48:58 np0005466030 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct  2 06:48:58 np0005466030 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  2 06:48:58 np0005466030 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct  2 06:48:58 np0005466030 kernel: random: crng init done
Oct  2 06:48:58 np0005466030 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct  2 06:48:58 np0005466030 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct  2 06:48:58 np0005466030 kernel: Fallback order for Node 0: 0 
Oct  2 06:48:58 np0005466030 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct  2 06:48:58 np0005466030 kernel: Policy zone: Normal
Oct  2 06:48:58 np0005466030 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct  2 06:48:58 np0005466030 kernel: software IO TLB: area num 8.
Oct  2 06:48:58 np0005466030 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct  2 06:48:58 np0005466030 kernel: ftrace: allocating 49370 entries in 193 pages
Oct  2 06:48:58 np0005466030 kernel: ftrace: allocated 193 pages with 3 groups
Oct  2 06:48:58 np0005466030 kernel: Dynamic Preempt: voluntary
Oct  2 06:48:58 np0005466030 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct  2 06:48:58 np0005466030 kernel: rcu: 	RCU event tracing is enabled.
Oct  2 06:48:58 np0005466030 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct  2 06:48:58 np0005466030 kernel: 	Trampoline variant of Tasks RCU enabled.
Oct  2 06:48:58 np0005466030 kernel: 	Rude variant of Tasks RCU enabled.
Oct  2 06:48:58 np0005466030 kernel: 	Tracing variant of Tasks RCU enabled.
Oct  2 06:48:58 np0005466030 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct  2 06:48:58 np0005466030 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct  2 06:48:58 np0005466030 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  2 06:48:58 np0005466030 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  2 06:48:58 np0005466030 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  2 06:48:58 np0005466030 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct  2 06:48:58 np0005466030 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct  2 06:48:58 np0005466030 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct  2 06:48:58 np0005466030 kernel: Console: colour VGA+ 80x25
Oct  2 06:48:58 np0005466030 kernel: printk: console [ttyS0] enabled
Oct  2 06:48:58 np0005466030 kernel: ACPI: Core revision 20230331
Oct  2 06:48:58 np0005466030 kernel: APIC: Switch to symmetric I/O mode setup
Oct  2 06:48:58 np0005466030 kernel: x2apic enabled
Oct  2 06:48:58 np0005466030 kernel: APIC: Switched APIC routing to: physical x2apic
Oct  2 06:48:58 np0005466030 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct  2 06:48:58 np0005466030 kernel: Calibrating delay loop (skipped) preset value.. 5599.77 BogoMIPS (lpj=2799886)
Oct  2 06:48:58 np0005466030 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct  2 06:48:58 np0005466030 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct  2 06:48:58 np0005466030 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct  2 06:48:58 np0005466030 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct  2 06:48:58 np0005466030 kernel: Spectre V2 : Mitigation: Retpolines
Oct  2 06:48:58 np0005466030 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct  2 06:48:58 np0005466030 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct  2 06:48:58 np0005466030 kernel: RETBleed: Mitigation: untrained return thunk
Oct  2 06:48:58 np0005466030 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct  2 06:48:58 np0005466030 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct  2 06:48:58 np0005466030 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct  2 06:48:58 np0005466030 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct  2 06:48:58 np0005466030 kernel: x86/bugs: return thunk changed
Oct  2 06:48:58 np0005466030 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct  2 06:48:58 np0005466030 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct  2 06:48:58 np0005466030 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct  2 06:48:58 np0005466030 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct  2 06:48:58 np0005466030 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct  2 06:48:58 np0005466030 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct  2 06:48:58 np0005466030 kernel: Freeing SMP alternatives memory: 40K
Oct  2 06:48:58 np0005466030 kernel: pid_max: default: 32768 minimum: 301
Oct  2 06:48:58 np0005466030 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct  2 06:48:58 np0005466030 kernel: landlock: Up and running.
Oct  2 06:48:58 np0005466030 kernel: Yama: becoming mindful.
Oct  2 06:48:58 np0005466030 kernel: SELinux:  Initializing.
Oct  2 06:48:58 np0005466030 kernel: LSM support for eBPF active
Oct  2 06:48:58 np0005466030 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  2 06:48:58 np0005466030 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  2 06:48:58 np0005466030 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct  2 06:48:58 np0005466030 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct  2 06:48:58 np0005466030 kernel: ... version:                0
Oct  2 06:48:58 np0005466030 kernel: ... bit width:              48
Oct  2 06:48:58 np0005466030 kernel: ... generic registers:      6
Oct  2 06:48:58 np0005466030 kernel: ... value mask:             0000ffffffffffff
Oct  2 06:48:58 np0005466030 kernel: ... max period:             00007fffffffffff
Oct  2 06:48:58 np0005466030 kernel: ... fixed-purpose events:   0
Oct  2 06:48:58 np0005466030 kernel: ... event mask:             000000000000003f
Oct  2 06:48:58 np0005466030 kernel: signal: max sigframe size: 1776
Oct  2 06:48:58 np0005466030 kernel: rcu: Hierarchical SRCU implementation.
Oct  2 06:48:58 np0005466030 kernel: rcu: 	Max phase no-delay instances is 400.
Oct  2 06:48:58 np0005466030 kernel: smp: Bringing up secondary CPUs ...
Oct  2 06:48:58 np0005466030 kernel: smpboot: x86: Booting SMP configuration:
Oct  2 06:48:58 np0005466030 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct  2 06:48:58 np0005466030 kernel: smp: Brought up 1 node, 8 CPUs
Oct  2 06:48:58 np0005466030 kernel: smpboot: Total of 8 processors activated (44798.17 BogoMIPS)
Oct  2 06:48:58 np0005466030 kernel: node 0 deferred pages initialised in 20ms
Oct  2 06:48:58 np0005466030 kernel: Memory: 7765484K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616504K reserved, 0K cma-reserved)
Oct  2 06:48:58 np0005466030 kernel: devtmpfs: initialized
Oct  2 06:48:58 np0005466030 kernel: x86/mm: Memory block size: 128MB
Oct  2 06:48:58 np0005466030 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct  2 06:48:58 np0005466030 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct  2 06:48:58 np0005466030 kernel: pinctrl core: initialized pinctrl subsystem
Oct  2 06:48:58 np0005466030 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct  2 06:48:58 np0005466030 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct  2 06:48:58 np0005466030 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct  2 06:48:58 np0005466030 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct  2 06:48:58 np0005466030 kernel: audit: initializing netlink subsys (disabled)
Oct  2 06:48:58 np0005466030 kernel: audit: type=2000 audit(1759402137.232:1): state=initialized audit_enabled=0 res=1
Oct  2 06:48:58 np0005466030 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct  2 06:48:58 np0005466030 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct  2 06:48:58 np0005466030 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct  2 06:48:58 np0005466030 kernel: cpuidle: using governor menu
Oct  2 06:48:58 np0005466030 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct  2 06:48:58 np0005466030 kernel: PCI: Using configuration type 1 for base access
Oct  2 06:48:58 np0005466030 kernel: PCI: Using configuration type 1 for extended access
Oct  2 06:48:58 np0005466030 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct  2 06:48:58 np0005466030 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct  2 06:48:58 np0005466030 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct  2 06:48:58 np0005466030 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct  2 06:48:58 np0005466030 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct  2 06:48:58 np0005466030 kernel: Demotion targets for Node 0: null
Oct  2 06:48:58 np0005466030 kernel: cryptd: max_cpu_qlen set to 1000
Oct  2 06:48:58 np0005466030 kernel: ACPI: Added _OSI(Module Device)
Oct  2 06:48:58 np0005466030 kernel: ACPI: Added _OSI(Processor Device)
Oct  2 06:48:58 np0005466030 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct  2 06:48:58 np0005466030 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct  2 06:48:58 np0005466030 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct  2 06:48:58 np0005466030 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct  2 06:48:58 np0005466030 kernel: ACPI: Interpreter enabled
Oct  2 06:48:58 np0005466030 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct  2 06:48:58 np0005466030 kernel: ACPI: Using IOAPIC for interrupt routing
Oct  2 06:48:58 np0005466030 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct  2 06:48:58 np0005466030 kernel: PCI: Using E820 reservations for host bridge windows
Oct  2 06:48:58 np0005466030 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct  2 06:48:58 np0005466030 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct  2 06:48:58 np0005466030 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [3] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [4] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [5] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [6] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [7] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [8] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [9] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [10] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [11] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [12] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [13] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [14] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [15] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [16] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [17] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [18] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [19] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [20] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [21] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [22] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [23] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [24] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [25] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [26] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [27] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [28] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [29] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [30] registered
Oct  2 06:48:58 np0005466030 kernel: acpiphp: Slot [31] registered
Oct  2 06:48:58 np0005466030 kernel: PCI host bridge to bus 0000:00
Oct  2 06:48:58 np0005466030 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct  2 06:48:58 np0005466030 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct  2 06:48:58 np0005466030 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct  2 06:48:58 np0005466030 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct  2 06:48:58 np0005466030 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct  2 06:48:58 np0005466030 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct  2 06:48:58 np0005466030 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct  2 06:48:58 np0005466030 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct  2 06:48:58 np0005466030 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct  2 06:48:58 np0005466030 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct  2 06:48:58 np0005466030 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct  2 06:48:58 np0005466030 kernel: iommu: Default domain type: Translated
Oct  2 06:48:58 np0005466030 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct  2 06:48:58 np0005466030 kernel: SCSI subsystem initialized
Oct  2 06:48:58 np0005466030 kernel: ACPI: bus type USB registered
Oct  2 06:48:58 np0005466030 kernel: usbcore: registered new interface driver usbfs
Oct  2 06:48:58 np0005466030 kernel: usbcore: registered new interface driver hub
Oct  2 06:48:58 np0005466030 kernel: usbcore: registered new device driver usb
Oct  2 06:48:58 np0005466030 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct  2 06:48:58 np0005466030 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct  2 06:48:58 np0005466030 kernel: PTP clock support registered
Oct  2 06:48:58 np0005466030 kernel: EDAC MC: Ver: 3.0.0
Oct  2 06:48:58 np0005466030 kernel: NetLabel: Initializing
Oct  2 06:48:58 np0005466030 kernel: NetLabel:  domain hash size = 128
Oct  2 06:48:58 np0005466030 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct  2 06:48:58 np0005466030 kernel: NetLabel:  unlabeled traffic allowed by default
Oct  2 06:48:58 np0005466030 kernel: PCI: Using ACPI for IRQ routing
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct  2 06:48:58 np0005466030 kernel: vgaarb: loaded
Oct  2 06:48:58 np0005466030 kernel: clocksource: Switched to clocksource kvm-clock
Oct  2 06:48:58 np0005466030 kernel: VFS: Disk quotas dquot_6.6.0
Oct  2 06:48:58 np0005466030 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct  2 06:48:58 np0005466030 kernel: pnp: PnP ACPI init
Oct  2 06:48:58 np0005466030 kernel: pnp: PnP ACPI: found 5 devices
Oct  2 06:48:58 np0005466030 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct  2 06:48:58 np0005466030 kernel: NET: Registered PF_INET protocol family
Oct  2 06:48:58 np0005466030 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct  2 06:48:58 np0005466030 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct  2 06:48:58 np0005466030 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct  2 06:48:58 np0005466030 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct  2 06:48:58 np0005466030 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct  2 06:48:58 np0005466030 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct  2 06:48:58 np0005466030 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct  2 06:48:58 np0005466030 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  2 06:48:58 np0005466030 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  2 06:48:58 np0005466030 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct  2 06:48:58 np0005466030 kernel: NET: Registered PF_XDP protocol family
Oct  2 06:48:58 np0005466030 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct  2 06:48:58 np0005466030 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct  2 06:48:58 np0005466030 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct  2 06:48:58 np0005466030 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct  2 06:48:58 np0005466030 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct  2 06:48:58 np0005466030 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct  2 06:48:58 np0005466030 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 72333 usecs
Oct  2 06:48:58 np0005466030 kernel: PCI: CLS 0 bytes, default 64
Oct  2 06:48:58 np0005466030 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct  2 06:48:58 np0005466030 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct  2 06:48:58 np0005466030 kernel: ACPI: bus type thunderbolt registered
Oct  2 06:48:58 np0005466030 kernel: Trying to unpack rootfs image as initramfs...
Oct  2 06:48:58 np0005466030 kernel: Initialise system trusted keyrings
Oct  2 06:48:58 np0005466030 kernel: Key type blacklist registered
Oct  2 06:48:58 np0005466030 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct  2 06:48:58 np0005466030 kernel: zbud: loaded
Oct  2 06:48:58 np0005466030 kernel: integrity: Platform Keyring initialized
Oct  2 06:48:58 np0005466030 kernel: integrity: Machine keyring initialized
Oct  2 06:48:58 np0005466030 kernel: Freeing initrd memory: 86104K
Oct  2 06:48:58 np0005466030 kernel: NET: Registered PF_ALG protocol family
Oct  2 06:48:58 np0005466030 kernel: xor: automatically using best checksumming function   avx       
Oct  2 06:48:58 np0005466030 kernel: Key type asymmetric registered
Oct  2 06:48:58 np0005466030 kernel: Asymmetric key parser 'x509' registered
Oct  2 06:48:58 np0005466030 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct  2 06:48:58 np0005466030 kernel: io scheduler mq-deadline registered
Oct  2 06:48:58 np0005466030 kernel: io scheduler kyber registered
Oct  2 06:48:58 np0005466030 kernel: io scheduler bfq registered
Oct  2 06:48:58 np0005466030 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct  2 06:48:58 np0005466030 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct  2 06:48:58 np0005466030 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct  2 06:48:58 np0005466030 kernel: ACPI: button: Power Button [PWRF]
Oct  2 06:48:58 np0005466030 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct  2 06:48:58 np0005466030 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct  2 06:48:58 np0005466030 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct  2 06:48:58 np0005466030 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct  2 06:48:58 np0005466030 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct  2 06:48:58 np0005466030 kernel: Non-volatile memory driver v1.3
Oct  2 06:48:58 np0005466030 kernel: rdac: device handler registered
Oct  2 06:48:58 np0005466030 kernel: hp_sw: device handler registered
Oct  2 06:48:58 np0005466030 kernel: emc: device handler registered
Oct  2 06:48:58 np0005466030 kernel: alua: device handler registered
Oct  2 06:48:58 np0005466030 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct  2 06:48:58 np0005466030 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct  2 06:48:58 np0005466030 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct  2 06:48:58 np0005466030 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct  2 06:48:58 np0005466030 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct  2 06:48:58 np0005466030 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct  2 06:48:58 np0005466030 kernel: usb usb1: Product: UHCI Host Controller
Oct  2 06:48:58 np0005466030 kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct  2 06:48:58 np0005466030 kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct  2 06:48:58 np0005466030 kernel: hub 1-0:1.0: USB hub found
Oct  2 06:48:58 np0005466030 kernel: hub 1-0:1.0: 2 ports detected
Oct  2 06:48:58 np0005466030 kernel: usbcore: registered new interface driver usbserial_generic
Oct  2 06:48:58 np0005466030 kernel: usbserial: USB Serial support registered for generic
Oct  2 06:48:58 np0005466030 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct  2 06:48:58 np0005466030 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct  2 06:48:58 np0005466030 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct  2 06:48:58 np0005466030 kernel: mousedev: PS/2 mouse device common for all mice
Oct  2 06:48:58 np0005466030 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct  2 06:48:58 np0005466030 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct  2 06:48:58 np0005466030 kernel: rtc_cmos 00:04: registered as rtc0
Oct  2 06:48:58 np0005466030 kernel: rtc_cmos 00:04: setting system clock to 2025-10-02T10:48:57 UTC (1759402137)
Oct  2 06:48:58 np0005466030 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct  2 06:48:58 np0005466030 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct  2 06:48:58 np0005466030 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct  2 06:48:58 np0005466030 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct  2 06:48:58 np0005466030 kernel: usbcore: registered new interface driver usbhid
Oct  2 06:48:58 np0005466030 kernel: usbhid: USB HID core driver
Oct  2 06:48:58 np0005466030 kernel: drop_monitor: Initializing network drop monitor service
Oct  2 06:48:58 np0005466030 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct  2 06:48:58 np0005466030 kernel: Initializing XFRM netlink socket
Oct  2 06:48:58 np0005466030 kernel: NET: Registered PF_INET6 protocol family
Oct  2 06:48:58 np0005466030 kernel: Segment Routing with IPv6
Oct  2 06:48:58 np0005466030 kernel: NET: Registered PF_PACKET protocol family
Oct  2 06:48:58 np0005466030 kernel: mpls_gso: MPLS GSO support
Oct  2 06:48:58 np0005466030 kernel: IPI shorthand broadcast: enabled
Oct  2 06:48:58 np0005466030 kernel: AVX2 version of gcm_enc/dec engaged.
Oct  2 06:48:58 np0005466030 kernel: AES CTR mode by8 optimization enabled
Oct  2 06:48:58 np0005466030 kernel: sched_clock: Marking stable (1154002685, 145041584)->(1420934504, -121890235)
Oct  2 06:48:58 np0005466030 kernel: registered taskstats version 1
Oct  2 06:48:58 np0005466030 kernel: Loading compiled-in X.509 certificates
Oct  2 06:48:58 np0005466030 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  2 06:48:58 np0005466030 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct  2 06:48:58 np0005466030 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct  2 06:48:58 np0005466030 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct  2 06:48:58 np0005466030 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct  2 06:48:58 np0005466030 kernel: Demotion targets for Node 0: null
Oct  2 06:48:58 np0005466030 kernel: page_owner is disabled
Oct  2 06:48:58 np0005466030 kernel: Key type .fscrypt registered
Oct  2 06:48:58 np0005466030 kernel: Key type fscrypt-provisioning registered
Oct  2 06:48:58 np0005466030 kernel: Key type big_key registered
Oct  2 06:48:58 np0005466030 kernel: Key type encrypted registered
Oct  2 06:48:58 np0005466030 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct  2 06:48:58 np0005466030 kernel: Loading compiled-in module X.509 certificates
Oct  2 06:48:58 np0005466030 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  2 06:48:58 np0005466030 kernel: ima: Allocated hash algorithm: sha256
Oct  2 06:48:58 np0005466030 kernel: ima: No architecture policies found
Oct  2 06:48:58 np0005466030 kernel: evm: Initialising EVM extended attributes:
Oct  2 06:48:58 np0005466030 kernel: evm: security.selinux
Oct  2 06:48:58 np0005466030 kernel: evm: security.SMACK64 (disabled)
Oct  2 06:48:58 np0005466030 kernel: evm: security.SMACK64EXEC (disabled)
Oct  2 06:48:58 np0005466030 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct  2 06:48:58 np0005466030 kernel: evm: security.SMACK64MMAP (disabled)
Oct  2 06:48:58 np0005466030 kernel: evm: security.apparmor (disabled)
Oct  2 06:48:58 np0005466030 kernel: evm: security.ima
Oct  2 06:48:58 np0005466030 kernel: evm: security.capability
Oct  2 06:48:58 np0005466030 kernel: evm: HMAC attrs: 0x1
Oct  2 06:48:58 np0005466030 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct  2 06:48:58 np0005466030 kernel: Running certificate verification RSA selftest
Oct  2 06:48:58 np0005466030 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct  2 06:48:58 np0005466030 kernel: Running certificate verification ECDSA selftest
Oct  2 06:48:58 np0005466030 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct  2 06:48:58 np0005466030 kernel: clk: Disabling unused clocks
Oct  2 06:48:58 np0005466030 kernel: Freeing unused decrypted memory: 2028K
Oct  2 06:48:58 np0005466030 kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct  2 06:48:58 np0005466030 kernel: Write protecting the kernel read-only data: 30720k
Oct  2 06:48:58 np0005466030 kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct  2 06:48:58 np0005466030 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct  2 06:48:58 np0005466030 kernel: Run /init as init process
Oct  2 06:48:58 np0005466030 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  2 06:48:58 np0005466030 systemd: Detected virtualization kvm.
Oct  2 06:48:58 np0005466030 systemd: Detected architecture x86-64.
Oct  2 06:48:58 np0005466030 systemd: Running in initrd.
Oct  2 06:48:58 np0005466030 systemd: No hostname configured, using default hostname.
Oct  2 06:48:58 np0005466030 systemd: Hostname set to <localhost>.
Oct  2 06:48:58 np0005466030 systemd: Initializing machine ID from VM UUID.
Oct  2 06:48:58 np0005466030 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct  2 06:48:58 np0005466030 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct  2 06:48:58 np0005466030 kernel: usb 1-1: Product: QEMU USB Tablet
Oct  2 06:48:58 np0005466030 kernel: usb 1-1: Manufacturer: QEMU
Oct  2 06:48:58 np0005466030 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct  2 06:48:58 np0005466030 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct  2 06:48:58 np0005466030 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct  2 06:48:58 np0005466030 systemd: Queued start job for default target Initrd Default Target.
Oct  2 06:48:58 np0005466030 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  2 06:48:58 np0005466030 systemd: Reached target Local Encrypted Volumes.
Oct  2 06:48:58 np0005466030 systemd: Reached target Initrd /usr File System.
Oct  2 06:48:58 np0005466030 systemd: Reached target Local File Systems.
Oct  2 06:48:58 np0005466030 systemd: Reached target Path Units.
Oct  2 06:48:58 np0005466030 systemd: Reached target Slice Units.
Oct  2 06:48:58 np0005466030 systemd: Reached target Swaps.
Oct  2 06:48:58 np0005466030 systemd: Reached target Timer Units.
Oct  2 06:48:58 np0005466030 systemd: Listening on D-Bus System Message Bus Socket.
Oct  2 06:48:58 np0005466030 systemd: Listening on Journal Socket (/dev/log).
Oct  2 06:48:58 np0005466030 systemd: Listening on Journal Socket.
Oct  2 06:48:58 np0005466030 systemd: Listening on udev Control Socket.
Oct  2 06:48:58 np0005466030 systemd: Listening on udev Kernel Socket.
Oct  2 06:48:58 np0005466030 systemd: Reached target Socket Units.
Oct  2 06:48:58 np0005466030 systemd: Starting Create List of Static Device Nodes...
Oct  2 06:48:58 np0005466030 systemd: Starting Journal Service...
Oct  2 06:48:58 np0005466030 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  2 06:48:58 np0005466030 systemd: Starting Apply Kernel Variables...
Oct  2 06:48:58 np0005466030 systemd: Starting Create System Users...
Oct  2 06:48:58 np0005466030 systemd: Starting Setup Virtual Console...
Oct  2 06:48:58 np0005466030 systemd: Finished Create List of Static Device Nodes.
Oct  2 06:48:58 np0005466030 systemd: Finished Apply Kernel Variables.
Oct  2 06:48:58 np0005466030 systemd-journald[308]: Journal started
Oct  2 06:48:58 np0005466030 systemd-journald[308]: Runtime Journal (/run/log/journal/5d5cabb12c53462b89f316d4280c3e4c) is 8.0M, max 153.5M, 145.5M free.
Oct  2 06:48:58 np0005466030 systemd-sysusers[313]: Creating group 'users' with GID 100.
Oct  2 06:48:58 np0005466030 systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Oct  2 06:48:58 np0005466030 systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct  2 06:48:58 np0005466030 systemd: Started Journal Service.
Oct  2 06:48:58 np0005466030 systemd[1]: Finished Create System Users.
Oct  2 06:48:58 np0005466030 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  2 06:48:58 np0005466030 systemd[1]: Starting Create Volatile Files and Directories...
Oct  2 06:48:58 np0005466030 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  2 06:48:58 np0005466030 systemd[1]: Finished Create Volatile Files and Directories.
Oct  2 06:48:58 np0005466030 systemd[1]: Finished Setup Virtual Console.
Oct  2 06:48:58 np0005466030 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct  2 06:48:58 np0005466030 systemd[1]: Starting dracut cmdline hook...
Oct  2 06:48:58 np0005466030 dracut-cmdline[329]: dracut-9 dracut-057-102.git20250818.el9
Oct  2 06:48:58 np0005466030 dracut-cmdline[329]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  2 06:48:58 np0005466030 systemd[1]: Finished dracut cmdline hook.
Oct  2 06:48:58 np0005466030 systemd[1]: Starting dracut pre-udev hook...
Oct  2 06:48:58 np0005466030 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct  2 06:48:58 np0005466030 kernel: device-mapper: uevent: version 1.0.3
Oct  2 06:48:58 np0005466030 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct  2 06:48:58 np0005466030 kernel: RPC: Registered named UNIX socket transport module.
Oct  2 06:48:58 np0005466030 kernel: RPC: Registered udp transport module.
Oct  2 06:48:58 np0005466030 kernel: RPC: Registered tcp transport module.
Oct  2 06:48:58 np0005466030 kernel: RPC: Registered tcp-with-tls transport module.
Oct  2 06:48:58 np0005466030 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct  2 06:48:58 np0005466030 rpc.statd[445]: Version 2.5.4 starting
Oct  2 06:48:58 np0005466030 rpc.statd[445]: Initializing NSM state
Oct  2 06:48:58 np0005466030 rpc.idmapd[450]: Setting log level to 0
Oct  2 06:48:58 np0005466030 systemd[1]: Finished dracut pre-udev hook.
Oct  2 06:48:58 np0005466030 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  2 06:48:58 np0005466030 systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Oct  2 06:48:58 np0005466030 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  2 06:48:58 np0005466030 systemd[1]: Starting dracut pre-trigger hook...
Oct  2 06:48:58 np0005466030 systemd[1]: Finished dracut pre-trigger hook.
Oct  2 06:48:58 np0005466030 systemd[1]: Starting Coldplug All udev Devices...
Oct  2 06:48:58 np0005466030 systemd[1]: Created slice Slice /system/modprobe.
Oct  2 06:48:58 np0005466030 systemd[1]: Starting Load Kernel Module configfs...
Oct  2 06:48:58 np0005466030 systemd[1]: Finished Coldplug All udev Devices.
Oct  2 06:48:58 np0005466030 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  2 06:48:58 np0005466030 systemd[1]: Finished Load Kernel Module configfs.
Oct  2 06:48:58 np0005466030 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  2 06:48:58 np0005466030 systemd[1]: Reached target Network.
Oct  2 06:48:58 np0005466030 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  2 06:48:58 np0005466030 systemd[1]: Starting dracut initqueue hook...
Oct  2 06:48:58 np0005466030 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct  2 06:48:58 np0005466030 systemd-udevd[467]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 06:48:58 np0005466030 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct  2 06:48:58 np0005466030 kernel: vda: vda1
Oct  2 06:48:58 np0005466030 kernel: scsi host0: ata_piix
Oct  2 06:48:58 np0005466030 kernel: scsi host1: ata_piix
Oct  2 06:48:58 np0005466030 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct  2 06:48:58 np0005466030 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct  2 06:48:59 np0005466030 systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  2 06:48:59 np0005466030 systemd[1]: Reached target Initrd Root Device.
Oct  2 06:48:59 np0005466030 kernel: ata1: found unknown device (class 0)
Oct  2 06:48:59 np0005466030 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct  2 06:48:59 np0005466030 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct  2 06:48:59 np0005466030 systemd[1]: Mounting Kernel Configuration File System...
Oct  2 06:48:59 np0005466030 systemd[1]: Mounted Kernel Configuration File System.
Oct  2 06:48:59 np0005466030 systemd[1]: Reached target System Initialization.
Oct  2 06:48:59 np0005466030 systemd[1]: Reached target Basic System.
Oct  2 06:48:59 np0005466030 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct  2 06:48:59 np0005466030 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct  2 06:48:59 np0005466030 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct  2 06:48:59 np0005466030 systemd[1]: Finished dracut initqueue hook.
Oct  2 06:48:59 np0005466030 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  2 06:48:59 np0005466030 systemd[1]: Reached target Remote Encrypted Volumes.
Oct  2 06:48:59 np0005466030 systemd[1]: Reached target Remote File Systems.
Oct  2 06:48:59 np0005466030 systemd[1]: Starting dracut pre-mount hook...
Oct  2 06:48:59 np0005466030 systemd[1]: Finished dracut pre-mount hook.
Oct  2 06:48:59 np0005466030 systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct  2 06:48:59 np0005466030 systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Oct  2 06:48:59 np0005466030 systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  2 06:48:59 np0005466030 systemd[1]: Mounting /sysroot...
Oct  2 06:48:59 np0005466030 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct  2 06:48:59 np0005466030 kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct  2 06:48:59 np0005466030 kernel: XFS (vda1): Ending clean mount
Oct  2 06:48:59 np0005466030 systemd[1]: Mounted /sysroot.
Oct  2 06:48:59 np0005466030 systemd[1]: Reached target Initrd Root File System.
Oct  2 06:48:59 np0005466030 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct  2 06:48:59 np0005466030 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct  2 06:48:59 np0005466030 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct  2 06:48:59 np0005466030 systemd[1]: Reached target Initrd File Systems.
Oct  2 06:48:59 np0005466030 systemd[1]: Reached target Initrd Default Target.
Oct  2 06:48:59 np0005466030 systemd[1]: Starting dracut mount hook...
Oct  2 06:48:59 np0005466030 systemd[1]: Finished dracut mount hook.
Oct  2 06:48:59 np0005466030 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct  2 06:49:00 np0005466030 rpc.idmapd[450]: exiting on signal 15
Oct  2 06:49:00 np0005466030 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct  2 06:49:00 np0005466030 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped target Network.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped target Timer Units.
Oct  2 06:49:00 np0005466030 systemd[1]: dbus.socket: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct  2 06:49:00 np0005466030 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped target Initrd Default Target.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped target Basic System.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped target Initrd Root Device.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped target Initrd /usr File System.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped target Path Units.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped target Remote File Systems.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped target Slice Units.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped target Socket Units.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped target System Initialization.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped target Local File Systems.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped target Swaps.
Oct  2 06:49:00 np0005466030 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped dracut mount hook.
Oct  2 06:49:00 np0005466030 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped dracut pre-mount hook.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped target Local Encrypted Volumes.
Oct  2 06:49:00 np0005466030 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct  2 06:49:00 np0005466030 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped dracut initqueue hook.
Oct  2 06:49:00 np0005466030 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped Apply Kernel Variables.
Oct  2 06:49:00 np0005466030 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped Create Volatile Files and Directories.
Oct  2 06:49:00 np0005466030 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped Coldplug All udev Devices.
Oct  2 06:49:00 np0005466030 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped dracut pre-trigger hook.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct  2 06:49:00 np0005466030 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped Setup Virtual Console.
Oct  2 06:49:00 np0005466030 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct  2 06:49:00 np0005466030 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct  2 06:49:00 np0005466030 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Closed udev Control Socket.
Oct  2 06:49:00 np0005466030 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Closed udev Kernel Socket.
Oct  2 06:49:00 np0005466030 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped dracut pre-udev hook.
Oct  2 06:49:00 np0005466030 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped dracut cmdline hook.
Oct  2 06:49:00 np0005466030 systemd[1]: Starting Cleanup udev Database...
Oct  2 06:49:00 np0005466030 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct  2 06:49:00 np0005466030 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped Create List of Static Device Nodes.
Oct  2 06:49:00 np0005466030 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Stopped Create System Users.
Oct  2 06:49:00 np0005466030 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct  2 06:49:00 np0005466030 systemd[1]: Finished Cleanup udev Database.
Oct  2 06:49:00 np0005466030 systemd[1]: Reached target Switch Root.
Oct  2 06:49:00 np0005466030 systemd[1]: Starting Switch Root...
Oct  2 06:49:00 np0005466030 systemd[1]: Switching root.
Oct  2 06:49:00 np0005466030 systemd-journald[308]: Journal stopped
Oct  2 06:49:01 np0005466030 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct  2 06:49:01 np0005466030 kernel: audit: type=1404 audit(1759402140.441:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct  2 06:49:01 np0005466030 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 06:49:01 np0005466030 kernel: SELinux:  policy capability open_perms=1
Oct  2 06:49:01 np0005466030 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 06:49:01 np0005466030 kernel: SELinux:  policy capability always_check_network=0
Oct  2 06:49:01 np0005466030 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 06:49:01 np0005466030 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 06:49:01 np0005466030 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 06:49:01 np0005466030 kernel: audit: type=1403 audit(1759402140.619:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct  2 06:49:01 np0005466030 systemd: Successfully loaded SELinux policy in 181.443ms.
Oct  2 06:49:01 np0005466030 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.798ms.
Oct  2 06:49:01 np0005466030 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  2 06:49:01 np0005466030 systemd: Detected virtualization kvm.
Oct  2 06:49:01 np0005466030 systemd: Detected architecture x86-64.
Oct  2 06:49:01 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 06:49:01 np0005466030 systemd: initrd-switch-root.service: Deactivated successfully.
Oct  2 06:49:01 np0005466030 systemd: Stopped Switch Root.
Oct  2 06:49:01 np0005466030 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct  2 06:49:01 np0005466030 systemd: Created slice Slice /system/getty.
Oct  2 06:49:01 np0005466030 systemd: Created slice Slice /system/serial-getty.
Oct  2 06:49:01 np0005466030 systemd: Created slice Slice /system/sshd-keygen.
Oct  2 06:49:01 np0005466030 systemd: Created slice User and Session Slice.
Oct  2 06:49:01 np0005466030 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  2 06:49:01 np0005466030 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct  2 06:49:01 np0005466030 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct  2 06:49:01 np0005466030 systemd: Reached target Local Encrypted Volumes.
Oct  2 06:49:01 np0005466030 systemd: Stopped target Switch Root.
Oct  2 06:49:01 np0005466030 systemd: Stopped target Initrd File Systems.
Oct  2 06:49:01 np0005466030 systemd: Stopped target Initrd Root File System.
Oct  2 06:49:01 np0005466030 systemd: Reached target Local Integrity Protected Volumes.
Oct  2 06:49:01 np0005466030 systemd: Reached target Path Units.
Oct  2 06:49:01 np0005466030 systemd: Reached target rpc_pipefs.target.
Oct  2 06:49:01 np0005466030 systemd: Reached target Slice Units.
Oct  2 06:49:01 np0005466030 systemd: Reached target Swaps.
Oct  2 06:49:01 np0005466030 systemd: Reached target Local Verity Protected Volumes.
Oct  2 06:49:01 np0005466030 systemd: Listening on RPCbind Server Activation Socket.
Oct  2 06:49:01 np0005466030 systemd: Reached target RPC Port Mapper.
Oct  2 06:49:01 np0005466030 systemd: Listening on Process Core Dump Socket.
Oct  2 06:49:01 np0005466030 systemd: Listening on initctl Compatibility Named Pipe.
Oct  2 06:49:01 np0005466030 systemd: Listening on udev Control Socket.
Oct  2 06:49:01 np0005466030 systemd: Listening on udev Kernel Socket.
Oct  2 06:49:01 np0005466030 systemd: Mounting Huge Pages File System...
Oct  2 06:49:01 np0005466030 systemd: Mounting POSIX Message Queue File System...
Oct  2 06:49:01 np0005466030 systemd: Mounting Kernel Debug File System...
Oct  2 06:49:01 np0005466030 systemd: Mounting Kernel Trace File System...
Oct  2 06:49:01 np0005466030 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  2 06:49:01 np0005466030 systemd: Starting Create List of Static Device Nodes...
Oct  2 06:49:01 np0005466030 systemd: Starting Load Kernel Module configfs...
Oct  2 06:49:01 np0005466030 systemd: Starting Load Kernel Module drm...
Oct  2 06:49:01 np0005466030 systemd: Starting Load Kernel Module efi_pstore...
Oct  2 06:49:01 np0005466030 systemd: Starting Load Kernel Module fuse...
Oct  2 06:49:01 np0005466030 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct  2 06:49:01 np0005466030 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct  2 06:49:01 np0005466030 systemd: Stopped File System Check on Root Device.
Oct  2 06:49:01 np0005466030 systemd: Stopped Journal Service.
Oct  2 06:49:01 np0005466030 systemd: Starting Journal Service...
Oct  2 06:49:01 np0005466030 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  2 06:49:01 np0005466030 systemd: Starting Generate network units from Kernel command line...
Oct  2 06:49:01 np0005466030 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  2 06:49:01 np0005466030 systemd: Starting Remount Root and Kernel File Systems...
Oct  2 06:49:01 np0005466030 kernel: fuse: init (API version 7.37)
Oct  2 06:49:01 np0005466030 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct  2 06:49:01 np0005466030 systemd: Starting Apply Kernel Variables...
Oct  2 06:49:01 np0005466030 systemd: Starting Coldplug All udev Devices...
Oct  2 06:49:01 np0005466030 systemd: Mounted Huge Pages File System.
Oct  2 06:49:01 np0005466030 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct  2 06:49:01 np0005466030 systemd: Mounted POSIX Message Queue File System.
Oct  2 06:49:01 np0005466030 systemd: Mounted Kernel Debug File System.
Oct  2 06:49:01 np0005466030 systemd-journald[681]: Journal started
Oct  2 06:49:01 np0005466030 systemd-journald[681]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct  2 06:49:01 np0005466030 systemd[1]: Queued start job for default target Multi-User System.
Oct  2 06:49:01 np0005466030 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct  2 06:49:01 np0005466030 systemd: Started Journal Service.
Oct  2 06:49:01 np0005466030 systemd[1]: Mounted Kernel Trace File System.
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Create List of Static Device Nodes.
Oct  2 06:49:01 np0005466030 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Load Kernel Module configfs.
Oct  2 06:49:01 np0005466030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct  2 06:49:01 np0005466030 kernel: ACPI: bus type drm_connector registered
Oct  2 06:49:01 np0005466030 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Load Kernel Module fuse.
Oct  2 06:49:01 np0005466030 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Load Kernel Module drm.
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Generate network units from Kernel command line.
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Apply Kernel Variables.
Oct  2 06:49:01 np0005466030 systemd[1]: Mounting FUSE Control File System...
Oct  2 06:49:01 np0005466030 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  2 06:49:01 np0005466030 systemd[1]: Starting Rebuild Hardware Database...
Oct  2 06:49:01 np0005466030 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct  2 06:49:01 np0005466030 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct  2 06:49:01 np0005466030 systemd[1]: Starting Load/Save OS Random Seed...
Oct  2 06:49:01 np0005466030 systemd[1]: Starting Create System Users...
Oct  2 06:49:01 np0005466030 systemd[1]: Mounted FUSE Control File System.
Oct  2 06:49:01 np0005466030 systemd-journald[681]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct  2 06:49:01 np0005466030 systemd-journald[681]: Received client request to flush runtime journal.
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Load/Save OS Random Seed.
Oct  2 06:49:01 np0005466030 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Create System Users.
Oct  2 06:49:01 np0005466030 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Coldplug All udev Devices.
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  2 06:49:01 np0005466030 systemd[1]: Reached target Preparation for Local File Systems.
Oct  2 06:49:01 np0005466030 systemd[1]: Reached target Local File Systems.
Oct  2 06:49:01 np0005466030 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct  2 06:49:01 np0005466030 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct  2 06:49:01 np0005466030 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct  2 06:49:01 np0005466030 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct  2 06:49:01 np0005466030 systemd[1]: Starting Automatic Boot Loader Update...
Oct  2 06:49:01 np0005466030 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct  2 06:49:01 np0005466030 systemd[1]: Starting Create Volatile Files and Directories...
Oct  2 06:49:01 np0005466030 bootctl[697]: Couldn't find EFI system partition, skipping.
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Automatic Boot Loader Update.
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Create Volatile Files and Directories.
Oct  2 06:49:01 np0005466030 systemd[1]: Starting Security Auditing Service...
Oct  2 06:49:01 np0005466030 systemd[1]: Starting RPC Bind...
Oct  2 06:49:01 np0005466030 systemd[1]: Starting Rebuild Journal Catalog...
Oct  2 06:49:01 np0005466030 auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct  2 06:49:01 np0005466030 auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Rebuild Journal Catalog.
Oct  2 06:49:01 np0005466030 systemd[1]: Started RPC Bind.
Oct  2 06:49:01 np0005466030 augenrules[709]: /sbin/augenrules: No change
Oct  2 06:49:01 np0005466030 augenrules[725]: No rules
Oct  2 06:49:01 np0005466030 augenrules[725]: enabled 1
Oct  2 06:49:01 np0005466030 augenrules[725]: failure 1
Oct  2 06:49:01 np0005466030 augenrules[725]: pid 703
Oct  2 06:49:01 np0005466030 augenrules[725]: rate_limit 0
Oct  2 06:49:01 np0005466030 augenrules[725]: backlog_limit 8192
Oct  2 06:49:01 np0005466030 augenrules[725]: lost 0
Oct  2 06:49:01 np0005466030 augenrules[725]: backlog 4
Oct  2 06:49:01 np0005466030 augenrules[725]: backlog_wait_time 60000
Oct  2 06:49:01 np0005466030 augenrules[725]: backlog_wait_time_actual 0
Oct  2 06:49:01 np0005466030 augenrules[725]: enabled 1
Oct  2 06:49:01 np0005466030 augenrules[725]: failure 1
Oct  2 06:49:01 np0005466030 augenrules[725]: pid 703
Oct  2 06:49:01 np0005466030 augenrules[725]: rate_limit 0
Oct  2 06:49:01 np0005466030 augenrules[725]: backlog_limit 8192
Oct  2 06:49:01 np0005466030 augenrules[725]: lost 0
Oct  2 06:49:01 np0005466030 augenrules[725]: backlog 4
Oct  2 06:49:01 np0005466030 augenrules[725]: backlog_wait_time 60000
Oct  2 06:49:01 np0005466030 augenrules[725]: backlog_wait_time_actual 0
Oct  2 06:49:01 np0005466030 augenrules[725]: enabled 1
Oct  2 06:49:01 np0005466030 augenrules[725]: failure 1
Oct  2 06:49:01 np0005466030 augenrules[725]: pid 703
Oct  2 06:49:01 np0005466030 augenrules[725]: rate_limit 0
Oct  2 06:49:01 np0005466030 augenrules[725]: backlog_limit 8192
Oct  2 06:49:01 np0005466030 augenrules[725]: lost 0
Oct  2 06:49:01 np0005466030 augenrules[725]: backlog 1
Oct  2 06:49:01 np0005466030 augenrules[725]: backlog_wait_time 60000
Oct  2 06:49:01 np0005466030 augenrules[725]: backlog_wait_time_actual 0
Oct  2 06:49:01 np0005466030 systemd[1]: Started Security Auditing Service.
Oct  2 06:49:01 np0005466030 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Rebuild Hardware Database.
Oct  2 06:49:01 np0005466030 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct  2 06:49:01 np0005466030 systemd[1]: Starting Update is Completed...
Oct  2 06:49:01 np0005466030 systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Update is Completed.
Oct  2 06:49:01 np0005466030 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  2 06:49:01 np0005466030 systemd[1]: Reached target System Initialization.
Oct  2 06:49:01 np0005466030 systemd[1]: Started dnf makecache --timer.
Oct  2 06:49:01 np0005466030 systemd[1]: Started Daily rotation of log files.
Oct  2 06:49:01 np0005466030 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct  2 06:49:01 np0005466030 systemd[1]: Reached target Timer Units.
Oct  2 06:49:01 np0005466030 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct  2 06:49:01 np0005466030 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct  2 06:49:01 np0005466030 systemd[1]: Reached target Socket Units.
Oct  2 06:49:01 np0005466030 systemd-udevd[737]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 06:49:01 np0005466030 systemd[1]: Starting D-Bus System Message Bus...
Oct  2 06:49:01 np0005466030 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  2 06:49:01 np0005466030 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct  2 06:49:01 np0005466030 systemd[1]: Starting Load Kernel Module configfs...
Oct  2 06:49:01 np0005466030 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  2 06:49:01 np0005466030 systemd[1]: Finished Load Kernel Module configfs.
Oct  2 06:49:02 np0005466030 systemd[1]: Started D-Bus System Message Bus.
Oct  2 06:49:02 np0005466030 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct  2 06:49:02 np0005466030 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct  2 06:49:02 np0005466030 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct  2 06:49:02 np0005466030 systemd[1]: Reached target Basic System.
Oct  2 06:49:02 np0005466030 dbus-broker-lau[772]: Ready
Oct  2 06:49:02 np0005466030 systemd[1]: Starting NTP client/server...
Oct  2 06:49:02 np0005466030 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct  2 06:49:02 np0005466030 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct  2 06:49:02 np0005466030 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct  2 06:49:02 np0005466030 systemd[1]: Starting IPv4 firewall with iptables...
Oct  2 06:49:02 np0005466030 systemd[1]: Started irqbalance daemon.
Oct  2 06:49:02 np0005466030 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct  2 06:49:02 np0005466030 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 06:49:02 np0005466030 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 06:49:02 np0005466030 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 06:49:02 np0005466030 systemd[1]: Reached target sshd-keygen.target.
Oct  2 06:49:02 np0005466030 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct  2 06:49:02 np0005466030 systemd[1]: Reached target User and Group Name Lookups.
Oct  2 06:49:02 np0005466030 systemd[1]: Starting User Login Management...
Oct  2 06:49:02 np0005466030 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct  2 06:49:02 np0005466030 chronyd[802]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  2 06:49:02 np0005466030 chronyd[802]: Loaded 0 symmetric keys
Oct  2 06:49:02 np0005466030 chronyd[802]: Using right/UTC timezone to obtain leap second data
Oct  2 06:49:02 np0005466030 chronyd[802]: Loaded seccomp filter (level 2)
Oct  2 06:49:02 np0005466030 systemd[1]: Started NTP client/server.
Oct  2 06:49:02 np0005466030 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct  2 06:49:02 np0005466030 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct  2 06:49:02 np0005466030 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct  2 06:49:02 np0005466030 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct  2 06:49:02 np0005466030 kernel: Console: switching to colour dummy device 80x25
Oct  2 06:49:02 np0005466030 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct  2 06:49:02 np0005466030 kernel: [drm] features: -context_init
Oct  2 06:49:02 np0005466030 kernel: [drm] number of scanouts: 1
Oct  2 06:49:02 np0005466030 kernel: [drm] number of cap sets: 0
Oct  2 06:49:02 np0005466030 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct  2 06:49:02 np0005466030 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct  2 06:49:02 np0005466030 kernel: Console: switching to colour frame buffer device 128x48
Oct  2 06:49:02 np0005466030 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct  2 06:49:02 np0005466030 systemd-logind[795]: New seat seat0.
Oct  2 06:49:02 np0005466030 systemd-logind[795]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  2 06:49:02 np0005466030 systemd-logind[795]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  2 06:49:02 np0005466030 kernel: kvm_amd: TSC scaling supported
Oct  2 06:49:02 np0005466030 kernel: kvm_amd: Nested Virtualization enabled
Oct  2 06:49:02 np0005466030 kernel: kvm_amd: Nested Paging enabled
Oct  2 06:49:02 np0005466030 kernel: kvm_amd: LBR virtualization supported
Oct  2 06:49:02 np0005466030 systemd[1]: Started User Login Management.
Oct  2 06:49:02 np0005466030 iptables.init[788]: iptables: Applying firewall rules: [  OK  ]
Oct  2 06:49:02 np0005466030 systemd[1]: Finished IPv4 firewall with iptables.
Oct  2 06:49:02 np0005466030 cloud-init[841]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 02 Oct 2025 10:49:02 +0000. Up 6.35 seconds.
Oct  2 06:49:02 np0005466030 systemd[1]: run-cloud\x2dinit-tmp-tmpa7xnznch.mount: Deactivated successfully.
Oct  2 06:49:03 np0005466030 systemd[1]: Starting Hostname Service...
Oct  2 06:49:03 np0005466030 systemd[1]: Started Hostname Service.
Oct  2 06:49:03 np0005466030 systemd-hostnamed[855]: Hostname set to <np0005466030.novalocal> (static)
Oct  2 06:49:03 np0005466030 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct  2 06:49:03 np0005466030 systemd[1]: Reached target Preparation for Network.
Oct  2 06:49:03 np0005466030 systemd[1]: Starting Network Manager...
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3117] NetworkManager (version 1.54.1-1.el9) is starting... (boot:37dcc26c-0803-4cf1-8993-af1de3c457fe)
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3122] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3283] manager[0x56412ef75080]: monitoring kernel firmware directory '/lib/firmware'.
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3323] hostname: hostname: using hostnamed
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3323] hostname: static hostname changed from (none) to "np0005466030.novalocal"
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3327] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3434] manager[0x56412ef75080]: rfkill: Wi-Fi hardware radio set enabled
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3441] manager[0x56412ef75080]: rfkill: WWAN hardware radio set enabled
Oct  2 06:49:03 np0005466030 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3522] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3523] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3524] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3525] manager: Networking is enabled by state file
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3527] settings: Loaded settings plugin: keyfile (internal)
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3559] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3586] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3612] dhcp: init: Using DHCP client 'internal'
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3615] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3627] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3640] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3647] device (lo): Activation: starting connection 'lo' (754f1d75-6208-49e2-9f27-490774a22f8d)
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3656] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3659] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 06:49:03 np0005466030 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3688] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3692] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3695] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3697] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3699] device (eth0): carrier: link connected
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3703] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  2 06:49:03 np0005466030 systemd[1]: Started Network Manager.
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3708] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3714] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3718] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3719] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3720] manager: NetworkManager state is now CONNECTING
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3721] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 06:49:03 np0005466030 systemd[1]: Reached target Network.
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3726] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3729] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 06:49:03 np0005466030 systemd[1]: Starting Network Manager Wait Online...
Oct  2 06:49:03 np0005466030 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct  2 06:49:03 np0005466030 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3894] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3896] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  2 06:49:03 np0005466030 NetworkManager[859]: <info>  [1759402143.3904] device (lo): Activation: successful, device activated.
Oct  2 06:49:03 np0005466030 systemd[1]: Started GSSAPI Proxy Daemon.
Oct  2 06:49:03 np0005466030 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  2 06:49:03 np0005466030 systemd[1]: Reached target NFS client services.
Oct  2 06:49:03 np0005466030 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  2 06:49:03 np0005466030 systemd[1]: Reached target Remote File Systems.
Oct  2 06:49:03 np0005466030 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  2 06:49:04 np0005466030 NetworkManager[859]: <info>  [1759402144.7850] dhcp4 (eth0): state changed new lease, address=38.129.56.3
Oct  2 06:49:04 np0005466030 NetworkManager[859]: <info>  [1759402144.7870] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  2 06:49:04 np0005466030 NetworkManager[859]: <info>  [1759402144.7914] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 06:49:04 np0005466030 NetworkManager[859]: <info>  [1759402144.7954] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 06:49:04 np0005466030 NetworkManager[859]: <info>  [1759402144.7959] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 06:49:04 np0005466030 NetworkManager[859]: <info>  [1759402144.7968] manager: NetworkManager state is now CONNECTED_SITE
Oct  2 06:49:04 np0005466030 NetworkManager[859]: <info>  [1759402144.7977] device (eth0): Activation: successful, device activated.
Oct  2 06:49:04 np0005466030 NetworkManager[859]: <info>  [1759402144.7988] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  2 06:49:04 np0005466030 NetworkManager[859]: <info>  [1759402144.7996] manager: startup complete
Oct  2 06:49:04 np0005466030 systemd[1]: Finished Network Manager Wait Online.
Oct  2 06:49:04 np0005466030 systemd[1]: Starting Cloud-init: Network Stage...
Oct  2 06:49:05 np0005466030 cloud-init[922]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 02 Oct 2025 10:49:05 +0000. Up 8.69 seconds.
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: |  eth0  | True |         38.129.56.3          | 255.255.255.0 | global | fa:16:3e:d4:f1:89 |
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fed4:f189/64 |       .       |  link  | fa:16:3e:d4:f1:89 |
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Oct  2 06:49:05 np0005466030 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  2 06:49:06 np0005466030 cloud-init[922]: Generating public/private rsa key pair.
Oct  2 06:49:06 np0005466030 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct  2 06:49:06 np0005466030 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct  2 06:49:06 np0005466030 cloud-init[922]: The key fingerprint is:
Oct  2 06:49:06 np0005466030 cloud-init[922]: SHA256:t/nFIjxCkjlEy1767q/Z/sUxI4tYkU2z6Po7xEc+LwA root@np0005466030.novalocal
Oct  2 06:49:06 np0005466030 cloud-init[922]: The key's randomart image is:
Oct  2 06:49:06 np0005466030 cloud-init[922]: +---[RSA 3072]----+
Oct  2 06:49:06 np0005466030 cloud-init[922]: |      .     o    |
Oct  2 06:49:06 np0005466030 cloud-init[922]: |     o .   = o   |
Oct  2 06:49:06 np0005466030 cloud-init[922]: |      + . + o    |
Oct  2 06:49:06 np0005466030 cloud-init[922]: |     o =E. ..    |
Oct  2 06:49:06 np0005466030 cloud-init[922]: |      B So+o. +  |
Oct  2 06:49:06 np0005466030 cloud-init[922]: |       = *+++= + |
Oct  2 06:49:06 np0005466030 cloud-init[922]: |        =.Booo=  |
Oct  2 06:49:06 np0005466030 cloud-init[922]: |       . =.+.+.  |
Oct  2 06:49:06 np0005466030 cloud-init[922]: |       .=+*+o.   |
Oct  2 06:49:06 np0005466030 cloud-init[922]: +----[SHA256]-----+
Oct  2 06:49:06 np0005466030 cloud-init[922]: Generating public/private ecdsa key pair.
Oct  2 06:49:06 np0005466030 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct  2 06:49:06 np0005466030 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct  2 06:49:06 np0005466030 cloud-init[922]: The key fingerprint is:
Oct  2 06:49:06 np0005466030 cloud-init[922]: SHA256:4b96B/45TGMKSsePKtdhOJUbt3VgeZRt+3PEukeUr0w root@np0005466030.novalocal
Oct  2 06:49:06 np0005466030 cloud-init[922]: The key's randomart image is:
Oct  2 06:49:06 np0005466030 cloud-init[922]: +---[ECDSA 256]---+
Oct  2 06:49:06 np0005466030 cloud-init[922]: |             o.o |
Oct  2 06:49:06 np0005466030 cloud-init[922]: |            + o o|
Oct  2 06:49:06 np0005466030 cloud-init[922]: |        .. . o oo|
Oct  2 06:49:06 np0005466030 cloud-init[922]: |       .+.. . .o+|
Oct  2 06:49:06 np0005466030 cloud-init[922]: |       +S+ o . +o|
Oct  2 06:49:06 np0005466030 cloud-init[922]: |      + B.o + E.=|
Oct  2 06:49:06 np0005466030 cloud-init[922]: |     . * *.* + +o|
Oct  2 06:49:06 np0005466030 cloud-init[922]: |    . o o =.+.+ .|
Oct  2 06:49:06 np0005466030 cloud-init[922]: |     o...o.oo. . |
Oct  2 06:49:06 np0005466030 cloud-init[922]: +----[SHA256]-----+
Oct  2 06:49:06 np0005466030 cloud-init[922]: Generating public/private ed25519 key pair.
Oct  2 06:49:06 np0005466030 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct  2 06:49:06 np0005466030 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct  2 06:49:06 np0005466030 cloud-init[922]: The key fingerprint is:
Oct  2 06:49:06 np0005466030 cloud-init[922]: SHA256:1CSznXUUyniTO/9pEw3MgugSNGIawZsVF4ksbCXEv7E root@np0005466030.novalocal
Oct  2 06:49:06 np0005466030 cloud-init[922]: The key's randomart image is:
Oct  2 06:49:06 np0005466030 cloud-init[922]: +--[ED25519 256]--+
Oct  2 06:49:06 np0005466030 cloud-init[922]: | =++ooooo . ..+. |
Oct  2 06:49:06 np0005466030 cloud-init[922]: |  B.*.+  B = +   |
Oct  2 06:49:06 np0005466030 cloud-init[922]: | . X o .o.=.*o   |
Oct  2 06:49:06 np0005466030 cloud-init[922]: |  + o ... ...o+  |
Oct  2 06:49:06 np0005466030 cloud-init[922]: |     + oS   o. ..|
Oct  2 06:49:06 np0005466030 cloud-init[922]: |    E . .    o ..|
Oct  2 06:49:06 np0005466030 cloud-init[922]: |       .      . .|
Oct  2 06:49:06 np0005466030 cloud-init[922]: |               +.|
Oct  2 06:49:06 np0005466030 cloud-init[922]: |              ..o|
Oct  2 06:49:06 np0005466030 cloud-init[922]: +----[SHA256]-----+
Oct  2 06:49:06 np0005466030 systemd[1]: Finished Cloud-init: Network Stage.
Oct  2 06:49:06 np0005466030 systemd[1]: Reached target Cloud-config availability.
Oct  2 06:49:06 np0005466030 systemd[1]: Reached target Network is Online.
Oct  2 06:49:06 np0005466030 systemd[1]: Starting Cloud-init: Config Stage...
Oct  2 06:49:06 np0005466030 systemd[1]: Starting Notify NFS peers of a restart...
Oct  2 06:49:06 np0005466030 systemd[1]: Starting System Logging Service...
Oct  2 06:49:06 np0005466030 systemd[1]: Starting OpenSSH server daemon...
Oct  2 06:49:06 np0005466030 sm-notify[1005]: Version 2.5.4 starting
Oct  2 06:49:06 np0005466030 systemd[1]: Starting Permit User Sessions...
Oct  2 06:49:06 np0005466030 systemd[1]: Started Notify NFS peers of a restart.
Oct  2 06:49:06 np0005466030 systemd[1]: Started OpenSSH server daemon.
Oct  2 06:49:06 np0005466030 systemd[1]: Finished Permit User Sessions.
Oct  2 06:49:06 np0005466030 systemd[1]: Started Command Scheduler.
Oct  2 06:49:06 np0005466030 systemd[1]: Started Getty on tty1.
Oct  2 06:49:06 np0005466030 systemd[1]: Started Serial Getty on ttyS0.
Oct  2 06:49:06 np0005466030 systemd[1]: Reached target Login Prompts.
Oct  2 06:49:06 np0005466030 systemd[1]: Started System Logging Service.
Oct  2 06:49:06 np0005466030 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Oct  2 06:49:06 np0005466030 rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct  2 06:49:06 np0005466030 systemd[1]: Reached target Multi-User System.
Oct  2 06:49:06 np0005466030 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct  2 06:49:06 np0005466030 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct  2 06:49:06 np0005466030 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct  2 06:49:06 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 06:49:07 np0005466030 cloud-init[1019]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 02 Oct 2025 10:49:07 +0000. Up 10.77 seconds.
Oct  2 06:49:07 np0005466030 systemd[1]: Finished Cloud-init: Config Stage.
Oct  2 06:49:07 np0005466030 systemd[1]: Starting Cloud-init: Final Stage...
Oct  2 06:49:07 np0005466030 cloud-init[1023]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 02 Oct 2025 10:49:07 +0000. Up 11.24 seconds.
Oct  2 06:49:07 np0005466030 cloud-init[1025]: #############################################################
Oct  2 06:49:07 np0005466030 cloud-init[1026]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct  2 06:49:07 np0005466030 cloud-init[1028]: 256 SHA256:4b96B/45TGMKSsePKtdhOJUbt3VgeZRt+3PEukeUr0w root@np0005466030.novalocal (ECDSA)
Oct  2 06:49:07 np0005466030 cloud-init[1030]: 256 SHA256:1CSznXUUyniTO/9pEw3MgugSNGIawZsVF4ksbCXEv7E root@np0005466030.novalocal (ED25519)
Oct  2 06:49:07 np0005466030 cloud-init[1032]: 3072 SHA256:t/nFIjxCkjlEy1767q/Z/sUxI4tYkU2z6Po7xEc+LwA root@np0005466030.novalocal (RSA)
Oct  2 06:49:07 np0005466030 cloud-init[1033]: -----END SSH HOST KEY FINGERPRINTS-----
Oct  2 06:49:07 np0005466030 cloud-init[1034]: #############################################################
Oct  2 06:49:07 np0005466030 cloud-init[1023]: Cloud-init v. 24.4-7.el9 finished at Thu, 02 Oct 2025 10:49:07 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.44 seconds
Oct  2 06:49:07 np0005466030 systemd[1]: Finished Cloud-init: Final Stage.
Oct  2 06:49:07 np0005466030 systemd[1]: Reached target Cloud-init target.
Oct  2 06:49:07 np0005466030 systemd[1]: Startup finished in 1.552s (kernel) + 2.494s (initrd) + 7.471s (userspace) = 11.518s.
Oct  2 06:49:09 np0005466030 chronyd[802]: Selected source 158.69.247.84 (2.centos.pool.ntp.org)
Oct  2 06:49:09 np0005466030 chronyd[802]: System clock TAI offset set to 37 seconds
Oct  2 06:49:11 np0005466030 chronyd[802]: Selected source 206.108.0.133 (2.centos.pool.ntp.org)
Oct  2 06:49:12 np0005466030 irqbalance[793]: Cannot change IRQ 35 affinity: Operation not permitted
Oct  2 06:49:12 np0005466030 irqbalance[793]: IRQ 35 affinity is now unmanaged
Oct  2 06:49:12 np0005466030 irqbalance[793]: Cannot change IRQ 33 affinity: Operation not permitted
Oct  2 06:49:12 np0005466030 irqbalance[793]: IRQ 33 affinity is now unmanaged
Oct  2 06:49:12 np0005466030 irqbalance[793]: Cannot change IRQ 31 affinity: Operation not permitted
Oct  2 06:49:12 np0005466030 irqbalance[793]: IRQ 31 affinity is now unmanaged
Oct  2 06:49:12 np0005466030 irqbalance[793]: Cannot change IRQ 28 affinity: Operation not permitted
Oct  2 06:49:12 np0005466030 irqbalance[793]: IRQ 28 affinity is now unmanaged
Oct  2 06:49:12 np0005466030 irqbalance[793]: Cannot change IRQ 34 affinity: Operation not permitted
Oct  2 06:49:12 np0005466030 irqbalance[793]: IRQ 34 affinity is now unmanaged
Oct  2 06:49:12 np0005466030 irqbalance[793]: Cannot change IRQ 32 affinity: Operation not permitted
Oct  2 06:49:12 np0005466030 irqbalance[793]: IRQ 32 affinity is now unmanaged
Oct  2 06:49:12 np0005466030 irqbalance[793]: Cannot change IRQ 30 affinity: Operation not permitted
Oct  2 06:49:12 np0005466030 irqbalance[793]: IRQ 30 affinity is now unmanaged
Oct  2 06:49:12 np0005466030 irqbalance[793]: Cannot change IRQ 29 affinity: Operation not permitted
Oct  2 06:49:12 np0005466030 irqbalance[793]: IRQ 29 affinity is now unmanaged
Oct  2 06:49:14 np0005466030 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 06:49:33 np0005466030 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 07:02:12 np0005466030 systemd[1]: Created slice User Slice of UID 1000.
Oct  2 07:02:12 np0005466030 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  2 07:02:12 np0005466030 systemd-logind[795]: New session 1 of user zuul.
Oct  2 07:02:13 np0005466030 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  2 07:02:13 np0005466030 systemd[1]: Starting User Manager for UID 1000...
Oct  2 07:02:13 np0005466030 systemd[1082]: Queued start job for default target Main User Target.
Oct  2 07:02:13 np0005466030 systemd[1082]: Created slice User Application Slice.
Oct  2 07:02:13 np0005466030 systemd[1082]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 07:02:13 np0005466030 systemd[1082]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 07:02:13 np0005466030 systemd[1082]: Reached target Paths.
Oct  2 07:02:13 np0005466030 systemd[1082]: Reached target Timers.
Oct  2 07:02:13 np0005466030 systemd[1082]: Starting D-Bus User Message Bus Socket...
Oct  2 07:02:13 np0005466030 systemd[1082]: Starting Create User's Volatile Files and Directories...
Oct  2 07:02:13 np0005466030 systemd[1082]: Finished Create User's Volatile Files and Directories.
Oct  2 07:02:13 np0005466030 systemd[1082]: Listening on D-Bus User Message Bus Socket.
Oct  2 07:02:13 np0005466030 systemd[1082]: Reached target Sockets.
Oct  2 07:02:13 np0005466030 systemd[1082]: Reached target Basic System.
Oct  2 07:02:13 np0005466030 systemd[1082]: Reached target Main User Target.
Oct  2 07:02:13 np0005466030 systemd[1082]: Startup finished in 124ms.
Oct  2 07:02:13 np0005466030 systemd[1]: Started User Manager for UID 1000.
Oct  2 07:02:13 np0005466030 systemd[1]: Started Session 1 of User zuul.
Oct  2 07:02:13 np0005466030 python3[1165]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:02:17 np0005466030 python3[1193]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:02:25 np0005466030 python3[1251]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:02:26 np0005466030 python3[1291]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct  2 07:02:28 np0005466030 python3[1317]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDdHOgImyIPDgNWnaMxITEPAN7NVtxzu14ISD59Z0krS9o0Yef/lJRBJcwAtbdZl6thmmrmd+i6nLhYv58i91I9BglmtPCtwZOV73PkKRHZ//oaGwnMih4wB70pyMygFWOrMfCeHRbPChFn2mwctskvcL515U/KpRwUH6WlesAnHltNt9DFUSKyQADMR0GdPnnDw8gLOq9DBkiwlfGxOV1vxXnsJgtCzmcYqLfOMUyT5CJybnG3mpE2Rfc4aNSBi+3/P2Age5mBEwGZMXQU8BTcxVemx04TNqPzeSvzH96Xtnm6b/EZ1nBpVZVpqJLubsNcY65zoE9DNXQJGgx09voZuQytvk2ksubtwSyX2khxwkaAPUuGWesuCs/pP/g0634ox7wm21U4hFzvMni4TFc4otDkcIsKet/KbBKdvGkk7IVb08Z3k8S96poyWuD8sK4zHLKur4EKbCU4aodgLm2RXTqJN6pLISaY3GAnRN94PvuTmeqA+tMo1IfiAgcif0k= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:28 np0005466030 python3[1341]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:29 np0005466030 python3[1440]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:02:29 np0005466030 python3[1511]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759402949.0350459-253-211783638593079/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=e1f611fafbac4ef993faa9123ba23e77_id_rsa follow=False checksum=923ba278c698bf654f2c8fd44aaead32908a4e27 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:30 np0005466030 python3[1634]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:02:30 np0005466030 python3[1705]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759402950.114852-307-185486104353466/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=e1f611fafbac4ef993faa9123ba23e77_id_rsa.pub follow=False checksum=9747c9704720df2c89f1c3bf3782f9b9dd59b88f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:32 np0005466030 python3[1753]: ansible-ping Invoked with data=pong
Oct  2 07:02:33 np0005466030 python3[1777]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:02:36 np0005466030 python3[1835]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct  2 07:02:37 np0005466030 python3[1867]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:37 np0005466030 python3[1891]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:38 np0005466030 python3[1915]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:38 np0005466030 python3[1939]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:38 np0005466030 python3[1963]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:38 np0005466030 python3[1987]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:40 np0005466030 python3[2013]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:41 np0005466030 python3[2091]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:02:41 np0005466030 python3[2164]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759402960.9564967-32-186289017366440/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:42 np0005466030 python3[2212]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:42 np0005466030 python3[2236]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:43 np0005466030 python3[2260]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:43 np0005466030 python3[2284]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:43 np0005466030 python3[2308]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:43 np0005466030 python3[2332]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:44 np0005466030 python3[2356]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:44 np0005466030 python3[2380]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:44 np0005466030 python3[2404]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:45 np0005466030 python3[2428]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:45 np0005466030 python3[2452]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:45 np0005466030 python3[2476]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:45 np0005466030 python3[2500]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:46 np0005466030 python3[2524]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:46 np0005466030 python3[2548]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:46 np0005466030 python3[2572]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:47 np0005466030 python3[2596]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:47 np0005466030 python3[2620]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:47 np0005466030 python3[2644]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:47 np0005466030 python3[2668]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:48 np0005466030 python3[2692]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:48 np0005466030 python3[2716]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:48 np0005466030 python3[2740]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:49 np0005466030 python3[2764]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:49 np0005466030 python3[2788]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:49 np0005466030 python3[2812]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:02:52 np0005466030 python3[2838]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  2 07:02:52 np0005466030 systemd[1]: Starting Time & Date Service...
Oct  2 07:02:52 np0005466030 systemd[1]: Started Time & Date Service.
Oct  2 07:02:52 np0005466030 systemd-timedated[2840]: Changed time zone to 'UTC' (UTC).
Oct  2 07:02:52 np0005466030 python3[2869]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:53 np0005466030 python3[2945]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:02:53 np0005466030 python3[3016]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759402973.1979249-252-26345957052802/source _original_basename=tmpaz4hyegb follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:54 np0005466030 python3[3116]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:02:54 np0005466030 python3[3187]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759402974.0780747-302-70111321585757/source _original_basename=tmpc70jnsba follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:55 np0005466030 python3[3289]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:02:56 np0005466030 python3[3362]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759402975.2202332-382-182090364043033/source _original_basename=tmp_p7d9vbr follow=False checksum=543712d4d707f51827e90f243e5a01210e719e2f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:56 np0005466030 python3[3410]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:02:56 np0005466030 python3[3436]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:02:57 np0005466030 python3[3516]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:02:57 np0005466030 python3[3589]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759402977.047506-453-154571928777526/source _original_basename=tmp4knax7b7 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:02:58 np0005466030 python3[3640]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-634d-add4-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:02:59 np0005466030 python3[3668]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-634d-add4-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct  2 07:03:00 np0005466030 python3[3696]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:03:22 np0005466030 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  2 07:03:28 np0005466030 python3[3724]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:04:10 np0005466030 systemd[1]: Starting Cleanup of Temporary Directories...
Oct  2 07:04:10 np0005466030 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct  2 07:04:10 np0005466030 systemd[1]: Finished Cleanup of Temporary Directories.
Oct  2 07:04:10 np0005466030 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct  2 07:04:28 np0005466030 systemd-logind[795]: Session 1 logged out. Waiting for processes to exit.
Oct  2 07:04:34 np0005466030 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  2 07:04:34 np0005466030 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct  2 07:04:34 np0005466030 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct  2 07:04:34 np0005466030 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct  2 07:04:34 np0005466030 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct  2 07:04:34 np0005466030 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct  2 07:04:34 np0005466030 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct  2 07:04:34 np0005466030 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct  2 07:04:34 np0005466030 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct  2 07:04:34 np0005466030 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct  2 07:04:34 np0005466030 NetworkManager[859]: <info>  [1759403074.0865] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  2 07:04:34 np0005466030 systemd-udevd[3729]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:04:34 np0005466030 NetworkManager[859]: <info>  [1759403074.1041] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:04:34 np0005466030 NetworkManager[859]: <info>  [1759403074.1070] settings: (eth1): created default wired connection 'Wired connection 1'
Oct  2 07:04:34 np0005466030 NetworkManager[859]: <info>  [1759403074.1075] device (eth1): carrier: link connected
Oct  2 07:04:34 np0005466030 NetworkManager[859]: <info>  [1759403074.1077] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  2 07:04:34 np0005466030 NetworkManager[859]: <info>  [1759403074.1084] policy: auto-activating connection 'Wired connection 1' (6e2af74a-3979-3958-b10b-5fcb5c84b968)
Oct  2 07:04:34 np0005466030 NetworkManager[859]: <info>  [1759403074.1089] device (eth1): Activation: starting connection 'Wired connection 1' (6e2af74a-3979-3958-b10b-5fcb5c84b968)
Oct  2 07:04:34 np0005466030 NetworkManager[859]: <info>  [1759403074.1090] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:04:34 np0005466030 NetworkManager[859]: <info>  [1759403074.1093] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:04:34 np0005466030 NetworkManager[859]: <info>  [1759403074.1098] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:04:34 np0005466030 NetworkManager[859]: <info>  [1759403074.1103] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  2 07:04:34 np0005466030 systemd[1082]: Starting Mark boot as successful...
Oct  2 07:04:34 np0005466030 systemd[1082]: Finished Mark boot as successful.
Oct  2 07:04:35 np0005466030 systemd-logind[795]: New session 3 of user zuul.
Oct  2 07:04:35 np0005466030 systemd[1]: Started Session 3 of User zuul.
Oct  2 07:04:35 np0005466030 python3[3761]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-98ee-a261-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:04:45 np0005466030 python3[3844]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:04:45 np0005466030 python3[3917]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759403085.1418831-155-105764592428768/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=7f4e5c46bae12badf8ec6f27cb3609054cb28e27 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:04:46 np0005466030 python3[3967]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:04:46 np0005466030 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  2 07:04:46 np0005466030 systemd[1]: Stopped Network Manager Wait Online.
Oct  2 07:04:46 np0005466030 systemd[1]: Stopping Network Manager Wait Online...
Oct  2 07:04:46 np0005466030 NetworkManager[859]: <info>  [1759403086.4966] caught SIGTERM, shutting down normally.
Oct  2 07:04:46 np0005466030 systemd[1]: Stopping Network Manager...
Oct  2 07:04:46 np0005466030 NetworkManager[859]: <info>  [1759403086.4985] dhcp4 (eth0): canceled DHCP transaction
Oct  2 07:04:46 np0005466030 NetworkManager[859]: <info>  [1759403086.4985] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 07:04:46 np0005466030 NetworkManager[859]: <info>  [1759403086.4986] dhcp4 (eth0): state changed no lease
Oct  2 07:04:46 np0005466030 NetworkManager[859]: <info>  [1759403086.4989] manager: NetworkManager state is now CONNECTING
Oct  2 07:04:46 np0005466030 NetworkManager[859]: <info>  [1759403086.5083] dhcp4 (eth1): canceled DHCP transaction
Oct  2 07:04:46 np0005466030 NetworkManager[859]: <info>  [1759403086.5084] dhcp4 (eth1): state changed no lease
Oct  2 07:04:46 np0005466030 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 07:04:46 np0005466030 NetworkManager[859]: <info>  [1759403086.5139] exiting (success)
Oct  2 07:04:46 np0005466030 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 07:04:46 np0005466030 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  2 07:04:46 np0005466030 systemd[1]: Stopped Network Manager.
Oct  2 07:04:46 np0005466030 systemd[1]: NetworkManager.service: Consumed 5.370s CPU time, 10.1M memory peak.
Oct  2 07:04:46 np0005466030 systemd[1]: Starting Network Manager...
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.5987] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:37dcc26c-0803-4cf1-8993-af1de3c457fe)
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.5989] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6065] manager[0x561dc9c36070]: monitoring kernel firmware directory '/lib/firmware'.
Oct  2 07:04:46 np0005466030 systemd[1]: Starting Hostname Service...
Oct  2 07:04:46 np0005466030 systemd[1]: Started Hostname Service.
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6804] hostname: hostname: using hostnamed
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6808] hostname: static hostname changed from (none) to "np0005466030.novalocal"
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6814] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6820] manager[0x561dc9c36070]: rfkill: Wi-Fi hardware radio set enabled
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6821] manager[0x561dc9c36070]: rfkill: WWAN hardware radio set enabled
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6863] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6863] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6864] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6865] manager: Networking is enabled by state file
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6869] settings: Loaded settings plugin: keyfile (internal)
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6875] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6909] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6921] dhcp: init: Using DHCP client 'internal'
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6925] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6933] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6941] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6953] device (lo): Activation: starting connection 'lo' (754f1d75-6208-49e2-9f27-490774a22f8d)
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6964] device (eth0): carrier: link connected
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6970] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6978] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6979] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6988] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.6999] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7008] device (eth1): carrier: link connected
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7015] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7024] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (6e2af74a-3979-3958-b10b-5fcb5c84b968) (indicated)
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7025] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7032] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7041] device (eth1): Activation: starting connection 'Wired connection 1' (6e2af74a-3979-3958-b10b-5fcb5c84b968)
Oct  2 07:04:46 np0005466030 systemd[1]: Started Network Manager.
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7053] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7059] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7064] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7068] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7073] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7089] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7094] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7097] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7101] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7112] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7117] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7134] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7139] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7158] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7162] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7169] device (lo): Activation: successful, device activated.
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7187] dhcp4 (eth0): state changed new lease, address=38.129.56.3
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7194] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  2 07:04:46 np0005466030 systemd[1]: Starting Network Manager Wait Online...
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7260] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7275] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7277] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7281] manager: NetworkManager state is now CONNECTED_SITE
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7284] device (eth0): Activation: successful, device activated.
Oct  2 07:04:46 np0005466030 NetworkManager[3976]: <info>  [1759403086.7290] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  2 07:04:47 np0005466030 python3[4052]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-98ee-a261-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:04:56 np0005466030 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 07:05:16 np0005466030 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <info>  [1759403132.3830] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  2 07:05:32 np0005466030 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 07:05:32 np0005466030 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <info>  [1759403132.4092] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <info>  [1759403132.4095] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <info>  [1759403132.4104] device (eth1): Activation: successful, device activated.
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <info>  [1759403132.4111] manager: startup complete
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <info>  [1759403132.4113] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <warn>  [1759403132.4118] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <info>  [1759403132.4128] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct  2 07:05:32 np0005466030 systemd[1]: Finished Network Manager Wait Online.
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <info>  [1759403132.4243] dhcp4 (eth1): canceled DHCP transaction
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <info>  [1759403132.4244] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <info>  [1759403132.4244] dhcp4 (eth1): state changed no lease
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <info>  [1759403132.4256] policy: auto-activating connection 'ci-private-network' (f0e350ee-ba0f-53f3-aecd-b1bd05c74472)
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <info>  [1759403132.4260] device (eth1): Activation: starting connection 'ci-private-network' (f0e350ee-ba0f-53f3-aecd-b1bd05c74472)
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <info>  [1759403132.4261] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <info>  [1759403132.4264] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <info>  [1759403132.4271] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <info>  [1759403132.4279] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <info>  [1759403132.4324] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <info>  [1759403132.4325] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:05:32 np0005466030 NetworkManager[3976]: <info>  [1759403132.4331] device (eth1): Activation: successful, device activated.
Oct  2 07:05:42 np0005466030 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 07:05:47 np0005466030 systemd[1]: session-3.scope: Deactivated successfully.
Oct  2 07:05:47 np0005466030 systemd[1]: session-3.scope: Consumed 1.726s CPU time.
Oct  2 07:05:47 np0005466030 systemd-logind[795]: Session 3 logged out. Waiting for processes to exit.
Oct  2 07:05:47 np0005466030 systemd-logind[795]: Removed session 3.
Oct  2 07:06:27 np0005466030 systemd-logind[795]: New session 4 of user zuul.
Oct  2 07:06:27 np0005466030 systemd[1]: Started Session 4 of User zuul.
Oct  2 07:06:28 np0005466030 python3[4163]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:06:28 np0005466030 python3[4236]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759403187.7121105-373-261862792232779/source _original_basename=tmpembgra3w follow=False checksum=c919e886c60bd4fe64e018977b3d3fbde98f63d3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:06:31 np0005466030 systemd[1]: session-4.scope: Deactivated successfully.
Oct  2 07:06:31 np0005466030 systemd-logind[795]: Session 4 logged out. Waiting for processes to exit.
Oct  2 07:06:31 np0005466030 systemd-logind[795]: Removed session 4.
Oct  2 07:08:10 np0005466030 systemd[1082]: Created slice User Background Tasks Slice.
Oct  2 07:08:10 np0005466030 systemd[1082]: Starting Cleanup of User's Temporary Files and Directories...
Oct  2 07:08:10 np0005466030 systemd[1082]: Finished Cleanup of User's Temporary Files and Directories.
Oct  2 07:12:53 np0005466030 systemd-logind[795]: New session 5 of user zuul.
Oct  2 07:12:53 np0005466030 systemd[1]: Started Session 5 of User zuul.
Oct  2 07:12:54 np0005466030 python3[4295]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-383d-cd01-000000000cac-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:12:54 np0005466030 python3[4324]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:12:55 np0005466030 python3[4350]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:12:55 np0005466030 python3[4376]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:12:55 np0005466030 python3[4402]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:12:56 np0005466030 python3[4428]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:12:56 np0005466030 python3[4428]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct  2 07:12:56 np0005466030 python3[4454]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 07:12:56 np0005466030 systemd[1]: Reloading.
Oct  2 07:12:56 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:12:57 np0005466030 systemd[1]: Starting dnf makecache...
Oct  2 07:12:58 np0005466030 dnf[4485]: Failed determining last makecache time.
Oct  2 07:12:58 np0005466030 python3[4511]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct  2 07:12:58 np0005466030 dnf[4485]: CentOS Stream 9 - BaseOS                         41 kB/s | 6.7 kB     00:00
Oct  2 07:12:59 np0005466030 python3[4543]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:12:59 np0005466030 dnf[4485]: CentOS Stream 9 - AppStream                      74 kB/s | 6.8 kB     00:00
Oct  2 07:12:59 np0005466030 python3[4572]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:12:59 np0005466030 python3[4600]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:13:00 np0005466030 dnf[4485]: CentOS Stream 9 - CRB                            71 kB/s | 6.6 kB     00:00
Oct  2 07:13:00 np0005466030 python3[4628]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:13:00 np0005466030 dnf[4485]: CentOS Stream 9 - Extras packages                79 kB/s | 8.0 kB     00:00
Oct  2 07:13:00 np0005466030 dnf[4485]: Metadata cache created.
Oct  2 07:13:00 np0005466030 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  2 07:13:00 np0005466030 systemd[1]: Finished dnf makecache.
Oct  2 07:13:00 np0005466030 python3[4657]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-383d-cd01-000000000cb2-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:13:01 np0005466030 python3[4688]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:13:04 np0005466030 systemd[1]: session-5.scope: Deactivated successfully.
Oct  2 07:13:04 np0005466030 systemd[1]: session-5.scope: Consumed 3.506s CPU time.
Oct  2 07:13:04 np0005466030 systemd-logind[795]: Session 5 logged out. Waiting for processes to exit.
Oct  2 07:13:04 np0005466030 systemd-logind[795]: Removed session 5.
Oct  2 07:13:05 np0005466030 systemd-logind[795]: New session 6 of user zuul.
Oct  2 07:13:05 np0005466030 systemd[1]: Started Session 6 of User zuul.
Oct  2 07:13:06 np0005466030 python3[4721]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  2 07:13:34 np0005466030 kernel: SELinux:  Converting 365 SID table entries...
Oct  2 07:13:35 np0005466030 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:13:35 np0005466030 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:13:35 np0005466030 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:13:35 np0005466030 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:13:35 np0005466030 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:13:35 np0005466030 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:13:35 np0005466030 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:13:50 np0005466030 kernel: SELinux:  Converting 365 SID table entries...
Oct  2 07:13:50 np0005466030 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:13:50 np0005466030 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:13:50 np0005466030 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:13:50 np0005466030 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:13:50 np0005466030 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:13:50 np0005466030 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:13:50 np0005466030 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:14:01 np0005466030 kernel: SELinux:  Converting 365 SID table entries...
Oct  2 07:14:01 np0005466030 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:14:01 np0005466030 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:14:01 np0005466030 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:14:01 np0005466030 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:14:01 np0005466030 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:14:01 np0005466030 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:14:01 np0005466030 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:14:03 np0005466030 setsebool[4784]: The virt_use_nfs policy boolean was changed to 1 by root
Oct  2 07:14:03 np0005466030 setsebool[4784]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct  2 07:14:14 np0005466030 kernel: SELinux:  Converting 368 SID table entries...
Oct  2 07:14:14 np0005466030 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:14:14 np0005466030 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:14:14 np0005466030 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:14:14 np0005466030 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:14:14 np0005466030 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:14:14 np0005466030 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:14:14 np0005466030 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:14:38 np0005466030 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  2 07:14:38 np0005466030 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:14:38 np0005466030 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:14:38 np0005466030 systemd[1]: Reloading.
Oct  2 07:14:39 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:14:39 np0005466030 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:14:41 np0005466030 systemd[1]: Starting PackageKit Daemon...
Oct  2 07:14:41 np0005466030 systemd[1]: Starting Authorization Manager...
Oct  2 07:14:41 np0005466030 polkitd[6956]: Started polkitd version 0.117
Oct  2 07:14:42 np0005466030 systemd[1]: Started Authorization Manager.
Oct  2 07:14:42 np0005466030 systemd[1]: Started PackageKit Daemon.
Oct  2 07:14:43 np0005466030 python3[8046]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-52da-2fd4-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:14:44 np0005466030 kernel: evm: overlay not supported
Oct  2 07:14:44 np0005466030 systemd[1082]: Starting D-Bus User Message Bus...
Oct  2 07:14:44 np0005466030 dbus-broker-launch[8913]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct  2 07:14:44 np0005466030 dbus-broker-launch[8913]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct  2 07:14:44 np0005466030 systemd[1082]: Started D-Bus User Message Bus.
Oct  2 07:14:44 np0005466030 dbus-broker-lau[8913]: Ready
Oct  2 07:14:44 np0005466030 systemd[1082]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  2 07:14:44 np0005466030 systemd[1082]: Created slice Slice /user.
Oct  2 07:14:44 np0005466030 systemd[1082]: podman-8791.scope: unit configures an IP firewall, but not running as root.
Oct  2 07:14:44 np0005466030 systemd[1082]: (This warning is only shown for the first unit using IP firewalling.)
Oct  2 07:14:44 np0005466030 systemd[1082]: Started podman-8791.scope.
Oct  2 07:14:45 np0005466030 systemd[1082]: Started podman-pause-72f6f642.scope.
Oct  2 07:14:45 np0005466030 python3[9591]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.136:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.136:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:14:46 np0005466030 systemd[1]: session-6.scope: Deactivated successfully.
Oct  2 07:14:46 np0005466030 systemd[1]: session-6.scope: Consumed 1min 4.399s CPU time.
Oct  2 07:14:46 np0005466030 systemd-logind[795]: Session 6 logged out. Waiting for processes to exit.
Oct  2 07:14:46 np0005466030 systemd-logind[795]: Removed session 6.
Oct  2 07:15:11 np0005466030 systemd-logind[795]: New session 7 of user zuul.
Oct  2 07:15:11 np0005466030 systemd[1]: Started Session 7 of User zuul.
Oct  2 07:15:11 np0005466030 python3[17889]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNsTHDrxDbYOuMoD7926Svbn4szasIp+JBKPLUL9nua54ooI00ganN5oLgNtcW5XoiwXGhTq8QJyxUTZ1zUgiQs= zuul@np0005466028.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:15:11 np0005466030 python3[18033]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNsTHDrxDbYOuMoD7926Svbn4szasIp+JBKPLUL9nua54ooI00ganN5oLgNtcW5XoiwXGhTq8QJyxUTZ1zUgiQs= zuul@np0005466028.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:15:12 np0005466030 python3[18319]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005466030.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct  2 07:15:13 np0005466030 python3[18581]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNsTHDrxDbYOuMoD7926Svbn4szasIp+JBKPLUL9nua54ooI00ganN5oLgNtcW5XoiwXGhTq8QJyxUTZ1zUgiQs= zuul@np0005466028.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  2 07:15:14 np0005466030 python3[18830]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:15:14 np0005466030 python3[19101]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759403713.8318872-169-209930373179287/source _original_basename=tmpkuinfmsf follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:15:15 np0005466030 python3[19417]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Oct  2 07:15:15 np0005466030 systemd[1]: Starting Hostname Service...
Oct  2 07:15:15 np0005466030 systemd[1]: Started Hostname Service.
Oct  2 07:15:15 np0005466030 systemd-hostnamed[19531]: Changed pretty hostname to 'compute-1'
Oct  2 07:15:15 np0005466030 systemd-hostnamed[19531]: Hostname set to <compute-1> (static)
Oct  2 07:15:15 np0005466030 NetworkManager[3976]: <info>  [1759403715.6796] hostname: static hostname changed from "np0005466030.novalocal" to "compute-1"
Oct  2 07:15:15 np0005466030 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 07:15:15 np0005466030 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 07:15:15 np0005466030 systemd[1]: session-7.scope: Deactivated successfully.
Oct  2 07:15:15 np0005466030 systemd[1]: session-7.scope: Consumed 2.360s CPU time.
Oct  2 07:15:15 np0005466030 systemd-logind[795]: Session 7 logged out. Waiting for processes to exit.
Oct  2 07:15:15 np0005466030 systemd-logind[795]: Removed session 7.
Oct  2 07:15:25 np0005466030 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 07:15:40 np0005466030 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:15:40 np0005466030 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:15:40 np0005466030 systemd[1]: man-db-cache-update.service: Consumed 1min 4.418s CPU time.
Oct  2 07:15:40 np0005466030 systemd[1]: run-rf7cbedc196ca45e1a75ea832b5edd2bc.service: Deactivated successfully.
Oct  2 07:15:45 np0005466030 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 07:19:14 np0005466030 systemd-logind[795]: New session 8 of user zuul.
Oct  2 07:19:14 np0005466030 systemd[1]: Started Session 8 of User zuul.
Oct  2 07:19:15 np0005466030 python3[26692]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:19:16 np0005466030 python3[26808]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:19:17 np0005466030 python3[26881]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.5979817-30636-14518536914006/source mode=0755 _original_basename=delorean.repo follow=False checksum=bb4c2ff9dad546f135d54d9729ea11b84117755d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:19:17 np0005466030 python3[26907]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:19:18 np0005466030 python3[26980]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.5979817-30636-14518536914006/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:19:18 np0005466030 python3[27006]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:19:18 np0005466030 python3[27079]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.5979817-30636-14518536914006/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:19:18 np0005466030 python3[27105]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:19:19 np0005466030 python3[27178]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.5979817-30636-14518536914006/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:19:19 np0005466030 python3[27204]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:19:19 np0005466030 python3[27277]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.5979817-30636-14518536914006/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:19:20 np0005466030 python3[27303]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:19:20 np0005466030 python3[27376]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.5979817-30636-14518536914006/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:19:20 np0005466030 python3[27402]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:19:21 np0005466030 python3[27475]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.5979817-30636-14518536914006/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=d911291791b114a72daf18f370e91cb1ae300933 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:19:33 np0005466030 python3[27523]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:19:47 np0005466030 systemd[1]: packagekit.service: Deactivated successfully.
Oct  2 07:24:32 np0005466030 systemd[1]: session-8.scope: Deactivated successfully.
Oct  2 07:24:32 np0005466030 systemd[1]: session-8.scope: Consumed 5.101s CPU time.
Oct  2 07:24:32 np0005466030 systemd-logind[795]: Session 8 logged out. Waiting for processes to exit.
Oct  2 07:24:32 np0005466030 systemd-logind[795]: Removed session 8.
Oct  2 07:34:07 np0005466030 systemd-logind[795]: New session 9 of user zuul.
Oct  2 07:34:07 np0005466030 systemd[1]: Started Session 9 of User zuul.
Oct  2 07:34:08 np0005466030 python3.9[27686]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:34:09 np0005466030 python3.9[27867]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:34:18 np0005466030 systemd[1]: session-9.scope: Deactivated successfully.
Oct  2 07:34:18 np0005466030 systemd[1]: session-9.scope: Consumed 8.988s CPU time.
Oct  2 07:34:18 np0005466030 systemd-logind[795]: Session 9 logged out. Waiting for processes to exit.
Oct  2 07:34:18 np0005466030 systemd-logind[795]: Removed session 9.
Oct  2 07:34:33 np0005466030 systemd-logind[795]: New session 10 of user zuul.
Oct  2 07:34:33 np0005466030 systemd[1]: Started Session 10 of User zuul.
Oct  2 07:34:34 np0005466030 python3.9[28077]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  2 07:34:35 np0005466030 python3.9[28251]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:34:36 np0005466030 python3.9[28403]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:34:37 np0005466030 python3.9[28556]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:34:38 np0005466030 python3.9[28708]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:34:39 np0005466030 python3.9[28860]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:34:39 np0005466030 python3.9[28983]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404878.6571164-183-36581701069383/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:34:40 np0005466030 python3.9[29135]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:34:41 np0005466030 python3.9[29291]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:34:42 np0005466030 python3.9[29441]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:34:48 np0005466030 python3.9[29696]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:34:49 np0005466030 python3.9[29846]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:34:50 np0005466030 python3.9[30000]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:34:51 np0005466030 python3.9[30158]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:34:52 np0005466030 python3.9[30242]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:35:38 np0005466030 systemd[1]: Reloading.
Oct  2 07:35:38 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:35:38 np0005466030 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct  2 07:35:38 np0005466030 systemd[1]: Reloading.
Oct  2 07:35:38 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:35:39 np0005466030 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct  2 07:35:39 np0005466030 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct  2 07:35:39 np0005466030 systemd[1]: Reloading.
Oct  2 07:35:39 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:35:39 np0005466030 systemd[1]: Listening on LVM2 poll daemon socket.
Oct  2 07:35:39 np0005466030 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct  2 07:35:39 np0005466030 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct  2 07:35:39 np0005466030 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct  2 07:36:51 np0005466030 kernel: SELinux:  Converting 2713 SID table entries...
Oct  2 07:36:51 np0005466030 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:36:51 np0005466030 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:36:51 np0005466030 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:36:51 np0005466030 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:36:51 np0005466030 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:36:51 np0005466030 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:36:51 np0005466030 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:36:51 np0005466030 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct  2 07:36:51 np0005466030 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:36:51 np0005466030 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:36:51 np0005466030 systemd[1]: Reloading.
Oct  2 07:36:51 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:36:51 np0005466030 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:36:51 np0005466030 systemd[1]: Starting PackageKit Daemon...
Oct  2 07:36:52 np0005466030 systemd[1]: Started PackageKit Daemon.
Oct  2 07:36:52 np0005466030 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:36:52 np0005466030 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:36:52 np0005466030 systemd[1]: man-db-cache-update.service: Consumed 1.251s CPU time.
Oct  2 07:36:52 np0005466030 systemd[1]: run-r46e943053d2a44bf92f9795910d703a1.service: Deactivated successfully.
Oct  2 07:36:52 np0005466030 python3.9[31760]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:36:55 np0005466030 python3.9[32041]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  2 07:36:56 np0005466030 python3.9[32193]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  2 07:36:59 np0005466030 python3.9[32347]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:37:00 np0005466030 python3.9[32499]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  2 07:37:01 np0005466030 python3.9[32651]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:37:02 np0005466030 python3.9[32803]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:37:02 np0005466030 python3.9[32926]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405021.9074333-645-204581355465243/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:37:07 np0005466030 python3.9[33079]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  2 07:37:08 np0005466030 python3.9[33232]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 07:37:08 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 07:37:09 np0005466030 python3.9[33391]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  2 07:37:10 np0005466030 python3.9[33551]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  2 07:37:11 np0005466030 python3.9[33704]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 07:37:12 np0005466030 python3.9[33862]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  2 07:37:13 np0005466030 python3.9[34014]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:37:15 np0005466030 python3.9[34167]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:37:16 np0005466030 python3.9[34319]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:37:17 np0005466030 python3.9[34442]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405036.151181-930-18834669000000/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:37:18 np0005466030 python3.9[34594]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:37:18 np0005466030 systemd[1]: Starting Load Kernel Modules...
Oct  2 07:37:18 np0005466030 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct  2 07:37:18 np0005466030 kernel: Bridge firewalling registered
Oct  2 07:37:18 np0005466030 systemd-modules-load[34598]: Inserted module 'br_netfilter'
Oct  2 07:37:18 np0005466030 systemd[1]: Finished Load Kernel Modules.
Oct  2 07:37:18 np0005466030 python3.9[34753]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:37:19 np0005466030 python3.9[34876]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405038.5456245-999-86428888645326/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:37:20 np0005466030 python3.9[35028]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:37:24 np0005466030 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct  2 07:37:24 np0005466030 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct  2 07:37:24 np0005466030 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:37:24 np0005466030 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:37:24 np0005466030 systemd[1]: Reloading.
Oct  2 07:37:24 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:37:25 np0005466030 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:37:26 np0005466030 python3.9[36372]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:37:27 np0005466030 python3.9[37471]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  2 07:37:28 np0005466030 python3.9[38269]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:37:28 np0005466030 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:37:28 np0005466030 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:37:28 np0005466030 systemd[1]: man-db-cache-update.service: Consumed 4.685s CPU time.
Oct  2 07:37:28 np0005466030 systemd[1]: run-re2169240e7bd48b4b6cf77e63f710d3a.service: Deactivated successfully.
Oct  2 07:37:29 np0005466030 python3.9[39233]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:37:29 np0005466030 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  2 07:37:29 np0005466030 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  2 07:37:31 np0005466030 python3.9[39606]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:37:31 np0005466030 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  2 07:37:31 np0005466030 systemd[1]: tuned.service: Deactivated successfully.
Oct  2 07:37:31 np0005466030 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  2 07:37:31 np0005466030 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  2 07:37:31 np0005466030 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  2 07:37:32 np0005466030 python3.9[39767]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  2 07:37:35 np0005466030 python3.9[39919]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:37:35 np0005466030 systemd[1]: Reloading.
Oct  2 07:37:35 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:37:36 np0005466030 python3.9[40107]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:37:36 np0005466030 systemd[1]: Reloading.
Oct  2 07:37:36 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:37:37 np0005466030 python3.9[40297]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:37:38 np0005466030 python3.9[40450]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:37:38 np0005466030 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct  2 07:37:39 np0005466030 python3.9[40603]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:37:42 np0005466030 python3.9[40765]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:37:43 np0005466030 python3.9[40918]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:37:43 np0005466030 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  2 07:37:43 np0005466030 systemd[1]: Stopped Apply Kernel Variables.
Oct  2 07:37:43 np0005466030 systemd[1]: Stopping Apply Kernel Variables...
Oct  2 07:37:43 np0005466030 systemd[1]: Starting Apply Kernel Variables...
Oct  2 07:37:43 np0005466030 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  2 07:37:43 np0005466030 systemd[1]: Finished Apply Kernel Variables.
Oct  2 07:37:44 np0005466030 systemd-logind[795]: Session 10 logged out. Waiting for processes to exit.
Oct  2 07:37:44 np0005466030 systemd[1]: session-10.scope: Deactivated successfully.
Oct  2 07:37:44 np0005466030 systemd[1]: session-10.scope: Consumed 2min 13.150s CPU time.
Oct  2 07:37:44 np0005466030 systemd-logind[795]: Removed session 10.
Oct  2 07:37:50 np0005466030 systemd-logind[795]: New session 11 of user zuul.
Oct  2 07:37:50 np0005466030 systemd[1]: Started Session 11 of User zuul.
Oct  2 07:37:51 np0005466030 python3.9[41101]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:37:53 np0005466030 python3.9[41257]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  2 07:37:54 np0005466030 python3.9[41410]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 07:37:55 np0005466030 python3.9[41568]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  2 07:37:56 np0005466030 python3.9[41728]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:37:57 np0005466030 python3.9[41812]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  2 07:38:00 np0005466030 python3.9[41976]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:38:14 np0005466030 kernel: SELinux:  Converting 2723 SID table entries...
Oct  2 07:38:14 np0005466030 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:38:14 np0005466030 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:38:14 np0005466030 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:38:14 np0005466030 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:38:14 np0005466030 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:38:14 np0005466030 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:38:14 np0005466030 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:38:14 np0005466030 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct  2 07:38:14 np0005466030 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct  2 07:38:16 np0005466030 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:38:16 np0005466030 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:38:16 np0005466030 systemd[1]: Reloading.
Oct  2 07:38:16 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:38:16 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:38:16 np0005466030 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:38:18 np0005466030 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:38:18 np0005466030 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:38:18 np0005466030 systemd[1]: run-re67a3717daa64f18ab4f6825ac56389f.service: Deactivated successfully.
Oct  2 07:38:19 np0005466030 python3.9[43079]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:38:19 np0005466030 systemd[1]: Reloading.
Oct  2 07:38:19 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:38:19 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:38:19 np0005466030 systemd[1]: Starting Open vSwitch Database Unit...
Oct  2 07:38:19 np0005466030 chown[43121]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct  2 07:38:19 np0005466030 ovs-ctl[43126]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct  2 07:38:19 np0005466030 ovs-ctl[43126]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct  2 07:38:19 np0005466030 ovs-ctl[43126]: Starting ovsdb-server [  OK  ]
Oct  2 07:38:19 np0005466030 ovs-vsctl[43175]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct  2 07:38:19 np0005466030 ovs-vsctl[43193]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"db222192-8da1-4f7c-972d-dc680c3e6630\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct  2 07:38:19 np0005466030 ovs-ctl[43126]: Configuring Open vSwitch system IDs [  OK  ]
Oct  2 07:38:19 np0005466030 ovs-ctl[43126]: Enabling remote OVSDB managers [  OK  ]
Oct  2 07:38:19 np0005466030 systemd[1]: Started Open vSwitch Database Unit.
Oct  2 07:38:19 np0005466030 ovs-vsctl[43201]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Oct  2 07:38:19 np0005466030 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct  2 07:38:19 np0005466030 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct  2 07:38:19 np0005466030 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct  2 07:38:19 np0005466030 kernel: openvswitch: Open vSwitch switching datapath
Oct  2 07:38:19 np0005466030 ovs-ctl[43246]: Inserting openvswitch module [  OK  ]
Oct  2 07:38:19 np0005466030 ovs-ctl[43215]: Starting ovs-vswitchd [  OK  ]
Oct  2 07:38:19 np0005466030 ovs-vsctl[43263]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Oct  2 07:38:19 np0005466030 ovs-ctl[43215]: Enabling remote OVSDB managers [  OK  ]
Oct  2 07:38:19 np0005466030 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct  2 07:38:19 np0005466030 systemd[1]: Starting Open vSwitch...
Oct  2 07:38:19 np0005466030 systemd[1]: Finished Open vSwitch.
Oct  2 07:38:20 np0005466030 python3.9[43415]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:38:21 np0005466030 python3.9[43567]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  2 07:38:23 np0005466030 kernel: SELinux:  Converting 2737 SID table entries...
Oct  2 07:38:23 np0005466030 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:38:23 np0005466030 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:38:23 np0005466030 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:38:23 np0005466030 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:38:23 np0005466030 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:38:23 np0005466030 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:38:23 np0005466030 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:38:24 np0005466030 python3.9[43722]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:38:25 np0005466030 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct  2 07:38:25 np0005466030 python3.9[43880]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:38:27 np0005466030 python3.9[44033]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:38:29 np0005466030 python3.9[44320]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 07:38:30 np0005466030 python3.9[44470]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:38:30 np0005466030 python3.9[44624]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:38:33 np0005466030 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:38:33 np0005466030 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:38:33 np0005466030 systemd[1]: Reloading.
Oct  2 07:38:33 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:38:33 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:38:33 np0005466030 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:38:33 np0005466030 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:38:33 np0005466030 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:38:33 np0005466030 systemd[1]: run-re57caa1180704f14b823766298bd8525.service: Deactivated successfully.
Oct  2 07:38:34 np0005466030 python3.9[44942]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:38:34 np0005466030 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  2 07:38:34 np0005466030 systemd[1]: Stopped Network Manager Wait Online.
Oct  2 07:38:34 np0005466030 systemd[1]: Stopping Network Manager Wait Online...
Oct  2 07:38:34 np0005466030 systemd[1]: Stopping Network Manager...
Oct  2 07:38:34 np0005466030 NetworkManager[3976]: <info>  [1759405114.7080] caught SIGTERM, shutting down normally.
Oct  2 07:38:34 np0005466030 NetworkManager[3976]: <info>  [1759405114.7093] dhcp4 (eth0): canceled DHCP transaction
Oct  2 07:38:34 np0005466030 NetworkManager[3976]: <info>  [1759405114.7093] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 07:38:34 np0005466030 NetworkManager[3976]: <info>  [1759405114.7093] dhcp4 (eth0): state changed no lease
Oct  2 07:38:34 np0005466030 NetworkManager[3976]: <info>  [1759405114.7095] manager: NetworkManager state is now CONNECTED_SITE
Oct  2 07:38:34 np0005466030 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 07:38:34 np0005466030 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 07:38:34 np0005466030 NetworkManager[3976]: <info>  [1759405114.7657] exiting (success)
Oct  2 07:38:34 np0005466030 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  2 07:38:34 np0005466030 systemd[1]: Stopped Network Manager.
Oct  2 07:38:34 np0005466030 systemd[1]: NetworkManager.service: Consumed 13.780s CPU time, 4.1M memory peak, read 0B from disk, written 35.0K to disk.
Oct  2 07:38:34 np0005466030 systemd[1]: Starting Network Manager...
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.8328] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:37dcc26c-0803-4cf1-8993-af1de3c457fe)
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.8330] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.8381] manager[0x55ce296ae090]: monitoring kernel firmware directory '/lib/firmware'.
Oct  2 07:38:34 np0005466030 systemd[1]: Starting Hostname Service...
Oct  2 07:38:34 np0005466030 systemd[1]: Started Hostname Service.
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9223] hostname: hostname: using hostnamed
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9224] hostname: static hostname changed from (none) to "compute-1"
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9230] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9234] manager[0x55ce296ae090]: rfkill: Wi-Fi hardware radio set enabled
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9235] manager[0x55ce296ae090]: rfkill: WWAN hardware radio set enabled
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9258] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9269] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9269] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9270] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9270] manager: Networking is enabled by state file
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9273] settings: Loaded settings plugin: keyfile (internal)
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9277] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9306] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9317] dhcp: init: Using DHCP client 'internal'
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9320] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9326] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9332] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9342] device (lo): Activation: starting connection 'lo' (754f1d75-6208-49e2-9f27-490774a22f8d)
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9351] device (eth0): carrier: link connected
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9356] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9367] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9368] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9374] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9379] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9385] device (eth1): carrier: link connected
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9389] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9392] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (f0e350ee-ba0f-53f3-aecd-b1bd05c74472) (indicated)
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9392] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9396] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9402] device (eth1): Activation: starting connection 'ci-private-network' (f0e350ee-ba0f-53f3-aecd-b1bd05c74472)
Oct  2 07:38:34 np0005466030 systemd[1]: Started Network Manager.
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9412] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9427] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9429] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9430] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9432] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9436] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9438] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9444] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9459] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9467] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9470] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9486] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  2 07:38:34 np0005466030 systemd[1]: Starting Network Manager Wait Online...
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9518] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9531] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9534] dhcp4 (eth0): state changed new lease, address=38.129.56.3
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9537] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9548] device (lo): Activation: successful, device activated.
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9560] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9791] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9802] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9804] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9808] manager: NetworkManager state is now CONNECTED_LOCAL
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9811] device (eth1): Activation: successful, device activated.
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9872] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9874] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9878] manager: NetworkManager state is now CONNECTED_SITE
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9882] device (eth0): Activation: successful, device activated.
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9886] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  2 07:38:34 np0005466030 NetworkManager[44960]: <info>  [1759405114.9889] manager: startup complete
Oct  2 07:38:34 np0005466030 systemd[1]: Finished Network Manager Wait Online.
Oct  2 07:38:35 np0005466030 python3.9[45168]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:38:45 np0005466030 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 07:38:45 np0005466030 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:38:45 np0005466030 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:38:45 np0005466030 systemd[1]: Reloading.
Oct  2 07:38:45 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:38:45 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:38:45 np0005466030 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:38:46 np0005466030 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:38:46 np0005466030 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:38:46 np0005466030 systemd[1]: run-r3fbef32659bd4c70875eb59f7a697a60.service: Deactivated successfully.
Oct  2 07:38:47 np0005466030 python3.9[45630]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:38:48 np0005466030 python3.9[45782]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:38:49 np0005466030 python3.9[45936]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:38:49 np0005466030 python3.9[46088]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:38:50 np0005466030 python3.9[46240]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:38:51 np0005466030 python3.9[46392]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:38:51 np0005466030 python3.9[46544]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:38:52 np0005466030 python3.9[46667]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405131.198894-653-279225655378820/.source _original_basename=.jrqkg2q8 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:38:53 np0005466030 python3.9[46819]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:38:53 np0005466030 python3.9[46971]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct  2 07:38:54 np0005466030 python3.9[47123]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:38:56 np0005466030 python3.9[47550]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct  2 07:38:57 np0005466030 ansible-async_wrapper.py[47725]: Invoked with j387925174704 300 /home/zuul/.ansible/tmp/ansible-tmp-1759405136.967407-851-125005158112260/AnsiballZ_edpm_os_net_config.py _
Oct  2 07:38:57 np0005466030 ansible-async_wrapper.py[47728]: Starting module and watcher
Oct  2 07:38:57 np0005466030 ansible-async_wrapper.py[47728]: Start watching 47729 (300)
Oct  2 07:38:57 np0005466030 ansible-async_wrapper.py[47729]: Start module (47729)
Oct  2 07:38:57 np0005466030 ansible-async_wrapper.py[47725]: Return async_wrapper task started.
Oct  2 07:38:58 np0005466030 python3.9[47730]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct  2 07:38:58 np0005466030 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct  2 07:38:58 np0005466030 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct  2 07:38:58 np0005466030 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct  2 07:38:58 np0005466030 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct  2 07:38:58 np0005466030 kernel: cfg80211: failed to load regulatory.db
Oct  2 07:38:59 np0005466030 NetworkManager[44960]: <info>  [1759405139.9211] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47731 uid=0 result="success"
Oct  2 07:38:59 np0005466030 NetworkManager[44960]: <info>  [1759405139.9234] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47731 uid=0 result="success"
Oct  2 07:38:59 np0005466030 NetworkManager[44960]: <info>  [1759405139.9748] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct  2 07:38:59 np0005466030 NetworkManager[44960]: <info>  [1759405139.9750] audit: op="connection-add" uuid="b6560eb7-aa7a-46a8-9e22-a94e8e9b623b" name="br-ex-br" pid=47731 uid=0 result="success"
Oct  2 07:38:59 np0005466030 NetworkManager[44960]: <info>  [1759405139.9764] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct  2 07:38:59 np0005466030 NetworkManager[44960]: <info>  [1759405139.9766] audit: op="connection-add" uuid="3762f612-f6fb-4636-9dc5-532fb7368103" name="br-ex-port" pid=47731 uid=0 result="success"
Oct  2 07:38:59 np0005466030 NetworkManager[44960]: <info>  [1759405139.9777] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct  2 07:38:59 np0005466030 NetworkManager[44960]: <info>  [1759405139.9778] audit: op="connection-add" uuid="89fbbef3-d049-48ee-8abd-263cc30f2ff3" name="eth1-port" pid=47731 uid=0 result="success"
Oct  2 07:38:59 np0005466030 NetworkManager[44960]: <info>  [1759405139.9788] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct  2 07:38:59 np0005466030 NetworkManager[44960]: <info>  [1759405139.9790] audit: op="connection-add" uuid="2159f9f4-9bc0-437b-9e3b-60177e3009fa" name="vlan20-port" pid=47731 uid=0 result="success"
Oct  2 07:38:59 np0005466030 NetworkManager[44960]: <info>  [1759405139.9799] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct  2 07:38:59 np0005466030 NetworkManager[44960]: <info>  [1759405139.9801] audit: op="connection-add" uuid="45287b28-64e8-4216-86a4-7d34cc8270a0" name="vlan21-port" pid=47731 uid=0 result="success"
Oct  2 07:38:59 np0005466030 NetworkManager[44960]: <info>  [1759405139.9810] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct  2 07:38:59 np0005466030 NetworkManager[44960]: <info>  [1759405139.9812] audit: op="connection-add" uuid="25d7d2cf-d530-46a5-a139-82bba86ce1d3" name="vlan22-port" pid=47731 uid=0 result="success"
Oct  2 07:38:59 np0005466030 NetworkManager[44960]: <info>  [1759405139.9821] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct  2 07:38:59 np0005466030 NetworkManager[44960]: <info>  [1759405139.9823] audit: op="connection-add" uuid="e7fe5191-2592-4c84-8d54-6bb73b938742" name="vlan23-port" pid=47731 uid=0 result="success"
Oct  2 07:38:59 np0005466030 NetworkManager[44960]: <info>  [1759405139.9839] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method" pid=47731 uid=0 result="success"
Oct  2 07:38:59 np0005466030 NetworkManager[44960]: <info>  [1759405139.9852] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct  2 07:38:59 np0005466030 NetworkManager[44960]: <info>  [1759405139.9854] audit: op="connection-add" uuid="d56d70f6-073e-4c01-9100-96c1bdf35220" name="br-ex-if" pid=47731 uid=0 result="success"
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0395] audit: op="connection-update" uuid="f0e350ee-ba0f-53f3-aecd-b1bd05c74472" name="ci-private-network" args="ovs-external-ids.data,ovs-interface.type,ipv4.routing-rules,ipv4.addresses,ipv4.routes,ipv4.never-default,ipv4.dns,ipv4.method,connection.controller,connection.master,connection.timestamp,connection.slave-type,connection.port-type,ipv6.addr-gen-mode,ipv6.addresses,ipv6.routes,ipv6.routing-rules,ipv6.dns,ipv6.method" pid=47731 uid=0 result="success"
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0418] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0420] audit: op="connection-add" uuid="d3d9e72f-d955-41c4-aa48-a68a5acce9bd" name="vlan20-if" pid=47731 uid=0 result="success"
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0443] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0444] audit: op="connection-add" uuid="2f1a5d35-8405-4392-b059-83d53872764f" name="vlan21-if" pid=47731 uid=0 result="success"
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0460] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0463] audit: op="connection-add" uuid="1988b30a-4113-479f-963f-36b51c4465d8" name="vlan22-if" pid=47731 uid=0 result="success"
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0480] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0482] audit: op="connection-add" uuid="aa0b45d7-8959-47ce-ab65-dc4ef312eca5" name="vlan23-if" pid=47731 uid=0 result="success"
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0493] audit: op="connection-delete" uuid="6e2af74a-3979-3958-b10b-5fcb5c84b968" name="Wired connection 1" pid=47731 uid=0 result="success"
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0504] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0514] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0518] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (b6560eb7-aa7a-46a8-9e22-a94e8e9b623b)
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0519] audit: op="connection-activate" uuid="b6560eb7-aa7a-46a8-9e22-a94e8e9b623b" name="br-ex-br" pid=47731 uid=0 result="success"
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0520] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0527] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0530] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (3762f612-f6fb-4636-9dc5-532fb7368103)
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0532] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0536] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0540] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (89fbbef3-d049-48ee-8abd-263cc30f2ff3)
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0542] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0548] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0551] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (2159f9f4-9bc0-437b-9e3b-60177e3009fa)
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0553] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0560] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0565] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (45287b28-64e8-4216-86a4-7d34cc8270a0)
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0567] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0573] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0577] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (25d7d2cf-d530-46a5-a139-82bba86ce1d3)
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0579] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0586] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0590] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (e7fe5191-2592-4c84-8d54-6bb73b938742)
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0591] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0593] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0595] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0601] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0605] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0609] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (d56d70f6-073e-4c01-9100-96c1bdf35220)
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0610] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0614] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0616] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0619] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0621] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0635] device (eth1): disconnecting for new activation request.
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0637] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0640] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0642] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0644] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0648] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0652] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0656] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (d3d9e72f-d955-41c4-aa48-a68a5acce9bd)
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0657] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0660] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0663] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0664] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0667] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0670] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0673] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (2f1a5d35-8405-4392-b059-83d53872764f)
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0674] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0677] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0678] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0679] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0682] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0685] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0688] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (1988b30a-4113-479f-963f-36b51c4465d8)
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0689] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0693] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0695] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0696] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0698] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0701] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0705] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (aa0b45d7-8959-47ce-ab65-dc4ef312eca5)
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0706] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0708] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0711] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0713] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0715] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0725] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method" pid=47731 uid=0 result="success"
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0726] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0729] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0731] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0736] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0739] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0743] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0747] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0749] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 kernel: ovs-system: entered promiscuous mode
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0753] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0756] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0758] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0759] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0764] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0768] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0771] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0774] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 kernel: Timeout policy base is empty
Oct  2 07:39:00 np0005466030 systemd-udevd[47735]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0779] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0783] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0787] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0789] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0792] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0795] dhcp4 (eth0): canceled DHCP transaction
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0796] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0796] dhcp4 (eth0): state changed no lease
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0798] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct  2 07:39:00 np0005466030 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0821] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0828] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47731 uid=0 result="fail" reason="Device is not activated"
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0833] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0851] dhcp4 (eth0): state changed new lease, address=38.129.56.3
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0856] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0909] device (eth1): disconnecting for new activation request.
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0910] audit: op="connection-activate" uuid="f0e350ee-ba0f-53f3-aecd-b1bd05c74472" name="ci-private-network" pid=47731 uid=0 result="success"
Oct  2 07:39:00 np0005466030 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0921] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.0929] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1051] device (eth1): Activation: starting connection 'ci-private-network' (f0e350ee-ba0f-53f3-aecd-b1bd05c74472)
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1059] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1062] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1074] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1076] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1083] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1087] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1091] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47731 uid=0 result="success"
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1092] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1093] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1093] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1095] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1096] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1097] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1100] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1107] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1110] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1114] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1117] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1124] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1127] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1131] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1135] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1138] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1140] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1144] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1148] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1154] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1157] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 kernel: br-ex: entered promiscuous mode
Oct  2 07:39:00 np0005466030 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  2 07:39:00 np0005466030 kernel: vlan22: entered promiscuous mode
Oct  2 07:39:00 np0005466030 systemd-udevd[47734]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:39:00 np0005466030 kernel: vlan21: entered promiscuous mode
Oct  2 07:39:00 np0005466030 kernel: vlan23: entered promiscuous mode
Oct  2 07:39:00 np0005466030 systemd-udevd[47843]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:39:00 np0005466030 kernel: vlan20: entered promiscuous mode
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1673] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1681] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1688] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1694] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1700] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1701] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1716] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1722] device (eth1): Activation: successful, device activated.
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1767] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1772] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1778] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1783] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1789] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1801] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1803] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1804] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1807] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1812] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1817] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1821] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1825] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1829] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1832] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1833] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1836] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1841] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1846] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  2 07:39:00 np0005466030 NetworkManager[44960]: <info>  [1759405140.1851] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  2 07:39:01 np0005466030 NetworkManager[44960]: <info>  [1759405141.4061] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47731 uid=0 result="success"
Oct  2 07:39:01 np0005466030 NetworkManager[44960]: <info>  [1759405141.5281] checkpoint[0x55ce29684950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct  2 07:39:01 np0005466030 NetworkManager[44960]: <info>  [1759405141.5283] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47731 uid=0 result="success"
Oct  2 07:39:01 np0005466030 python3.9[48089]: ansible-ansible.legacy.async_status Invoked with jid=j387925174704.47725 mode=status _async_dir=/root/.ansible_async
Oct  2 07:39:01 np0005466030 NetworkManager[44960]: <info>  [1759405141.8053] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47731 uid=0 result="success"
Oct  2 07:39:01 np0005466030 NetworkManager[44960]: <info>  [1759405141.8067] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47731 uid=0 result="success"
Oct  2 07:39:02 np0005466030 NetworkManager[44960]: <info>  [1759405142.0647] audit: op="networking-control" arg="global-dns-configuration" pid=47731 uid=0 result="success"
Oct  2 07:39:02 np0005466030 NetworkManager[44960]: <info>  [1759405142.1202] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct  2 07:39:02 np0005466030 NetworkManager[44960]: <info>  [1759405142.1742] audit: op="networking-control" arg="global-dns-configuration" pid=47731 uid=0 result="success"
Oct  2 07:39:02 np0005466030 NetworkManager[44960]: <info>  [1759405142.1776] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47731 uid=0 result="success"
Oct  2 07:39:02 np0005466030 NetworkManager[44960]: <info>  [1759405142.3273] checkpoint[0x55ce29684a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct  2 07:39:02 np0005466030 NetworkManager[44960]: <info>  [1759405142.3281] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47731 uid=0 result="success"
Oct  2 07:39:02 np0005466030 ansible-async_wrapper.py[47729]: Module complete (47729)
Oct  2 07:39:02 np0005466030 ansible-async_wrapper.py[47728]: Done in kid B.
Oct  2 07:39:04 np0005466030 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 07:39:05 np0005466030 python3.9[48195]: ansible-ansible.legacy.async_status Invoked with jid=j387925174704.47725 mode=status _async_dir=/root/.ansible_async
Oct  2 07:39:05 np0005466030 python3.9[48297]: ansible-ansible.legacy.async_status Invoked with jid=j387925174704.47725 mode=cleanup _async_dir=/root/.ansible_async
Oct  2 07:39:06 np0005466030 python3.9[48449]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:06 np0005466030 python3.9[48572]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405145.9076734-932-107755002570456/.source.returncode _original_basename=.j6ypej5h follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:07 np0005466030 python3.9[48724]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:08 np0005466030 python3.9[48847]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405147.214209-980-114445389337510/.source.cfg _original_basename=.7ke4oh7p follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:09 np0005466030 python3.9[49000]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:39:09 np0005466030 systemd[1]: Reloading Network Manager...
Oct  2 07:39:09 np0005466030 NetworkManager[44960]: <info>  [1759405149.2131] audit: op="reload" arg="0" pid=49004 uid=0 result="success"
Oct  2 07:39:09 np0005466030 NetworkManager[44960]: <info>  [1759405149.2137] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct  2 07:39:09 np0005466030 systemd[1]: Reloaded Network Manager.
Oct  2 07:39:09 np0005466030 systemd[1]: session-11.scope: Deactivated successfully.
Oct  2 07:39:09 np0005466030 systemd[1]: session-11.scope: Consumed 48.458s CPU time.
Oct  2 07:39:09 np0005466030 systemd-logind[795]: Session 11 logged out. Waiting for processes to exit.
Oct  2 07:39:09 np0005466030 systemd-logind[795]: Removed session 11.
Oct  2 07:39:14 np0005466030 systemd-logind[795]: New session 12 of user zuul.
Oct  2 07:39:14 np0005466030 systemd[1]: Started Session 12 of User zuul.
Oct  2 07:39:15 np0005466030 python3.9[49188]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:39:16 np0005466030 python3.9[49342]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:39:18 np0005466030 python3.9[49536]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:39:18 np0005466030 systemd[1]: session-12.scope: Deactivated successfully.
Oct  2 07:39:18 np0005466030 systemd[1]: session-12.scope: Consumed 2.188s CPU time.
Oct  2 07:39:18 np0005466030 systemd-logind[795]: Session 12 logged out. Waiting for processes to exit.
Oct  2 07:39:18 np0005466030 systemd-logind[795]: Removed session 12.
Oct  2 07:39:19 np0005466030 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 07:39:24 np0005466030 systemd-logind[795]: New session 13 of user zuul.
Oct  2 07:39:24 np0005466030 systemd[1]: Started Session 13 of User zuul.
Oct  2 07:39:25 np0005466030 python3.9[49718]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:39:26 np0005466030 python3.9[49872]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:39:27 np0005466030 python3.9[50028]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:39:28 np0005466030 python3.9[50113]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:39:30 np0005466030 python3.9[50266]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:39:31 np0005466030 python3.9[50462]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:32 np0005466030 python3.9[50614]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:39:32 np0005466030 systemd[1]: var-lib-containers-storage-overlay-compat2047843413-merged.mount: Deactivated successfully.
Oct  2 07:39:32 np0005466030 podman[50615]: 2025-10-02 11:39:32.53956629 +0000 UTC m=+0.092734561 system refresh
Oct  2 07:39:33 np0005466030 python3.9[50776]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:33 np0005466030 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:39:33 np0005466030 python3.9[50899]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405172.7161007-203-198325121287530/.source.json follow=False _original_basename=podman_network_config.j2 checksum=4b19eedd85b966592ec31feebbc0c01623d2244e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:34 np0005466030 python3.9[51051]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:35 np0005466030 python3.9[51174]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405174.1822789-248-101404481579571/.source.conf follow=False _original_basename=registries.conf.j2 checksum=2f54462ce13fc7f0e9dc5b3970581b7761b51f34 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:39:36 np0005466030 python3.9[51326]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:39:36 np0005466030 python3.9[51478]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:39:37 np0005466030 python3.9[51630]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:39:37 np0005466030 python3.9[51782]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:39:38 np0005466030 python3.9[51934]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:39:41 np0005466030 python3.9[52087]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:39:41 np0005466030 python3.9[52241]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:39:42 np0005466030 python3.9[52393]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:39:43 np0005466030 python3.9[52545]: ansible-service_facts Invoked
Oct  2 07:39:43 np0005466030 network[52562]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:39:43 np0005466030 network[52563]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:39:43 np0005466030 network[52564]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:39:48 np0005466030 python3.9[53018]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:39:50 np0005466030 python3.9[53171]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  2 07:39:52 np0005466030 python3.9[53323]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:52 np0005466030 python3.9[53448]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405191.6470292-644-10571626935825/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:53 np0005466030 python3.9[53602]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:39:54 np0005466030 python3.9[53727]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405193.0161536-690-205162396311375/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:55 np0005466030 python3.9[53881]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:39:57 np0005466030 python3.9[54035]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:39:58 np0005466030 python3.9[54119]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:39:59 np0005466030 python3.9[54273]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:40:00 np0005466030 python3.9[54357]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:40:00 np0005466030 chronyd[802]: chronyd exiting
Oct  2 07:40:00 np0005466030 systemd[1]: Stopping NTP client/server...
Oct  2 07:40:00 np0005466030 systemd[1]: chronyd.service: Deactivated successfully.
Oct  2 07:40:00 np0005466030 systemd[1]: Stopped NTP client/server.
Oct  2 07:40:00 np0005466030 systemd[1]: Starting NTP client/server...
Oct  2 07:40:00 np0005466030 chronyd[54365]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  2 07:40:00 np0005466030 chronyd[54365]: Frequency 9.122 +/- 0.334 ppm read from /var/lib/chrony/drift
Oct  2 07:40:00 np0005466030 chronyd[54365]: Loaded seccomp filter (level 2)
Oct  2 07:40:00 np0005466030 systemd[1]: Started NTP client/server.
Oct  2 07:40:00 np0005466030 systemd[1]: session-13.scope: Deactivated successfully.
Oct  2 07:40:00 np0005466030 systemd[1]: session-13.scope: Consumed 23.814s CPU time.
Oct  2 07:40:00 np0005466030 systemd-logind[795]: Session 13 logged out. Waiting for processes to exit.
Oct  2 07:40:00 np0005466030 systemd-logind[795]: Removed session 13.
Oct  2 07:40:06 np0005466030 systemd-logind[795]: New session 14 of user zuul.
Oct  2 07:40:06 np0005466030 systemd[1]: Started Session 14 of User zuul.
Oct  2 07:40:07 np0005466030 python3.9[54546]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:08 np0005466030 python3.9[54698]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:08 np0005466030 python3.9[54821]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405207.5603087-68-162952480946353/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:09 np0005466030 systemd[1]: session-14.scope: Deactivated successfully.
Oct  2 07:40:09 np0005466030 systemd[1]: session-14.scope: Consumed 1.531s CPU time.
Oct  2 07:40:09 np0005466030 systemd-logind[795]: Session 14 logged out. Waiting for processes to exit.
Oct  2 07:40:09 np0005466030 systemd-logind[795]: Removed session 14.
Oct  2 07:40:14 np0005466030 systemd-logind[795]: New session 15 of user zuul.
Oct  2 07:40:14 np0005466030 systemd[1]: Started Session 15 of User zuul.
Oct  2 07:40:15 np0005466030 python3.9[54999]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:40:16 np0005466030 python3.9[55155]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:17 np0005466030 python3.9[55330]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:18 np0005466030 python3.9[55453]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759405216.9354713-89-157339913528518/.source.json _original_basename=.yqpy6ga0 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:19 np0005466030 python3.9[55605]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:19 np0005466030 python3.9[55728]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405218.796078-158-18709057615563/.source _original_basename=.b4_ws6dw follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:20 np0005466030 python3.9[55880]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:40:21 np0005466030 python3.9[56032]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:21 np0005466030 python3.9[56155]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405220.6652126-230-29662054966249/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:40:22 np0005466030 python3.9[56307]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:22 np0005466030 python3.9[56430]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405221.7876337-230-91997720174840/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:40:23 np0005466030 python3.9[56582]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:24 np0005466030 python3.9[56734]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:24 np0005466030 python3.9[56857]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405223.6672528-341-68613096231987/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:25 np0005466030 python3.9[57009]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:25 np0005466030 python3.9[57132]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405224.8593314-386-33140082130427/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:26 np0005466030 python3.9[57284]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:40:27 np0005466030 systemd[1]: Reloading.
Oct  2 07:40:27 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:40:27 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:40:27 np0005466030 systemd[1]: Reloading.
Oct  2 07:40:27 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:40:27 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:40:27 np0005466030 systemd[1]: Starting EDPM Container Shutdown...
Oct  2 07:40:27 np0005466030 systemd[1]: Finished EDPM Container Shutdown.
Oct  2 07:40:28 np0005466030 python3.9[57510]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:28 np0005466030 python3.9[57633]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405227.7337432-455-223414698389803/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:29 np0005466030 python3.9[57785]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:30 np0005466030 python3.9[57908]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405229.0869932-500-197584828609392/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:30 np0005466030 python3.9[58060]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:40:31 np0005466030 systemd[1]: Reloading.
Oct  2 07:40:31 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:40:31 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:40:31 np0005466030 systemd[1]: Reloading.
Oct  2 07:40:31 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:40:31 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:40:31 np0005466030 systemd[1]: Starting Create netns directory...
Oct  2 07:40:31 np0005466030 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 07:40:31 np0005466030 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 07:40:31 np0005466030 systemd[1]: Finished Create netns directory.
Oct  2 07:40:32 np0005466030 python3.9[58286]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:40:32 np0005466030 network[58303]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:40:32 np0005466030 network[58304]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:40:32 np0005466030 network[58305]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:40:36 np0005466030 python3.9[58569]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:40:36 np0005466030 systemd[1]: Reloading.
Oct  2 07:40:36 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:40:36 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:40:37 np0005466030 systemd[1]: Stopping IPv4 firewall with iptables...
Oct  2 07:40:37 np0005466030 iptables.init[58608]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct  2 07:40:37 np0005466030 iptables.init[58608]: iptables: Flushing firewall rules: [  OK  ]
Oct  2 07:40:37 np0005466030 systemd[1]: iptables.service: Deactivated successfully.
Oct  2 07:40:37 np0005466030 systemd[1]: Stopped IPv4 firewall with iptables.
Oct  2 07:40:38 np0005466030 python3.9[58804]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:40:39 np0005466030 python3.9[58958]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:40:39 np0005466030 systemd[1]: Reloading.
Oct  2 07:40:39 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:40:39 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:40:39 np0005466030 systemd[1]: Starting Netfilter Tables...
Oct  2 07:40:39 np0005466030 systemd[1]: Finished Netfilter Tables.
Oct  2 07:40:40 np0005466030 python3.9[59149]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:40:41 np0005466030 python3.9[59302]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:40:41 np0005466030 python3.9[59427]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405240.7707264-707-173960495954506/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:40:42 np0005466030 python3.9[59578]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:41:08 np0005466030 systemd[1]: session-15.scope: Deactivated successfully.
Oct  2 07:41:08 np0005466030 systemd[1]: session-15.scope: Consumed 18.302s CPU time.
Oct  2 07:41:08 np0005466030 systemd-logind[795]: Session 15 logged out. Waiting for processes to exit.
Oct  2 07:41:08 np0005466030 systemd-logind[795]: Removed session 15.
Oct  2 07:41:21 np0005466030 systemd-logind[795]: New session 16 of user zuul.
Oct  2 07:41:21 np0005466030 systemd[1]: Started Session 16 of User zuul.
Oct  2 07:41:22 np0005466030 python3.9[59771]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:41:23 np0005466030 python3.9[59927]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:24 np0005466030 python3.9[60102]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:24 np0005466030 python3.9[60180]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.0i2dyvq9 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:25 np0005466030 python3.9[60332]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:26 np0005466030 python3.9[60410]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.mlam78ay recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:26 np0005466030 python3.9[60562]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:41:28 np0005466030 python3.9[60714]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:29 np0005466030 python3.9[60792]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:41:30 np0005466030 python3.9[60944]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:30 np0005466030 python3.9[61022]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:41:31 np0005466030 python3.9[61174]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:31 np0005466030 python3.9[61326]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:32 np0005466030 python3.9[61404]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:33 np0005466030 python3.9[61556]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:33 np0005466030 python3.9[61634]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:34 np0005466030 python3.9[61786]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:41:34 np0005466030 systemd[1]: Reloading.
Oct  2 07:41:34 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:41:34 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:41:35 np0005466030 python3.9[61975]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:36 np0005466030 python3.9[62053]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:36 np0005466030 python3.9[62205]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:37 np0005466030 python3.9[62283]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:38 np0005466030 python3.9[62435]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:41:38 np0005466030 systemd[1]: Reloading.
Oct  2 07:41:38 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:41:38 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:41:38 np0005466030 systemd[1]: Starting Create netns directory...
Oct  2 07:41:38 np0005466030 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 07:41:38 np0005466030 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 07:41:38 np0005466030 systemd[1]: Finished Create netns directory.
Oct  2 07:41:39 np0005466030 python3.9[62626]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:41:39 np0005466030 network[62643]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:41:39 np0005466030 network[62644]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:41:39 np0005466030 network[62645]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:41:44 np0005466030 python3.9[62908]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:45 np0005466030 python3.9[62986]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:45 np0005466030 python3.9[63138]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:46 np0005466030 python3.9[63290]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:47 np0005466030 python3.9[63413]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405305.9864008-614-13313058275821/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:48 np0005466030 python3.9[63565]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  2 07:41:48 np0005466030 systemd[1]: Starting Time & Date Service...
Oct  2 07:41:48 np0005466030 systemd[1]: Started Time & Date Service.
Oct  2 07:41:49 np0005466030 python3.9[63721]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:49 np0005466030 python3.9[63873]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:50 np0005466030 python3.9[63996]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405309.3544161-719-46476962704567/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:51 np0005466030 python3.9[64148]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:51 np0005466030 python3.9[64271]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405310.9026983-765-219170727835183/.source.yaml _original_basename=.6t_2dcps follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:52 np0005466030 python3.9[64423]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:53 np0005466030 python3.9[64546]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405312.1590478-809-226399252936899/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:54 np0005466030 python3.9[64698]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:41:54 np0005466030 python3.9[64851]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:41:55 np0005466030 python3[65004]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  2 07:41:56 np0005466030 python3.9[65156]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:56 np0005466030 python3.9[65279]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405315.7851572-926-35313362132657/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:57 np0005466030 python3.9[65431]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:58 np0005466030 python3.9[65554]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405317.0314128-971-196457072997456/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:58 np0005466030 python3.9[65706]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:41:59 np0005466030 python3.9[65829]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405318.2668617-1016-233817359502489/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:41:59 np0005466030 python3.9[65981]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:00 np0005466030 python3.9[66104]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405319.4759328-1061-147669845365241/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:01 np0005466030 python3.9[66256]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:42:01 np0005466030 python3.9[66379]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405320.7212932-1106-158706800665332/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:02 np0005466030 python3.9[66531]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:03 np0005466030 python3.9[66683]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:42:04 np0005466030 python3.9[66842]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
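[Editor's note: in the blockinfile entry above, `#012` is the syslog escape for a newline. Decoded, the managed block that the task maintains in /etc/sysconfig/nftables.conf (and validates with `nft -c -f %s` before writing) is the following config fragment:]

```
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
```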
Oct  2 07:42:05 np0005466030 python3.9[66995]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:05 np0005466030 python3.9[67147]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:06 np0005466030 python3.9[67299]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  2 07:42:07 np0005466030 python3.9[67452]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
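[Editor's note: with state=mounted and boot=True, ansible.posix.mount both mounts the filesystem and persists it in /etc/fstab. Assuming the module's standard fstab layout (src, path, fstype, opts, dump, passno), the two tasks above would leave entries like:]

```
none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
```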
Oct  2 07:42:07 np0005466030 systemd[1]: session-16.scope: Deactivated successfully.
Oct  2 07:42:07 np0005466030 systemd[1]: session-16.scope: Consumed 29.641s CPU time.
Oct  2 07:42:07 np0005466030 systemd-logind[795]: Session 16 logged out. Waiting for processes to exit.
Oct  2 07:42:07 np0005466030 systemd-logind[795]: Removed session 16.
Oct  2 07:42:09 np0005466030 chronyd[54365]: Selected source 216.128.178.20 (pool.ntp.org)
Oct  2 07:42:12 np0005466030 systemd-logind[795]: New session 17 of user zuul.
Oct  2 07:42:12 np0005466030 systemd[1]: Started Session 17 of User zuul.
Oct  2 07:42:13 np0005466030 python3.9[67633]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  2 07:42:14 np0005466030 python3.9[67785]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:42:15 np0005466030 python3.9[67937]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:42:16 np0005466030 python3.9[68089]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfikJfuUE7Xs2lF9Qh9l0WUdl+Tct7ff0gJQZVpPwLHlAwFnY1lIlqF2IQ3J7LtFcsjYF5RcofKcj+ARkMTobXFoygI/H3Yl5EGDehZbaNONLkDXT20bcYtosTZBjJTZWMJaDGUobRPnKWEbt7P8G/CVwj+LKBYxYcl65Bs0m8Ii2JZObV/41E/44oNBbTT6VnLqrH1BjRfNgToFyoYZToIU6gJw+lDGgt/afrHnDeR8fo6fgHkoHZKHxctrFraqhPOEX+SW/RD5ra4/WxZTBDAcOelVyZhpZ0V6HTQuS0IuD/sy9RD9W59TrF0oFH8kP6H1F3EbhrMfM/wkGJqxcBEMPIlGjUgoOCOY4tgCsAuyKcqelTUJIoL5uTuk06fd+1+B0t8j//vY7eWDCGwHAYrOCbL954GsjqhEOd/SL8vW6cT4Eh+DaWzKpvnl+bEN+G7wkI9etJ4B8NugtDyE25Ikfn9nsBLIcPcuepnlcBQkTN4sC+w0I1AEm3Uo8MFOM=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPxo/cGygmGP55Hjd3RI5yFpLqrtrtdd2PGw/FbMnxJJ#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLbUwjRfNWPOWmPM9kXykw3bNz7sYSt7DYbalJhzh+E3yGMACUO+HxFuSQ4lHBBXquZltdOcmR202cRP+4s05oI=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDb8D90laelhslbtmfz72Mp6Q7iCMu+KiPRuBFH59nBtb1LmjrIFjvU1qZnJ+wipHW+bRcdDzNWNM8KJ4IImBqFxbrg17RhHeunE84nnR8leX3OYiMZumpygvXYCykppXcKbe6pfxYUtyTc8Tz3bNoayi7uGoKgN/iaUeADLuyJUDDVyusj2q7uIj7gZ6PbtorR5cUUn0wBZTo3Jx84NmdiJr/xDGrtfawsV6ATz+Rpx3vzz4EE4dq4wN3eTUJiPCpc4jbTvHpp0GdJTK1BkZ4IANgw3a+loOO2MHq2JgMRjKJrH7sqrw7s9XgzHSh/ufOmEKAtgw75tWExEcy/05QGGbR2jnIKde4vVIS5JheT1z4gYASjKEEidjisDxig5nigPddxe3nSxKRQczKXPV+KUOB14AljRbnyqgbw4Dv9wtnkFL/QLMXFA0/NaOAZxhI+fOoAcg+No2ZsB95IgQ49ay/LN011x9o1vfwVPfReOtkjpVxQB8oCXhA53BfrG3M=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAtzqd+HKKUdtdjsFK/O61rbaIfH2/ANnbsFBvd1WLXA#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOyw0g2rIQxTWmEkqBGUUvYwuDopCg/ppyBGUh5LatbQKlwO7AkEzPUhEeFZv2/qzobLbOH4kVCTAQVjiQm//WM=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2+zJSXp4XBwGccVvswqz0/27MxV0mWhHJ9EKngmPOQ2Et2f+QArNFJsEaUEJankaYSrISVt8m0QscyZhZUgrxp07g0OV9pVQ2pkqF/CSC7RnN96odOHOeQjRmSOj9vF8Q3EeyRZ7MS1CWH6TT+jYOD77TFol6cQhi7o5bzgAdL6yB/ili/PG3bBxtbYtNwSqCSpiGaN8z8j/REszkW2GM6wvDGXk9NgNfBZT4goP4O3qz/wVeMM/OQFGQa/34tMNX3QEE/XOdAUIRXXLw0vmVj7oRDzGVMc12TDalGOqphS+LkUS4PB+ns/IaplTUzc8zlwhycQQPxnzEcm+z3QP8Bo+iBGw+aKpc5UTMMtZocXrjHCv0Q6irXug6N6b7aaANiHMmveZua/Gjp6Ef//Q/+thKtkvcvvhUDZknHLDrHGT5QbVQYjN23MyFdWCu6MgpBw8NNyeI5sO605lOrxk2oXwX19ah7Qt7iAU7KRijLzQBjnMjNb6bcSOCFXVzpl0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOxmfzZIbNhcux/tJpdvzaDW/iX/PRMqNcEGpeyKOTEV#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBANBfiBul8lZFa5T9kjEYk719DZo4CtW2bTDn+SPcbu/2U71Ms3Qc1tvqiM9B/ciT9t/uzxk25klpGuFqieJFkk=#012 create=True mode=0644 path=/tmp/ansible.3qr617h_ state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:16 np0005466030 python3.9[68241]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.3qr617h_' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:42:17 np0005466030 python3.9[68395]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.3qr617h_ state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:18 np0005466030 systemd[1]: session-17.scope: Deactivated successfully.
Oct  2 07:42:18 np0005466030 systemd[1]: session-17.scope: Consumed 3.391s CPU time.
Oct  2 07:42:18 np0005466030 systemd-logind[795]: Session 17 logged out. Waiting for processes to exit.
Oct  2 07:42:18 np0005466030 systemd-logind[795]: Removed session 17.
Oct  2 07:42:18 np0005466030 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  2 07:42:23 np0005466030 systemd-logind[795]: New session 18 of user zuul.
Oct  2 07:42:23 np0005466030 systemd[1]: Started Session 18 of User zuul.
Oct  2 07:42:24 np0005466030 python3.9[68576]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:42:25 np0005466030 python3.9[68732]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  2 07:42:26 np0005466030 python3.9[68886]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:42:27 np0005466030 python3.9[69039]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:42:28 np0005466030 python3.9[69192]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:42:28 np0005466030 python3.9[69346]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:42:29 np0005466030 python3.9[69501]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:42:30 np0005466030 systemd[1]: session-18.scope: Deactivated successfully.
Oct  2 07:42:30 np0005466030 systemd[1]: session-18.scope: Consumed 4.130s CPU time.
Oct  2 07:42:30 np0005466030 systemd-logind[795]: Session 18 logged out. Waiting for processes to exit.
Oct  2 07:42:30 np0005466030 systemd-logind[795]: Removed session 18.
Oct  2 07:42:35 np0005466030 systemd-logind[795]: New session 19 of user zuul.
Oct  2 07:42:35 np0005466030 systemd[1]: Started Session 19 of User zuul.
Oct  2 07:42:36 np0005466030 python3.9[69679]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:42:37 np0005466030 python3.9[69835]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:42:38 np0005466030 python3.9[69919]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  2 07:42:40 np0005466030 python3.9[70070]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:42:42 np0005466030 python3.9[70221]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 07:42:42 np0005466030 python3.9[70371]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:42:42 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 07:42:43 np0005466030 python3.9[70522]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:42:43 np0005466030 systemd[1]: session-19.scope: Deactivated successfully.
Oct  2 07:42:43 np0005466030 systemd[1]: session-19.scope: Consumed 5.555s CPU time.
Oct  2 07:42:43 np0005466030 systemd-logind[795]: Session 19 logged out. Waiting for processes to exit.
Oct  2 07:42:43 np0005466030 systemd-logind[795]: Removed session 19.
Oct  2 07:42:51 np0005466030 systemd-logind[795]: New session 20 of user zuul.
Oct  2 07:42:51 np0005466030 systemd[1]: Started Session 20 of User zuul.
Oct  2 07:42:57 np0005466030 python3[71288]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:42:59 np0005466030 python3[71383]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  2 07:43:00 np0005466030 python3[71410]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:43:01 np0005466030 python3[71436]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:43:01 np0005466030 kernel: loop: module loaded
Oct  2 07:43:01 np0005466030 kernel: loop3: detected capacity change from 0 to 14680064
Oct  2 07:43:01 np0005466030 python3[71471]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:43:01 np0005466030 lvm[71474]: PV /dev/loop3 not used.
Oct  2 07:43:01 np0005466030 lvm[71483]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 07:43:01 np0005466030 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct  2 07:43:01 np0005466030 lvm[71485]:  1 logical volume(s) in volume group "ceph_vg0" now active
Oct  2 07:43:01 np0005466030 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
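[Editor's note: the `dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G` step above allocates a sparse 7 GiB backing file without writing any data: `count=0` copies nothing, and `seek=7G` only extends the file's apparent size. The losetup/pvcreate/vgcreate/lvcreate steps need root, but the sparse-file trick itself can be reproduced anywhere; a minimal sketch with a hypothetical filename:]

```shell
# count=0 writes no data; seek=7G just sets the apparent size (dd's G = 1024^3)
dd if=/dev/zero of=osd-backing.img bs=1 count=0 seek=7G

# Apparent size in bytes: 7 * 1024^3 = 7516192768
stat -c '%s' osd-backing.img

# Allocated 512-byte blocks: typically 0, since the file is sparse
stat -c '%b' osd-backing.img
```

[Once attached to a loop device (`losetup /dev/loop3 osd-backing.img`, root required), LVM treats it like any other block device, which is what the pvcreate/vgcreate/lvcreate sequence above relies on.]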
Oct  2 07:43:02 np0005466030 python3[71563]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  2 07:43:02 np0005466030 python3[71636]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405382.253875-33446-278457987895800/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:43:03 np0005466030 python3[71686]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:43:03 np0005466030 systemd[1]: Reloading.
Oct  2 07:43:03 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:43:03 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:43:03 np0005466030 systemd[1]: Starting Ceph OSD losetup...
Oct  2 07:43:03 np0005466030 bash[71727]: /dev/loop3: [64513]:4349018 (/var/lib/ceph-osd-0.img)
Oct  2 07:43:03 np0005466030 systemd[1]: Finished Ceph OSD losetup.
Oct  2 07:43:04 np0005466030 lvm[71728]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 07:43:04 np0005466030 lvm[71728]: VG ceph_vg0 finished
Oct  2 07:43:06 np0005466030 python3[71752]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:43:52 np0005466030 systemd[1]: packagekit.service: Deactivated successfully.
Oct  2 07:44:57 np0005466030 systemd[1]: Created slice User Slice of UID 42477.
Oct  2 07:44:57 np0005466030 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  2 07:44:57 np0005466030 systemd-logind[795]: New session 21 of user ceph-admin.
Oct  2 07:44:57 np0005466030 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  2 07:44:57 np0005466030 systemd[1]: Starting User Manager for UID 42477...
Oct  2 07:44:57 np0005466030 systemd[71803]: Queued start job for default target Main User Target.
Oct  2 07:44:57 np0005466030 systemd[71803]: Created slice User Application Slice.
Oct  2 07:44:57 np0005466030 systemd[71803]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 07:44:57 np0005466030 systemd[71803]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 07:44:57 np0005466030 systemd[71803]: Reached target Paths.
Oct  2 07:44:57 np0005466030 systemd[71803]: Reached target Timers.
Oct  2 07:44:57 np0005466030 systemd[71803]: Starting D-Bus User Message Bus Socket...
Oct  2 07:44:57 np0005466030 systemd[71803]: Starting Create User's Volatile Files and Directories...
Oct  2 07:44:57 np0005466030 systemd[71803]: Listening on D-Bus User Message Bus Socket.
Oct  2 07:44:57 np0005466030 systemd[71803]: Reached target Sockets.
Oct  2 07:44:57 np0005466030 systemd-logind[795]: New session 23 of user ceph-admin.
Oct  2 07:44:57 np0005466030 systemd[71803]: Finished Create User's Volatile Files and Directories.
Oct  2 07:44:57 np0005466030 systemd[71803]: Reached target Basic System.
Oct  2 07:44:57 np0005466030 systemd[71803]: Reached target Main User Target.
Oct  2 07:44:57 np0005466030 systemd[71803]: Startup finished in 122ms.
Oct  2 07:44:57 np0005466030 systemd[1]: Started User Manager for UID 42477.
Oct  2 07:44:57 np0005466030 systemd[1]: Started Session 21 of User ceph-admin.
Oct  2 07:44:57 np0005466030 systemd[1]: Started Session 23 of User ceph-admin.
Oct  2 07:44:57 np0005466030 systemd-logind[795]: New session 24 of user ceph-admin.
Oct  2 07:44:57 np0005466030 systemd[1]: Started Session 24 of User ceph-admin.
Oct  2 07:44:58 np0005466030 systemd-logind[795]: New session 25 of user ceph-admin.
Oct  2 07:44:58 np0005466030 systemd[1]: Started Session 25 of User ceph-admin.
Oct  2 07:44:58 np0005466030 systemd-logind[795]: New session 26 of user ceph-admin.
Oct  2 07:44:58 np0005466030 systemd[1]: Started Session 26 of User ceph-admin.
Oct  2 07:44:58 np0005466030 systemd-logind[795]: New session 27 of user ceph-admin.
Oct  2 07:44:58 np0005466030 systemd[1]: Started Session 27 of User ceph-admin.
Oct  2 07:44:59 np0005466030 systemd-logind[795]: New session 28 of user ceph-admin.
Oct  2 07:44:59 np0005466030 systemd[1]: Started Session 28 of User ceph-admin.
Oct  2 07:44:59 np0005466030 systemd-logind[795]: New session 29 of user ceph-admin.
Oct  2 07:44:59 np0005466030 systemd[1]: Started Session 29 of User ceph-admin.
Oct  2 07:44:59 np0005466030 systemd-logind[795]: New session 30 of user ceph-admin.
Oct  2 07:44:59 np0005466030 systemd[1]: Started Session 30 of User ceph-admin.
Oct  2 07:45:00 np0005466030 systemd-logind[795]: New session 31 of user ceph-admin.
Oct  2 07:45:00 np0005466030 systemd[1]: Started Session 31 of User ceph-admin.
Oct  2 07:45:00 np0005466030 systemd-logind[795]: New session 32 of user ceph-admin.
Oct  2 07:45:00 np0005466030 systemd[1]: Started Session 32 of User ceph-admin.
Oct  2 07:45:01 np0005466030 systemd-logind[795]: New session 33 of user ceph-admin.
Oct  2 07:45:01 np0005466030 systemd[1]: Started Session 33 of User ceph-admin.
Oct  2 07:45:01 np0005466030 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:01 np0005466030 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:02 np0005466030 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:02 np0005466030 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:02 np0005466030 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:02 np0005466030 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 72772 (sysctl)
Oct  2 07:45:02 np0005466030 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct  2 07:45:02 np0005466030 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct  2 07:45:03 np0005466030 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:03 np0005466030 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:06 np0005466030 systemd[1]: var-lib-containers-storage-overlay-compat3952850408-lower\x2dmapped.mount: Deactivated successfully.
Oct  2 07:45:17 np0005466030 podman[73049]: 2025-10-02 11:45:17.307471463 +0000 UTC m=+13.294168593 container create 285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  2 07:45:17 np0005466030 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck1968279995-merged.mount: Deactivated successfully.
Oct  2 07:45:17 np0005466030 podman[73049]: 2025-10-02 11:45:17.287032555 +0000 UTC m=+13.273729695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:45:17 np0005466030 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct  2 07:45:17 np0005466030 systemd[1]: Started libpod-conmon-285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f.scope.
Oct  2 07:45:17 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:45:17 np0005466030 podman[73049]: 2025-10-02 11:45:17.39767453 +0000 UTC m=+13.384371680 container init 285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 07:45:17 np0005466030 podman[73049]: 2025-10-02 11:45:17.405054471 +0000 UTC m=+13.391751601 container start 285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 07:45:17 np0005466030 podman[73049]: 2025-10-02 11:45:17.408710696 +0000 UTC m=+13.395407856 container attach 285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:45:17 np0005466030 focused_noether[73112]: 167 167
Oct  2 07:45:17 np0005466030 systemd[1]: libpod-285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f.scope: Deactivated successfully.
Oct  2 07:45:17 np0005466030 conmon[73112]: conmon 285edd22b6298345db44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f.scope/container/memory.events
Oct  2 07:45:17 np0005466030 podman[73049]: 2025-10-02 11:45:17.413758903 +0000 UTC m=+13.400456043 container died 285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 07:45:17 np0005466030 systemd[1]: var-lib-containers-storage-overlay-08bc5421ffcd701119682e76aea2a50ef0664bdbb2706dcb2e32ef694ab4d421-merged.mount: Deactivated successfully.
Oct  2 07:45:17 np0005466030 podman[73049]: 2025-10-02 11:45:17.458138189 +0000 UTC m=+13.444835319 container remove 285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 07:45:17 np0005466030 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:17 np0005466030 systemd[1]: libpod-conmon-285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f.scope: Deactivated successfully.
Oct  2 07:45:17 np0005466030 podman[73135]: 2025-10-02 11:45:17.640635381 +0000 UTC m=+0.050130268 container create c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_maxwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:45:17 np0005466030 systemd[1]: Started libpod-conmon-c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899.scope.
Oct  2 07:45:17 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:45:17 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e017226a99ed4066cdbd2954b2092646828811192a7888b4f874a4b599ca4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:17 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e017226a99ed4066cdbd2954b2092646828811192a7888b4f874a4b599ca4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:17 np0005466030 podman[73135]: 2025-10-02 11:45:17.713479645 +0000 UTC m=+0.122974542 container init c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_maxwell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:45:17 np0005466030 podman[73135]: 2025-10-02 11:45:17.620624055 +0000 UTC m=+0.030118972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:45:17 np0005466030 podman[73135]: 2025-10-02 11:45:17.719165423 +0000 UTC m=+0.128660310 container start c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_maxwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:45:17 np0005466030 podman[73135]: 2025-10-02 11:45:17.722533239 +0000 UTC m=+0.132028126 container attach c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_maxwell, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]: [
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:    {
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:        "available": false,
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:        "ceph_device": false,
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:        "lsm_data": {},
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:        "lvs": [],
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:        "path": "/dev/sr0",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:        "rejected_reasons": [
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "Insufficient space (<5GB)",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "Has a FileSystem"
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:        ],
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:        "sys_api": {
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "actuators": null,
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "device_nodes": "sr0",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "devname": "sr0",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "human_readable_size": "482.00 KB",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "id_bus": "ata",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "model": "QEMU DVD-ROM",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "nr_requests": "2",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "parent": "/dev/sr0",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "partitions": {},
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "path": "/dev/sr0",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "removable": "1",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "rev": "2.5+",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "ro": "0",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "rotational": "0",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "sas_address": "",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "sas_device_handle": "",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "scheduler_mode": "mq-deadline",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "sectors": 0,
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "sectorsize": "2048",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "size": 493568.0,
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "support_discard": "2048",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "type": "disk",
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:            "vendor": "QEMU"
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:        }
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]:    }
Oct  2 07:45:18 np0005466030 nostalgic_maxwell[73151]: ]
Oct  2 07:45:18 np0005466030 systemd[1]: libpod-c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899.scope: Deactivated successfully.
Oct  2 07:45:18 np0005466030 podman[73135]: 2025-10-02 11:45:18.790396855 +0000 UTC m=+1.199891752 container died c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:45:18 np0005466030 systemd[1]: libpod-c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899.scope: Consumed 1.054s CPU time.
Oct  2 07:45:18 np0005466030 systemd[1]: var-lib-containers-storage-overlay-a3e017226a99ed4066cdbd2954b2092646828811192a7888b4f874a4b599ca4a-merged.mount: Deactivated successfully.
Oct  2 07:45:18 np0005466030 podman[73135]: 2025-10-02 11:45:18.842507003 +0000 UTC m=+1.252001890 container remove c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_maxwell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:45:18 np0005466030 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:18 np0005466030 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:18 np0005466030 systemd[1]: libpod-conmon-c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899.scope: Deactivated successfully.
Oct  2 07:45:23 np0005466030 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:23 np0005466030 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:23 np0005466030 podman[75978]: 2025-10-02 11:45:23.224852273 +0000 UTC m=+0.021176783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:45:23 np0005466030 podman[75978]: 2025-10-02 11:45:23.355218035 +0000 UTC m=+0.151542525 container create d1abca056c994a1776c9ded402e671ea06b8189e073aa1d96eef672d80659588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:45:23 np0005466030 systemd[1]: Started libpod-conmon-d1abca056c994a1776c9ded402e671ea06b8189e073aa1d96eef672d80659588.scope.
Oct  2 07:45:23 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:45:23 np0005466030 podman[75978]: 2025-10-02 11:45:23.502532476 +0000 UTC m=+0.298856996 container init d1abca056c994a1776c9ded402e671ea06b8189e073aa1d96eef672d80659588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  2 07:45:23 np0005466030 podman[75978]: 2025-10-02 11:45:23.510394653 +0000 UTC m=+0.306719143 container start d1abca056c994a1776c9ded402e671ea06b8189e073aa1d96eef672d80659588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tharp, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct  2 07:45:23 np0005466030 silly_tharp[75994]: 167 167
Oct  2 07:45:23 np0005466030 systemd[1]: libpod-d1abca056c994a1776c9ded402e671ea06b8189e073aa1d96eef672d80659588.scope: Deactivated successfully.
Oct  2 07:45:23 np0005466030 podman[75978]: 2025-10-02 11:45:23.522073857 +0000 UTC m=+0.318398337 container attach d1abca056c994a1776c9ded402e671ea06b8189e073aa1d96eef672d80659588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tharp, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:45:23 np0005466030 podman[75978]: 2025-10-02 11:45:23.522901773 +0000 UTC m=+0.319226263 container died d1abca056c994a1776c9ded402e671ea06b8189e073aa1d96eef672d80659588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:45:23 np0005466030 podman[75978]: 2025-10-02 11:45:23.775123391 +0000 UTC m=+0.571447891 container remove d1abca056c994a1776c9ded402e671ea06b8189e073aa1d96eef672d80659588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tharp, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:45:23 np0005466030 systemd[1]: libpod-conmon-d1abca056c994a1776c9ded402e671ea06b8189e073aa1d96eef672d80659588.scope: Deactivated successfully.
Oct  2 07:45:23 np0005466030 systemd[1]: Reloading.
Oct  2 07:45:24 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:45:24 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:45:24 np0005466030 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:24 np0005466030 systemd[1]: Reloading.
Oct  2 07:45:24 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:45:24 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:45:24 np0005466030 systemd[1]: Reached target All Ceph clusters and services.
Oct  2 07:45:24 np0005466030 systemd[1]: Reloading.
Oct  2 07:45:24 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:45:24 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:45:24 np0005466030 systemd[1]: Reached target Ceph cluster 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct  2 07:45:24 np0005466030 systemd[1]: Reloading.
Oct  2 07:45:24 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:45:24 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:45:24 np0005466030 systemd[1]: Reloading.
Oct  2 07:45:25 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:45:25 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:45:25 np0005466030 systemd[1]: Created slice Slice /system/ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct  2 07:45:25 np0005466030 systemd[1]: Reached target System Time Set.
Oct  2 07:45:25 np0005466030 systemd[1]: Reached target System Time Synchronized.
Oct  2 07:45:25 np0005466030 systemd[1]: Starting Ceph crash.compute-1 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct  2 07:45:25 np0005466030 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:25 np0005466030 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  2 07:45:25 np0005466030 podman[76248]: 2025-10-02 11:45:25.465871593 +0000 UTC m=+0.026788248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:45:25 np0005466030 podman[76248]: 2025-10-02 11:45:25.793023939 +0000 UTC m=+0.353940614 container create f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Oct  2 07:45:25 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1d1438bf3de6c240f49b3f8d950266bc2a30e683f3cbad5f0fed80c4238e08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:25 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1d1438bf3de6c240f49b3f8d950266bc2a30e683f3cbad5f0fed80c4238e08/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:25 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1d1438bf3de6c240f49b3f8d950266bc2a30e683f3cbad5f0fed80c4238e08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:25 np0005466030 podman[76248]: 2025-10-02 11:45:25.870029599 +0000 UTC m=+0.430946284 container init f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  2 07:45:25 np0005466030 podman[76248]: 2025-10-02 11:45:25.875321901 +0000 UTC m=+0.436238556 container start f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:45:25 np0005466030 bash[76248]: f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13
Oct  2 07:45:25 np0005466030 systemd[1]: Started Ceph crash.compute-1 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct  2 07:45:26 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1[76264]: INFO:ceph-crash:pinging cluster to exercise our key
Oct  2 07:45:26 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1[76264]: 2025-10-02T11:45:26.267+0000 7f6ed9021640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  2 07:45:26 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1[76264]: 2025-10-02T11:45:26.267+0000 7f6ed9021640 -1 AuthRegistry(0x7f6ed4067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  2 07:45:26 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1[76264]: 2025-10-02T11:45:26.268+0000 7f6ed9021640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  2 07:45:26 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1[76264]: 2025-10-02T11:45:26.268+0000 7f6ed9021640 -1 AuthRegistry(0x7f6ed9020000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  2 07:45:26 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1[76264]: 2025-10-02T11:45:26.271+0000 7f6ed2d76640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct  2 07:45:26 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1[76264]: 2025-10-02T11:45:26.271+0000 7f6ed9021640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct  2 07:45:26 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1[76264]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct  2 07:45:26 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1[76264]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct  2 07:45:26 np0005466030 podman[76420]: 2025-10-02 11:45:26.45588733 +0000 UTC m=+0.020511817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:45:26 np0005466030 podman[76420]: 2025-10-02 11:45:26.555037066 +0000 UTC m=+0.119661523 container create 8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_merkle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:45:26 np0005466030 systemd[1]: Started libpod-conmon-8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a.scope.
Oct  2 07:45:26 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:45:26 np0005466030 podman[76420]: 2025-10-02 11:45:26.672962525 +0000 UTC m=+0.237587002 container init 8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 07:45:26 np0005466030 podman[76420]: 2025-10-02 11:45:26.680611579 +0000 UTC m=+0.245236036 container start 8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_merkle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:45:26 np0005466030 eloquent_merkle[76437]: 167 167
Oct  2 07:45:26 np0005466030 systemd[1]: libpod-8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a.scope: Deactivated successfully.
Oct  2 07:45:26 np0005466030 conmon[76437]: conmon 8e546f436cb90b08eea5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a.scope/container/memory.events
Oct  2 07:45:26 np0005466030 podman[76420]: 2025-10-02 11:45:26.693470381 +0000 UTC m=+0.258094838 container attach 8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_merkle, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:45:26 np0005466030 podman[76420]: 2025-10-02 11:45:26.694750261 +0000 UTC m=+0.259374718 container died 8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_merkle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:45:26 np0005466030 systemd[1]: var-lib-containers-storage-overlay-920c3e546ac272887279f259713cf828e3e402e51da866ad1f3b0b68923470b6-merged.mount: Deactivated successfully.
Oct  2 07:45:26 np0005466030 podman[76420]: 2025-10-02 11:45:26.759169066 +0000 UTC m=+0.323793523 container remove 8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:45:26 np0005466030 systemd[1]: libpod-conmon-8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a.scope: Deactivated successfully.
Oct  2 07:45:26 np0005466030 podman[76461]: 2025-10-02 11:45:26.898350064 +0000 UTC m=+0.039678822 container create 33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 07:45:26 np0005466030 systemd[1]: Started libpod-conmon-33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38.scope.
Oct  2 07:45:26 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:45:26 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad878e37bdf8c22fbf2f2f2d95ca629fa95aa6a5b677b086ac022b4682b040f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:26 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad878e37bdf8c22fbf2f2f2d95ca629fa95aa6a5b677b086ac022b4682b040f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:26 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad878e37bdf8c22fbf2f2f2d95ca629fa95aa6a5b677b086ac022b4682b040f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:26 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad878e37bdf8c22fbf2f2f2d95ca629fa95aa6a5b677b086ac022b4682b040f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:26 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad878e37bdf8c22fbf2f2f2d95ca629fa95aa6a5b677b086ac022b4682b040f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:26 np0005466030 podman[76461]: 2025-10-02 11:45:26.973458616 +0000 UTC m=+0.114787394 container init 33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 07:45:26 np0005466030 podman[76461]: 2025-10-02 11:45:26.879428187 +0000 UTC m=+0.020756975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:45:26 np0005466030 podman[76461]: 2025-10-02 11:45:26.983603306 +0000 UTC m=+0.124932064 container start 33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nightingale, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  2 07:45:26 np0005466030 podman[76461]: 2025-10-02 11:45:26.986964169 +0000 UTC m=+0.128292927 container attach 33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nightingale, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  2 07:45:27 np0005466030 dreamy_nightingale[76477]: --> passed data devices: 0 physical, 1 LVM
Oct  2 07:45:27 np0005466030 dreamy_nightingale[76477]: --> relative data size: 1.0
Oct  2 07:45:27 np0005466030 dreamy_nightingale[76477]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  2 07:45:27 np0005466030 dreamy_nightingale[76477]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 6e4de194-9f54-490b-9be5-cb1e4c11649b
Oct  2 07:45:28 np0005466030 dreamy_nightingale[76477]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  2 07:45:28 np0005466030 dreamy_nightingale[76477]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct  2 07:45:28 np0005466030 dreamy_nightingale[76477]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct  2 07:45:28 np0005466030 lvm[76525]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 07:45:28 np0005466030 lvm[76525]: VG ceph_vg0 finished
Oct  2 07:45:28 np0005466030 dreamy_nightingale[76477]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  2 07:45:28 np0005466030 dreamy_nightingale[76477]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  2 07:45:28 np0005466030 dreamy_nightingale[76477]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct  2 07:45:28 np0005466030 dreamy_nightingale[76477]: stderr: got monmap epoch 1
Oct  2 07:45:28 np0005466030 dreamy_nightingale[76477]: --> Creating keyring file for osd.0
Oct  2 07:45:28 np0005466030 dreamy_nightingale[76477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct  2 07:45:28 np0005466030 dreamy_nightingale[76477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct  2 07:45:28 np0005466030 dreamy_nightingale[76477]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 6e4de194-9f54-490b-9be5-cb1e4c11649b --setuser ceph --setgroup ceph
Oct  2 07:45:31 np0005466030 dreamy_nightingale[76477]: stderr: 2025-10-02T11:45:28.949+0000 7f2652122740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  2 07:45:31 np0005466030 dreamy_nightingale[76477]: stderr: 2025-10-02T11:45:28.949+0000 7f2652122740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  2 07:45:31 np0005466030 dreamy_nightingale[76477]: stderr: 2025-10-02T11:45:28.949+0000 7f2652122740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  2 07:45:31 np0005466030 dreamy_nightingale[76477]: stderr: 2025-10-02T11:45:28.949+0000 7f2652122740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct  2 07:45:31 np0005466030 dreamy_nightingale[76477]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct  2 07:45:31 np0005466030 dreamy_nightingale[76477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  2 07:45:31 np0005466030 dreamy_nightingale[76477]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct  2 07:45:31 np0005466030 dreamy_nightingale[76477]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  2 07:45:31 np0005466030 dreamy_nightingale[76477]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct  2 07:45:31 np0005466030 dreamy_nightingale[76477]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  2 07:45:31 np0005466030 dreamy_nightingale[76477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  2 07:45:31 np0005466030 dreamy_nightingale[76477]: --> ceph-volume lvm activate successful for osd ID: 0
Oct  2 07:45:31 np0005466030 dreamy_nightingale[76477]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
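The `dreamy_nightingale` container walks through the standard `ceph-volume lvm create` flow: prepare (keygen, `osd new`, tmpfs mount, symlink `block`, fetch monmap, `ceph-osd --mkfs`) followed by activate (`prime-osd-dir`, re-link, chown). The interleaved `_read_bdev_label` stderr lines are expected on a first `--mkfs`, since the LV carries no BlueStore label yet. A small, hypothetical parser for journal lines in this format can recover the invoked command sequence for auditing:

```python
import re

# Matches ceph-volume's "Running command: <cmd> ..." journal lines
# (format taken from the log above; the parser itself is illustrative).
RUNNING_RE = re.compile(r"Running command: (\S+)")

def command_sequence(lines):
    """Return the executables ceph-volume invoked, in order."""
    return [m.group(1) for line in lines if (m := RUNNING_RE.search(line))]

sample = [
    "Running command: /usr/bin/ceph-authtool --gen-print-key",
    "Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0",
    "--> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0",
]
print(command_sequence(sample))
# → ['/usr/bin/ceph-authtool', '/usr/bin/mount']
```

Feeding it the full `dreamy_nightingale` output would reproduce the prepare/activate sequence shown above, which is useful when diffing OSD bring-up across nodes.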
Oct  2 07:45:31 np0005466030 systemd[1]: libpod-33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38.scope: Deactivated successfully.
Oct  2 07:45:31 np0005466030 systemd[1]: libpod-33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38.scope: Consumed 2.411s CPU time.
Oct  2 07:45:31 np0005466030 conmon[76477]: conmon 33415ab6b7e7f3a3e038 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38.scope/container/memory.events
Oct  2 07:45:31 np0005466030 podman[77444]: 2025-10-02 11:45:31.57368317 +0000 UTC m=+0.029441959 container died 33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nightingale, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  2 07:45:31 np0005466030 systemd[1]: var-lib-containers-storage-overlay-bad878e37bdf8c22fbf2f2f2d95ca629fa95aa6a5b677b086ac022b4682b040f-merged.mount: Deactivated successfully.
Oct  2 07:45:32 np0005466030 podman[77444]: 2025-10-02 11:45:32.036121284 +0000 UTC m=+0.491880023 container remove 33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nightingale, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:45:32 np0005466030 systemd[1]: libpod-conmon-33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38.scope: Deactivated successfully.
Oct  2 07:45:32 np0005466030 podman[77599]: 2025-10-02 11:45:32.620659754 +0000 UTC m=+0.066705056 container create fa98fff69ae2405ea21e2c3833fcd13a0e8e2aba7e14f6f21792fb5c38475101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gagarin, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 07:45:32 np0005466030 podman[77599]: 2025-10-02 11:45:32.579417716 +0000 UTC m=+0.025463108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:45:32 np0005466030 systemd[1]: Started libpod-conmon-fa98fff69ae2405ea21e2c3833fcd13a0e8e2aba7e14f6f21792fb5c38475101.scope.
Oct  2 07:45:32 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:45:32 np0005466030 podman[77599]: 2025-10-02 11:45:32.745373951 +0000 UTC m=+0.191419283 container init fa98fff69ae2405ea21e2c3833fcd13a0e8e2aba7e14f6f21792fb5c38475101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gagarin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  2 07:45:32 np0005466030 podman[77599]: 2025-10-02 11:45:32.753344784 +0000 UTC m=+0.199390096 container start fa98fff69ae2405ea21e2c3833fcd13a0e8e2aba7e14f6f21792fb5c38475101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 07:45:32 np0005466030 gallant_gagarin[77616]: 167 167
Oct  2 07:45:32 np0005466030 podman[77599]: 2025-10-02 11:45:32.757223562 +0000 UTC m=+0.203268894 container attach fa98fff69ae2405ea21e2c3833fcd13a0e8e2aba7e14f6f21792fb5c38475101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gagarin, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:45:32 np0005466030 systemd[1]: libpod-fa98fff69ae2405ea21e2c3833fcd13a0e8e2aba7e14f6f21792fb5c38475101.scope: Deactivated successfully.
Oct  2 07:45:32 np0005466030 podman[77599]: 2025-10-02 11:45:32.759709559 +0000 UTC m=+0.205754871 container died fa98fff69ae2405ea21e2c3833fcd13a0e8e2aba7e14f6f21792fb5c38475101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:45:32 np0005466030 systemd[1]: var-lib-containers-storage-overlay-d150a0381db1a2c5ad16dd209bc4dbb21a1e4783e6a05e5b0a5c0bc0aa43c120-merged.mount: Deactivated successfully.
Oct  2 07:45:32 np0005466030 podman[77599]: 2025-10-02 11:45:32.798256305 +0000 UTC m=+0.244301607 container remove fa98fff69ae2405ea21e2c3833fcd13a0e8e2aba7e14f6f21792fb5c38475101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gagarin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 07:45:32 np0005466030 systemd[1]: libpod-conmon-fa98fff69ae2405ea21e2c3833fcd13a0e8e2aba7e14f6f21792fb5c38475101.scope: Deactivated successfully.
Oct  2 07:45:32 np0005466030 podman[77640]: 2025-10-02 11:45:32.943412915 +0000 UTC m=+0.038949279 container create a81680453c8f6dc919b8ce022a0e9f61a5be4b3be847dcdb98788f448db62cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shirley, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:45:32 np0005466030 systemd[1]: Started libpod-conmon-a81680453c8f6dc919b8ce022a0e9f61a5be4b3be847dcdb98788f448db62cf3.scope.
Oct  2 07:45:33 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:45:33 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/374c3f4a1e5facfc8aa879624c7938d7e1be233d93b160fa69748384941d46be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:33 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/374c3f4a1e5facfc8aa879624c7938d7e1be233d93b160fa69748384941d46be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:33 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/374c3f4a1e5facfc8aa879624c7938d7e1be233d93b160fa69748384941d46be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:33 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/374c3f4a1e5facfc8aa879624c7938d7e1be233d93b160fa69748384941d46be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:33 np0005466030 podman[77640]: 2025-10-02 11:45:32.925031515 +0000 UTC m=+0.020567889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:45:33 np0005466030 podman[77640]: 2025-10-02 11:45:33.030411881 +0000 UTC m=+0.125948255 container init a81680453c8f6dc919b8ce022a0e9f61a5be4b3be847dcdb98788f448db62cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  2 07:45:33 np0005466030 podman[77640]: 2025-10-02 11:45:33.038662272 +0000 UTC m=+0.134198626 container start a81680453c8f6dc919b8ce022a0e9f61a5be4b3be847dcdb98788f448db62cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shirley, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:45:33 np0005466030 podman[77640]: 2025-10-02 11:45:33.04282887 +0000 UTC m=+0.138365224 container attach a81680453c8f6dc919b8ce022a0e9f61a5be4b3be847dcdb98788f448db62cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]: {
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:    "0": [
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:        {
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:            "devices": [
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:                "/dev/loop3"
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:            ],
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:            "lv_name": "ceph_lv0",
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:            "lv_size": "7511998464",
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=NOlLw0-B6eL-n4qO-Nw8l-35dX-ZVQD-2catnQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6e4de194-9f54-490b-9be5-cb1e4c11649b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:            "lv_uuid": "NOlLw0-B6eL-n4qO-Nw8l-35dX-ZVQD-2catnQ",
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:            "name": "ceph_lv0",
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:            "tags": {
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:                "ceph.block_uuid": "NOlLw0-B6eL-n4qO-Nw8l-35dX-ZVQD-2catnQ",
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:                "ceph.cephx_lockbox_secret": "",
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:                "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:                "ceph.cluster_name": "ceph",
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:                "ceph.crush_device_class": "",
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:                "ceph.encrypted": "0",
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:                "ceph.osd_fsid": "6e4de194-9f54-490b-9be5-cb1e4c11649b",
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:                "ceph.osd_id": "0",
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:                "ceph.type": "block",
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:                "ceph.vdo": "0"
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:            },
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:            "type": "block",
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:            "vg_name": "ceph_vg0"
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:        }
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]:    ]
Oct  2 07:45:33 np0005466030 relaxed_shirley[77657]: }
Oct  2 07:45:33 np0005466030 systemd[1]: libpod-a81680453c8f6dc919b8ce022a0e9f61a5be4b3be847dcdb98788f448db62cf3.scope: Deactivated successfully.
Oct  2 07:45:33 np0005466030 podman[77640]: 2025-10-02 11:45:33.905626873 +0000 UTC m=+1.001163227 container died a81680453c8f6dc919b8ce022a0e9f61a5be4b3be847dcdb98788f448db62cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 07:45:33 np0005466030 systemd[1]: var-lib-containers-storage-overlay-374c3f4a1e5facfc8aa879624c7938d7e1be233d93b160fa69748384941d46be-merged.mount: Deactivated successfully.
Oct  2 07:45:33 np0005466030 podman[77640]: 2025-10-02 11:45:33.984700696 +0000 UTC m=+1.080237050 container remove a81680453c8f6dc919b8ce022a0e9f61a5be4b3be847dcdb98788f448db62cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shirley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:45:33 np0005466030 systemd[1]: libpod-conmon-a81680453c8f6dc919b8ce022a0e9f61a5be4b3be847dcdb98788f448db62cf3.scope: Deactivated successfully.
Oct  2 07:45:34 np0005466030 podman[77821]: 2025-10-02 11:45:34.656082627 +0000 UTC m=+0.037121043 container create b705583eec38c42611e39f1f69782e9f9e65ccd0d1718cd2a283d5f55ca389f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:45:34 np0005466030 systemd[1]: Started libpod-conmon-b705583eec38c42611e39f1f69782e9f9e65ccd0d1718cd2a283d5f55ca389f1.scope.
Oct  2 07:45:34 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:45:34 np0005466030 podman[77821]: 2025-10-02 11:45:34.733998136 +0000 UTC m=+0.115036592 container init b705583eec38c42611e39f1f69782e9f9e65ccd0d1718cd2a283d5f55ca389f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  2 07:45:34 np0005466030 podman[77821]: 2025-10-02 11:45:34.640524482 +0000 UTC m=+0.021562938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:45:34 np0005466030 podman[77821]: 2025-10-02 11:45:34.740417402 +0000 UTC m=+0.121455828 container start b705583eec38c42611e39f1f69782e9f9e65ccd0d1718cd2a283d5f55ca389f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_neumann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 07:45:34 np0005466030 agitated_neumann[77837]: 167 167
Oct  2 07:45:34 np0005466030 systemd[1]: libpod-b705583eec38c42611e39f1f69782e9f9e65ccd0d1718cd2a283d5f55ca389f1.scope: Deactivated successfully.
Oct  2 07:45:34 np0005466030 podman[77821]: 2025-10-02 11:45:34.745887358 +0000 UTC m=+0.126925804 container attach b705583eec38c42611e39f1f69782e9f9e65ccd0d1718cd2a283d5f55ca389f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_neumann, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:45:34 np0005466030 podman[77821]: 2025-10-02 11:45:34.746903259 +0000 UTC m=+0.127941705 container died b705583eec38c42611e39f1f69782e9f9e65ccd0d1718cd2a283d5f55ca389f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:45:34 np0005466030 systemd[1]: var-lib-containers-storage-overlay-a3a500e9f629486f9ced20a87e07ee8d1dd82f45325ffabdcbf1eca82f5ffc95-merged.mount: Deactivated successfully.
Oct  2 07:45:34 np0005466030 podman[77821]: 2025-10-02 11:45:34.787922982 +0000 UTC m=+0.168961398 container remove b705583eec38c42611e39f1f69782e9f9e65ccd0d1718cd2a283d5f55ca389f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_neumann, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:45:34 np0005466030 systemd[1]: libpod-conmon-b705583eec38c42611e39f1f69782e9f9e65ccd0d1718cd2a283d5f55ca389f1.scope: Deactivated successfully.
Oct  2 07:45:35 np0005466030 podman[77869]: 2025-10-02 11:45:35.041357277 +0000 UTC m=+0.043204070 container create feb88d817d25190298955c1a44bfa59f96f0bd189fd3847b1bb87013c3f245a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate-test, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:45:35 np0005466030 systemd[1]: Started libpod-conmon-feb88d817d25190298955c1a44bfa59f96f0bd189fd3847b1bb87013c3f245a5.scope.
Oct  2 07:45:35 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:45:35 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f43ce7d33fb842b5c8c27d52417032294a43a1996733d693c1ed06ae5a856c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:35 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f43ce7d33fb842b5c8c27d52417032294a43a1996733d693c1ed06ae5a856c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:35 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f43ce7d33fb842b5c8c27d52417032294a43a1996733d693c1ed06ae5a856c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:35 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f43ce7d33fb842b5c8c27d52417032294a43a1996733d693c1ed06ae5a856c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:35 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f43ce7d33fb842b5c8c27d52417032294a43a1996733d693c1ed06ae5a856c/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:35 np0005466030 podman[77869]: 2025-10-02 11:45:35.019332954 +0000 UTC m=+0.021179787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:45:35 np0005466030 podman[77869]: 2025-10-02 11:45:35.123608027 +0000 UTC m=+0.125454840 container init feb88d817d25190298955c1a44bfa59f96f0bd189fd3847b1bb87013c3f245a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate-test, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:45:35 np0005466030 podman[77869]: 2025-10-02 11:45:35.131934991 +0000 UTC m=+0.133781784 container start feb88d817d25190298955c1a44bfa59f96f0bd189fd3847b1bb87013c3f245a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:45:35 np0005466030 podman[77869]: 2025-10-02 11:45:35.137502511 +0000 UTC m=+0.139349324 container attach feb88d817d25190298955c1a44bfa59f96f0bd189fd3847b1bb87013c3f245a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate-test, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:45:35 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate-test[77885]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct  2 07:45:35 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate-test[77885]:                            [--no-systemd] [--no-tmpfs]
Oct  2 07:45:35 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate-test[77885]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  2 07:45:35 np0005466030 systemd[1]: libpod-feb88d817d25190298955c1a44bfa59f96f0bd189fd3847b1bb87013c3f245a5.scope: Deactivated successfully.
Oct  2 07:45:35 np0005466030 podman[77869]: 2025-10-02 11:45:35.782560729 +0000 UTC m=+0.784407532 container died feb88d817d25190298955c1a44bfa59f96f0bd189fd3847b1bb87013c3f245a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate-test, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  2 07:45:35 np0005466030 systemd[1]: var-lib-containers-storage-overlay-e6f43ce7d33fb842b5c8c27d52417032294a43a1996733d693c1ed06ae5a856c-merged.mount: Deactivated successfully.
Oct  2 07:45:35 np0005466030 podman[77869]: 2025-10-02 11:45:35.848047898 +0000 UTC m=+0.849894691 container remove feb88d817d25190298955c1a44bfa59f96f0bd189fd3847b1bb87013c3f245a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct  2 07:45:35 np0005466030 systemd[1]: libpod-conmon-feb88d817d25190298955c1a44bfa59f96f0bd189fd3847b1bb87013c3f245a5.scope: Deactivated successfully.
Oct  2 07:45:36 np0005466030 systemd[1]: Reloading.
Oct  2 07:45:36 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:45:36 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:45:36 np0005466030 systemd[1]: Reloading.
Oct  2 07:45:36 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:45:36 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:45:36 np0005466030 systemd[1]: Starting Ceph osd.0 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct  2 07:45:36 np0005466030 podman[78050]: 2025-10-02 11:45:36.909450972 +0000 UTC m=+0.039996211 container create 7a287feff2035f58456e18e1c6870665b718df3fbabdc79cc3774df40c675e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:45:36 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:45:36 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc27f0cdee14262d1d6c1463d735dcb8e09bb5a2d16d46ffeb230a5f7e49d95b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:36 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc27f0cdee14262d1d6c1463d735dcb8e09bb5a2d16d46ffeb230a5f7e49d95b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:36 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc27f0cdee14262d1d6c1463d735dcb8e09bb5a2d16d46ffeb230a5f7e49d95b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:36 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc27f0cdee14262d1d6c1463d735dcb8e09bb5a2d16d46ffeb230a5f7e49d95b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:36 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc27f0cdee14262d1d6c1463d735dcb8e09bb5a2d16d46ffeb230a5f7e49d95b/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:36 np0005466030 podman[78050]: 2025-10-02 11:45:36.978367735 +0000 UTC m=+0.108912994 container init 7a287feff2035f58456e18e1c6870665b718df3fbabdc79cc3774df40c675e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 07:45:36 np0005466030 podman[78050]: 2025-10-02 11:45:36.890665719 +0000 UTC m=+0.021210988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:45:36 np0005466030 podman[78050]: 2025-10-02 11:45:36.986986519 +0000 UTC m=+0.117531758 container start 7a287feff2035f58456e18e1c6870665b718df3fbabdc79cc3774df40c675e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:45:36 np0005466030 podman[78050]: 2025-10-02 11:45:36.989937579 +0000 UTC m=+0.120482818 container attach 7a287feff2035f58456e18e1c6870665b718df3fbabdc79cc3774df40c675e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:45:37 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate[78065]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  2 07:45:37 np0005466030 bash[78050]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  2 07:45:37 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate[78065]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct  2 07:45:37 np0005466030 bash[78050]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct  2 07:45:37 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate[78065]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct  2 07:45:37 np0005466030 bash[78050]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct  2 07:45:37 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate[78065]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  2 07:45:37 np0005466030 bash[78050]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  2 07:45:37 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate[78065]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  2 07:45:37 np0005466030 bash[78050]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  2 07:45:37 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate[78065]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  2 07:45:37 np0005466030 bash[78050]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  2 07:45:37 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate[78065]: --> ceph-volume raw activate successful for osd ID: 0
Oct  2 07:45:37 np0005466030 bash[78050]: --> ceph-volume raw activate successful for osd ID: 0
Oct  2 07:45:37 np0005466030 systemd[1]: libpod-7a287feff2035f58456e18e1c6870665b718df3fbabdc79cc3774df40c675e8b.scope: Deactivated successfully.
Oct  2 07:45:37 np0005466030 podman[78050]: 2025-10-02 11:45:37.877696354 +0000 UTC m=+1.008241603 container died 7a287feff2035f58456e18e1c6870665b718df3fbabdc79cc3774df40c675e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:45:37 np0005466030 systemd[1]: var-lib-containers-storage-overlay-fc27f0cdee14262d1d6c1463d735dcb8e09bb5a2d16d46ffeb230a5f7e49d95b-merged.mount: Deactivated successfully.
Oct  2 07:45:37 np0005466030 podman[78050]: 2025-10-02 11:45:37.924635817 +0000 UTC m=+1.055181056 container remove 7a287feff2035f58456e18e1c6870665b718df3fbabdc79cc3774df40c675e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  2 07:45:38 np0005466030 podman[78242]: 2025-10-02 11:45:38.132340456 +0000 UTC m=+0.037768084 container create 284fc2c9a45dca72d4af3b8a44ba1a2846a1ba4489e2b844019e9d39cb93ceef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:45:38 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e670487e98e9176891ddc35e4011afc8199ee0801be7c59c3d0883412e44ca88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:38 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e670487e98e9176891ddc35e4011afc8199ee0801be7c59c3d0883412e44ca88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:38 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e670487e98e9176891ddc35e4011afc8199ee0801be7c59c3d0883412e44ca88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:38 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e670487e98e9176891ddc35e4011afc8199ee0801be7c59c3d0883412e44ca88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:38 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e670487e98e9176891ddc35e4011afc8199ee0801be7c59c3d0883412e44ca88/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:38 np0005466030 podman[78242]: 2025-10-02 11:45:38.116268016 +0000 UTC m=+0.021695664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:45:38 np0005466030 podman[78242]: 2025-10-02 11:45:38.221329352 +0000 UTC m=+0.126757010 container init 284fc2c9a45dca72d4af3b8a44ba1a2846a1ba4489e2b844019e9d39cb93ceef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct  2 07:45:38 np0005466030 podman[78242]: 2025-10-02 11:45:38.239384723 +0000 UTC m=+0.144812371 container start 284fc2c9a45dca72d4af3b8a44ba1a2846a1ba4489e2b844019e9d39cb93ceef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:45:38 np0005466030 bash[78242]: 284fc2c9a45dca72d4af3b8a44ba1a2846a1ba4489e2b844019e9d39cb93ceef
Oct  2 07:45:38 np0005466030 systemd[1]: Started Ceph osd.0 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: pidfile_write: ignore empty --pid-file
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: bdev(0x559433635800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: bdev(0x559433635800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: bdev(0x559433635800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: bdev(0x559434477800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: bdev(0x559434477800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: bdev(0x559434477800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: bdev(0x559434477800 /var/lib/ceph/osd/ceph-0/block) close
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: bdev(0x559433635800 /var/lib/ceph/osd/ceph-0/block) close
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: load: jerasure load: lrc 
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 07:45:38 np0005466030 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) close
Oct  2 07:45:38 np0005466030 podman[78423]: 2025-10-02 11:45:38.940685827 +0000 UTC m=+0.037693031 container create 7b7ec1e5541ae54095277593ec9e997b9bbf6aeb3c9aae700b5526e96c0c61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curran, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  2 07:45:38 np0005466030 systemd[1]: Started libpod-conmon-7b7ec1e5541ae54095277593ec9e997b9bbf6aeb3c9aae700b5526e96c0c61f6.scope.
Oct  2 07:45:38 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:45:39 np0005466030 podman[78423]: 2025-10-02 11:45:39.00860471 +0000 UTC m=+0.105611934 container init 7b7ec1e5541ae54095277593ec9e997b9bbf6aeb3c9aae700b5526e96c0c61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 07:45:39 np0005466030 podman[78423]: 2025-10-02 11:45:39.01482408 +0000 UTC m=+0.111831284 container start 7b7ec1e5541ae54095277593ec9e997b9bbf6aeb3c9aae700b5526e96c0c61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curran, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:45:39 np0005466030 podman[78423]: 2025-10-02 11:45:39.018075779 +0000 UTC m=+0.115083013 container attach 7b7ec1e5541ae54095277593ec9e997b9bbf6aeb3c9aae700b5526e96c0c61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curran, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  2 07:45:39 np0005466030 naughty_curran[78440]: 167 167
Oct  2 07:45:39 np0005466030 systemd[1]: libpod-7b7ec1e5541ae54095277593ec9e997b9bbf6aeb3c9aae700b5526e96c0c61f6.scope: Deactivated successfully.
Oct  2 07:45:39 np0005466030 podman[78423]: 2025-10-02 11:45:39.02103932 +0000 UTC m=+0.118046524 container died 7b7ec1e5541ae54095277593ec9e997b9bbf6aeb3c9aae700b5526e96c0c61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:45:39 np0005466030 podman[78423]: 2025-10-02 11:45:38.925446832 +0000 UTC m=+0.022454056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:45:39 np0005466030 systemd[1]: var-lib-containers-storage-overlay-c79364f72c8a10f58de88b20c2e9c30a25b389ae38d33567e2cd8768cedcdc89-merged.mount: Deactivated successfully.
Oct  2 07:45:39 np0005466030 podman[78423]: 2025-10-02 11:45:39.058230085 +0000 UTC m=+0.155237289 container remove 7b7ec1e5541ae54095277593ec9e997b9bbf6aeb3c9aae700b5526e96c0c61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  2 07:45:39 np0005466030 systemd[1]: libpod-conmon-7b7ec1e5541ae54095277593ec9e997b9bbf6aeb3c9aae700b5526e96c0c61f6.scope: Deactivated successfully.
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) close
Oct  2 07:45:39 np0005466030 podman[78468]: 2025-10-02 11:45:39.207673526 +0000 UTC m=+0.044129918 container create 4acef55898d38d2ac4a55f9ad886b8aca274e9a69556ccc77ff1e1064bea5ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:45:39 np0005466030 systemd[1]: Started libpod-conmon-4acef55898d38d2ac4a55f9ad886b8aca274e9a69556ccc77ff1e1064bea5ba1.scope.
Oct  2 07:45:39 np0005466030 podman[78468]: 2025-10-02 11:45:39.186769268 +0000 UTC m=+0.023225690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:45:39 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:45:39 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdfc13959ba2aaf0b5085ef239856fe0307d23f441e223473a417ec8ec292a2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:39 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdfc13959ba2aaf0b5085ef239856fe0307d23f441e223473a417ec8ec292a2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:39 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdfc13959ba2aaf0b5085ef239856fe0307d23f441e223473a417ec8ec292a2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:39 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdfc13959ba2aaf0b5085ef239856fe0307d23f441e223473a417ec8ec292a2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:39 np0005466030 podman[78468]: 2025-10-02 11:45:39.307798132 +0000 UTC m=+0.144254534 container init 4acef55898d38d2ac4a55f9ad886b8aca274e9a69556ccc77ff1e1064bea5ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 07:45:39 np0005466030 podman[78468]: 2025-10-02 11:45:39.315829197 +0000 UTC m=+0.152285589 container start 4acef55898d38d2ac4a55f9ad886b8aca274e9a69556ccc77ff1e1064bea5ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:45:39 np0005466030 podman[78468]: 2025-10-02 11:45:39.321057767 +0000 UTC m=+0.157514169 container attach 4acef55898d38d2ac4a55f9ad886b8aca274e9a69556ccc77ff1e1064bea5ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bdev(0x5594344f9400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bdev(0x5594344f9400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bdev(0x5594344f9400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluefs mount
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluefs mount shared_bdev_used = 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: RocksDB version: 7.9.2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Git sha 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: DB SUMMARY
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: DB Session ID:  BRAAO5BX4M9V7H9L0YQQ
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: CURRENT file:  CURRENT
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: IDENTITY file:  IDENTITY
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                         Options.error_if_exists: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.create_if_missing: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                         Options.paranoid_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                                     Options.env: 0x5594344c9c70
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                                Options.info_log: 0x5594336b2ba0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_file_opening_threads: 16
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                              Options.statistics: (nil)
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.use_fsync: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.max_log_file_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                         Options.allow_fallocate: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.use_direct_reads: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.create_missing_column_families: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                              Options.db_log_dir: 
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                                 Options.wal_dir: db.wal
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.advise_random_on_open: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.write_buffer_manager: 0x5594345d2460
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                            Options.rate_limiter: (nil)
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.unordered_write: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.row_cache: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                              Options.wal_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.allow_ingest_behind: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.two_write_queues: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.manual_wal_flush: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.wal_compression: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.atomic_flush: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.log_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.allow_data_in_errors: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.db_host_id: __hostname__
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.max_background_jobs: 4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.max_background_compactions: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.max_subcompactions: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.max_open_files: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.bytes_per_sync: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.max_background_flushes: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Compression algorithms supported:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: 	kZSTD supported: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: 	kXpressCompression supported: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: 	kBZip2Compression supported: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: 	kLZ4Compression supported: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: 	kZlibCompression supported: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: 	kLZ4HCCompression supported: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: 	kSnappyCompression supported: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b2600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5594336a8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b2600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5594336a8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b2600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5594336a8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b2600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5594336a8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b2600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5594336a8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b2600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5594336a8dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b2600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5594336a8dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b25c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5594336a8430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b25c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5594336a8430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b25c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5594336a8430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: dc3ffa87-f2ef-46c3-8e65-ad4082adbcb2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539370770, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539370955, "job": 1, "event": "recovery_finished"}
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: freelist init
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: freelist _read_cfg
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluefs umount
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bdev(0x5594344f9400 /var/lib/ceph/osd/ceph-0/block) close
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bdev(0x5594344f9400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bdev(0x5594344f9400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bdev(0x5594344f9400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluefs mount
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluefs mount shared_bdev_used = 4718592
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: RocksDB version: 7.9.2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Git sha 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: DB SUMMARY
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: DB Session ID:  BRAAO5BX4M9V7H9L0YQR
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: CURRENT file:  CURRENT
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: IDENTITY file:  IDENTITY
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                         Options.error_if_exists: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.create_if_missing: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                         Options.paranoid_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                                     Options.env: 0x5594336f4690
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                                Options.info_log: 0x5594336b38a0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_file_opening_threads: 16
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                              Options.statistics: (nil)
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.use_fsync: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.max_log_file_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                         Options.allow_fallocate: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.use_direct_reads: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.create_missing_column_families: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                              Options.db_log_dir: 
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                                 Options.wal_dir: db.wal
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.advise_random_on_open: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.write_buffer_manager: 0x5594345d2460
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                            Options.rate_limiter: (nil)
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.unordered_write: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.row_cache: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                              Options.wal_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.allow_ingest_behind: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.two_write_queues: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.manual_wal_flush: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.wal_compression: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.atomic_flush: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.log_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.allow_data_in_errors: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.db_host_id: __hostname__
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.max_background_jobs: 4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.max_background_compactions: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.max_subcompactions: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.max_open_files: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.bytes_per_sync: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.max_background_flushes: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Compression algorithms supported:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: 	kZSTD supported: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: 	kXpressCompression supported: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: 	kBZip2Compression supported: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: 	kLZ4Compression supported: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: 	kZlibCompression supported: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: 	kLZ4HCCompression supported: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: 	kSnappyCompression supported: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55943368fb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5594336a9610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55943368fb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5594336a9610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55943368fb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5594336a9610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55943368fb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5594336a9610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55943368fb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5594336a9610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55943368fb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5594336a9610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55943368fb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5594336a9610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b3e40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5594336a9770
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b3e40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5594336a9770
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b3e40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5594336a9770
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: dc3ffa87-f2ef-46c3-8e65-ad4082adbcb2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539641203, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539658499, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405539, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc3ffa87-f2ef-46c3-8e65-ad4082adbcb2", "db_session_id": "BRAAO5BX4M9V7H9L0YQR", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539662597, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405539, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc3ffa87-f2ef-46c3-8e65-ad4082adbcb2", "db_session_id": "BRAAO5BX4M9V7H9L0YQR", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539665419, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405539, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc3ffa87-f2ef-46c3-8e65-ad4082adbcb2", "db_session_id": "BRAAO5BX4M9V7H9L0YQR", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539666993, "job": 1, "event": "recovery_finished"}
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55943443fc00
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: DB pointer 0x5594345bba00
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: _get_class not permitted to load lua
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: _get_class not permitted to load sdk
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: _get_class not permitted to load test_remote_reads
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: osd.0 0 load_pgs
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: osd.0 0 load_pgs opened 0 pgs
Oct  2 07:45:39 np0005466030 ceph-osd[78262]: osd.0 0 log_to_monitors true
Oct  2 07:45:39 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0[78258]: 2025-10-02T11:45:39.737+0000 7f15232fa740 -1 osd.0 0 log_to_monitors true
Oct  2 07:45:40 np0005466030 modest_lumiere[78485]: {
Oct  2 07:45:40 np0005466030 modest_lumiere[78485]:    "6e4de194-9f54-490b-9be5-cb1e4c11649b": {
Oct  2 07:45:40 np0005466030 modest_lumiere[78485]:        "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct  2 07:45:40 np0005466030 modest_lumiere[78485]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  2 07:45:40 np0005466030 modest_lumiere[78485]:        "osd_id": 0,
Oct  2 07:45:40 np0005466030 modest_lumiere[78485]:        "osd_uuid": "6e4de194-9f54-490b-9be5-cb1e4c11649b",
Oct  2 07:45:40 np0005466030 modest_lumiere[78485]:        "type": "bluestore"
Oct  2 07:45:40 np0005466030 modest_lumiere[78485]:    }
Oct  2 07:45:40 np0005466030 modest_lumiere[78485]: }
Oct  2 07:45:40 np0005466030 systemd[1]: libpod-4acef55898d38d2ac4a55f9ad886b8aca274e9a69556ccc77ff1e1064bea5ba1.scope: Deactivated successfully.
Oct  2 07:45:40 np0005466030 podman[78468]: 2025-10-02 11:45:40.202024134 +0000 UTC m=+1.038480526 container died 4acef55898d38d2ac4a55f9ad886b8aca274e9a69556ccc77ff1e1064bea5ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 07:45:40 np0005466030 systemd[1]: var-lib-containers-storage-overlay-cdfc13959ba2aaf0b5085ef239856fe0307d23f441e223473a417ec8ec292a2b-merged.mount: Deactivated successfully.
Oct  2 07:45:40 np0005466030 podman[78468]: 2025-10-02 11:45:40.25726041 +0000 UTC m=+1.093716802 container remove 4acef55898d38d2ac4a55f9ad886b8aca274e9a69556ccc77ff1e1064bea5ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:45:40 np0005466030 systemd[1]: libpod-conmon-4acef55898d38d2ac4a55f9ad886b8aca274e9a69556ccc77ff1e1064bea5ba1.scope: Deactivated successfully.
Oct  2 07:45:40 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  2 07:45:40 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  2 07:45:41 np0005466030 podman[79152]: 2025-10-02 11:45:41.279681535 +0000 UTC m=+0.050921095 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:45:41 np0005466030 ceph-osd[78262]: osd.0 0 done with init, starting boot process
Oct  2 07:45:41 np0005466030 ceph-osd[78262]: osd.0 0 start_boot
Oct  2 07:45:41 np0005466030 ceph-osd[78262]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  2 07:45:41 np0005466030 ceph-osd[78262]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  2 07:45:41 np0005466030 ceph-osd[78262]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  2 07:45:41 np0005466030 ceph-osd[78262]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  2 07:45:41 np0005466030 ceph-osd[78262]: osd.0 0  bench count 12288000 bsize 4 KiB
Oct  2 07:45:41 np0005466030 podman[79152]: 2025-10-02 11:45:41.466061314 +0000 UTC m=+0.237300854 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 07:45:42 np0005466030 podman[79341]: 2025-10-02 11:45:42.157832917 +0000 UTC m=+0.035541925 container create 49ff69093038616a1baac832893049990b306ea54cbec1606917e01aba0f5def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:45:42 np0005466030 systemd[1]: Started libpod-conmon-49ff69093038616a1baac832893049990b306ea54cbec1606917e01aba0f5def.scope.
Oct  2 07:45:42 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:45:42 np0005466030 podman[79341]: 2025-10-02 11:45:42.140260821 +0000 UTC m=+0.017969769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:45:42 np0005466030 podman[79341]: 2025-10-02 11:45:42.784858755 +0000 UTC m=+0.662567713 container init 49ff69093038616a1baac832893049990b306ea54cbec1606917e01aba0f5def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_tu, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:45:42 np0005466030 podman[79341]: 2025-10-02 11:45:42.792081415 +0000 UTC m=+0.669790333 container start 49ff69093038616a1baac832893049990b306ea54cbec1606917e01aba0f5def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_tu, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:45:42 np0005466030 epic_tu[79357]: 167 167
Oct  2 07:45:42 np0005466030 systemd[1]: libpod-49ff69093038616a1baac832893049990b306ea54cbec1606917e01aba0f5def.scope: Deactivated successfully.
Oct  2 07:45:42 np0005466030 podman[79341]: 2025-10-02 11:45:42.807862257 +0000 UTC m=+0.685571195 container attach 49ff69093038616a1baac832893049990b306ea54cbec1606917e01aba0f5def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  2 07:45:42 np0005466030 podman[79341]: 2025-10-02 11:45:42.808628041 +0000 UTC m=+0.686336959 container died 49ff69093038616a1baac832893049990b306ea54cbec1606917e01aba0f5def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_tu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:45:42 np0005466030 systemd[1]: var-lib-containers-storage-overlay-5aa1e38a254794e44c4daf7b95e6b6b51b74f7f9a82639d7693c0761288b8aae-merged.mount: Deactivated successfully.
Oct  2 07:45:42 np0005466030 podman[79341]: 2025-10-02 11:45:42.891656024 +0000 UTC m=+0.769364942 container remove 49ff69093038616a1baac832893049990b306ea54cbec1606917e01aba0f5def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_tu, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct  2 07:45:42 np0005466030 systemd[1]: libpod-conmon-49ff69093038616a1baac832893049990b306ea54cbec1606917e01aba0f5def.scope: Deactivated successfully.
Oct  2 07:45:43 np0005466030 podman[79383]: 2025-10-02 11:45:43.038759854 +0000 UTC m=+0.046315544 container create d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 07:45:43 np0005466030 systemd[1]: Started libpod-conmon-d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe.scope.
Oct  2 07:45:43 np0005466030 podman[79383]: 2025-10-02 11:45:43.01175138 +0000 UTC m=+0.019307080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:45:43 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:45:43 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54b0dc58f0badb2b884074c08e98a8525bf91edfb214c5e14c0ab1b91dc3e4af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:43 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54b0dc58f0badb2b884074c08e98a8525bf91edfb214c5e14c0ab1b91dc3e4af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:43 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54b0dc58f0badb2b884074c08e98a8525bf91edfb214c5e14c0ab1b91dc3e4af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:43 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54b0dc58f0badb2b884074c08e98a8525bf91edfb214c5e14c0ab1b91dc3e4af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:45:43 np0005466030 podman[79383]: 2025-10-02 11:45:43.131707591 +0000 UTC m=+0.139263301 container init d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 07:45:43 np0005466030 podman[79383]: 2025-10-02 11:45:43.13723242 +0000 UTC m=+0.144788110 container start d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:45:43 np0005466030 podman[79383]: 2025-10-02 11:45:43.144567973 +0000 UTC m=+0.152123653 container attach d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]: [
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:    {
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:        "available": false,
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:        "ceph_device": false,
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:        "lsm_data": {},
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:        "lvs": [],
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:        "path": "/dev/sr0",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:        "rejected_reasons": [
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "Has a FileSystem",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "Insufficient space (<5GB)"
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:        ],
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:        "sys_api": {
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "actuators": null,
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "device_nodes": "sr0",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "devname": "sr0",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "human_readable_size": "482.00 KB",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "id_bus": "ata",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "model": "QEMU DVD-ROM",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "nr_requests": "2",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "parent": "/dev/sr0",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "partitions": {},
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "path": "/dev/sr0",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "removable": "1",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "rev": "2.5+",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "ro": "0",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "rotational": "0",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "sas_address": "",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "sas_device_handle": "",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "scheduler_mode": "mq-deadline",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "sectors": 0,
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "sectorsize": "2048",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "size": 493568.0,
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "support_discard": "2048",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "type": "disk",
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:            "vendor": "QEMU"
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:        }
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]:    }
Oct  2 07:45:44 np0005466030 practical_heyrovsky[79399]: ]
Oct  2 07:45:44 np0005466030 systemd[1]: libpod-d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe.scope: Deactivated successfully.
Oct  2 07:45:44 np0005466030 systemd[1]: libpod-d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe.scope: Consumed 1.106s CPU time.
Oct  2 07:45:44 np0005466030 podman[80537]: 2025-10-02 11:45:44.281402271 +0000 UTC m=+0.022836288 container died d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heyrovsky, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:45:44 np0005466030 systemd[1]: var-lib-containers-storage-overlay-54b0dc58f0badb2b884074c08e98a8525bf91edfb214c5e14c0ab1b91dc3e4af-merged.mount: Deactivated successfully.
Oct  2 07:45:44 np0005466030 podman[80537]: 2025-10-02 11:45:44.433851343 +0000 UTC m=+0.175285310 container remove d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:45:44 np0005466030 systemd[1]: libpod-conmon-d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe.scope: Deactivated successfully.
Oct  2 07:45:44 np0005466030 ceph-osd[78262]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 27.497 iops: 7039.207 elapsed_sec: 0.426
Oct  2 07:45:44 np0005466030 ceph-osd[78262]: log_channel(cluster) log [WRN] : OSD bench result of 7039.207197 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  2 07:45:44 np0005466030 ceph-osd[78262]: osd.0 0 waiting for initial osdmap
Oct  2 07:45:44 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0[78258]: 2025-10-02T11:45:44.610+0000 7f151f27a640 -1 osd.0 0 waiting for initial osdmap
Oct  2 07:45:44 np0005466030 ceph-osd[78262]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct  2 07:45:44 np0005466030 ceph-osd[78262]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct  2 07:45:44 np0005466030 ceph-osd[78262]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct  2 07:45:44 np0005466030 ceph-osd[78262]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Oct  2 07:45:44 np0005466030 ceph-osd[78262]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  2 07:45:44 np0005466030 ceph-osd[78262]: osd.0 8 set_numa_affinity not setting numa affinity
Oct  2 07:45:44 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0[78258]: 2025-10-02T11:45:44.640+0000 7f151a8a2640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  2 07:45:44 np0005466030 ceph-osd[78262]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct  2 07:45:45 np0005466030 ceph-osd[78262]: osd.0 9 state: booting -> active
Oct  2 07:45:46 np0005466030 ceph-osd[78262]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  2 07:45:46 np0005466030 ceph-osd[78262]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct  2 07:45:46 np0005466030 ceph-osd[78262]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  2 07:45:46 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 10 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [0] r=0 lpr=10 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:45:47 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 11 pg[1.0( empty local-lis/les=10/11 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [0] r=0 lpr=10 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:09 np0005466030 podman[80692]: 2025-10-02 11:46:09.19090485 +0000 UTC m=+0.047034176 container create 6a66aa32a3bd5a2a9b268028511595f586f2d9fe61838034e29e1a0f0e444729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 07:46:09 np0005466030 systemd[1]: Started libpod-conmon-6a66aa32a3bd5a2a9b268028511595f586f2d9fe61838034e29e1a0f0e444729.scope.
Oct  2 07:46:09 np0005466030 podman[80692]: 2025-10-02 11:46:09.169487397 +0000 UTC m=+0.025616703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:09 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:46:09 np0005466030 podman[80692]: 2025-10-02 11:46:09.280879186 +0000 UTC m=+0.137008502 container init 6a66aa32a3bd5a2a9b268028511595f586f2d9fe61838034e29e1a0f0e444729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:46:09 np0005466030 podman[80692]: 2025-10-02 11:46:09.291312914 +0000 UTC m=+0.147442200 container start 6a66aa32a3bd5a2a9b268028511595f586f2d9fe61838034e29e1a0f0e444729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:46:09 np0005466030 podman[80692]: 2025-10-02 11:46:09.295131011 +0000 UTC m=+0.151260317 container attach 6a66aa32a3bd5a2a9b268028511595f586f2d9fe61838034e29e1a0f0e444729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  2 07:46:09 np0005466030 gracious_ramanujan[80708]: 167 167
Oct  2 07:46:09 np0005466030 systemd[1]: libpod-6a66aa32a3bd5a2a9b268028511595f586f2d9fe61838034e29e1a0f0e444729.scope: Deactivated successfully.
Oct  2 07:46:09 np0005466030 podman[80692]: 2025-10-02 11:46:09.297088941 +0000 UTC m=+0.153218227 container died 6a66aa32a3bd5a2a9b268028511595f586f2d9fe61838034e29e1a0f0e444729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 07:46:09 np0005466030 systemd[1]: var-lib-containers-storage-overlay-945293095da3f6cfb9d42e4fce570aec171744e7c8860fa892c0f8d28fdff091-merged.mount: Deactivated successfully.
Oct  2 07:46:09 np0005466030 podman[80692]: 2025-10-02 11:46:09.33572056 +0000 UTC m=+0.191849846 container remove 6a66aa32a3bd5a2a9b268028511595f586f2d9fe61838034e29e1a0f0e444729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:46:09 np0005466030 systemd[1]: libpod-conmon-6a66aa32a3bd5a2a9b268028511595f586f2d9fe61838034e29e1a0f0e444729.scope: Deactivated successfully.
Oct  2 07:46:09 np0005466030 podman[80726]: 2025-10-02 11:46:09.414828904 +0000 UTC m=+0.045998735 container create 8caa789ea916136b85ca87b4eb8dc179954617b508d336cd44df53876f333a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:46:09 np0005466030 systemd[1]: Started libpod-conmon-8caa789ea916136b85ca87b4eb8dc179954617b508d336cd44df53876f333a60.scope.
Oct  2 07:46:09 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:46:09 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbe4ecd5bffe35882eb9857d04ae68f2bffc64777bbd412d09a39069c15eb381/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:09 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbe4ecd5bffe35882eb9857d04ae68f2bffc64777bbd412d09a39069c15eb381/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:09 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbe4ecd5bffe35882eb9857d04ae68f2bffc64777bbd412d09a39069c15eb381/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:09 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbe4ecd5bffe35882eb9857d04ae68f2bffc64777bbd412d09a39069c15eb381/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:09 np0005466030 podman[80726]: 2025-10-02 11:46:09.390764 +0000 UTC m=+0.021933841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:09 np0005466030 podman[80726]: 2025-10-02 11:46:09.494439904 +0000 UTC m=+0.125609755 container init 8caa789ea916136b85ca87b4eb8dc179954617b508d336cd44df53876f333a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:46:09 np0005466030 podman[80726]: 2025-10-02 11:46:09.499388505 +0000 UTC m=+0.130558326 container start 8caa789ea916136b85ca87b4eb8dc179954617b508d336cd44df53876f333a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 07:46:09 np0005466030 podman[80726]: 2025-10-02 11:46:09.502055786 +0000 UTC m=+0.133225607 container attach 8caa789ea916136b85ca87b4eb8dc179954617b508d336cd44df53876f333a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kapitsa, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct  2 07:46:09 np0005466030 systemd[1]: libpod-8caa789ea916136b85ca87b4eb8dc179954617b508d336cd44df53876f333a60.scope: Deactivated successfully.
Oct  2 07:46:09 np0005466030 podman[80726]: 2025-10-02 11:46:09.565018708 +0000 UTC m=+0.196188519 container died 8caa789ea916136b85ca87b4eb8dc179954617b508d336cd44df53876f333a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kapitsa, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Oct  2 07:46:09 np0005466030 systemd[1]: var-lib-containers-storage-overlay-cbe4ecd5bffe35882eb9857d04ae68f2bffc64777bbd412d09a39069c15eb381-merged.mount: Deactivated successfully.
Oct  2 07:46:09 np0005466030 podman[80726]: 2025-10-02 11:46:09.597479059 +0000 UTC m=+0.228648880 container remove 8caa789ea916136b85ca87b4eb8dc179954617b508d336cd44df53876f333a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 07:46:09 np0005466030 systemd[1]: libpod-conmon-8caa789ea916136b85ca87b4eb8dc179954617b508d336cd44df53876f333a60.scope: Deactivated successfully.
Oct  2 07:46:09 np0005466030 systemd[1]: Reloading.
Oct  2 07:46:09 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:46:09 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:46:09 np0005466030 systemd[1]: Reloading.
Oct  2 07:46:09 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:46:09 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:46:10 np0005466030 systemd[1]: Starting Ceph mon.compute-1 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct  2 07:46:10 np0005466030 podman[80906]: 2025-10-02 11:46:10.341033043 +0000 UTC m=+0.034941347 container create 1e591a1b941303c1fcb965e5b095981d6989e8d3735ab0e0c04e12b0689c7f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-1, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:46:10 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583e02d67b8547f3673d3bb37f4cbcf8f84e7a85e5135212db4d8f8dd380adf2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:10 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583e02d67b8547f3673d3bb37f4cbcf8f84e7a85e5135212db4d8f8dd380adf2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:10 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583e02d67b8547f3673d3bb37f4cbcf8f84e7a85e5135212db4d8f8dd380adf2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:10 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583e02d67b8547f3673d3bb37f4cbcf8f84e7a85e5135212db4d8f8dd380adf2/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:10 np0005466030 podman[80906]: 2025-10-02 11:46:10.395271708 +0000 UTC m=+0.089180012 container init 1e591a1b941303c1fcb965e5b095981d6989e8d3735ab0e0c04e12b0689c7f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  2 07:46:10 np0005466030 podman[80906]: 2025-10-02 11:46:10.403561571 +0000 UTC m=+0.097469875 container start 1e591a1b941303c1fcb965e5b095981d6989e8d3735ab0e0c04e12b0689c7f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-1, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct  2 07:46:10 np0005466030 bash[80906]: 1e591a1b941303c1fcb965e5b095981d6989e8d3735ab0e0c04e12b0689c7f10
Oct  2 07:46:10 np0005466030 podman[80906]: 2025-10-02 11:46:10.325791448 +0000 UTC m=+0.019699772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:10 np0005466030 systemd[1]: Started Ceph mon.compute-1 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: pidfile_write: ignore empty --pid-file
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: load: jerasure load: lrc 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: RocksDB version: 7.9.2
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Git sha 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: DB SUMMARY
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: DB Session ID:  FDMBSZ550JKCBY0GVN5D
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: CURRENT file:  CURRENT
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: IDENTITY file:  IDENTITY
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                         Options.error_if_exists: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                       Options.create_if_missing: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                         Options.paranoid_checks: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                                     Options.env: 0x559103106c40
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                                Options.info_log: 0x5591049fefc0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                Options.max_file_opening_threads: 16
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                              Options.statistics: (nil)
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                               Options.use_fsync: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                       Options.max_log_file_size: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                         Options.allow_fallocate: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                        Options.use_direct_reads: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:          Options.create_missing_column_families: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                              Options.db_log_dir: 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                                 Options.wal_dir: 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                   Options.advise_random_on_open: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                    Options.write_buffer_manager: 0x559104a0eb40
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                            Options.rate_limiter: (nil)
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                  Options.unordered_write: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                               Options.row_cache: None
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                              Options.wal_filter: None
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.allow_ingest_behind: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.two_write_queues: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.manual_wal_flush: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.wal_compression: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.atomic_flush: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                 Options.log_readahead_size: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.allow_data_in_errors: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.db_host_id: __hostname__
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.max_background_jobs: 2
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.max_background_compactions: -1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.max_subcompactions: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.max_total_wal_size: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                          Options.max_open_files: -1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                          Options.bytes_per_sync: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:       Options.compaction_readahead_size: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                  Options.max_background_flushes: -1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Compression algorithms supported:
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: #011kZSTD supported: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: #011kXpressCompression supported: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: #011kBZip2Compression supported: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: #011kLZ4Compression supported: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: #011kZlibCompression supported: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: #011kSnappyCompression supported: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:           Options.merge_operator: 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:        Options.compaction_filter: None
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:        Options.compaction_filter_factory: None
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:  Options.sst_partitioner_factory: None
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5591049fec00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5591049f71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:        Options.write_buffer_size: 33554432
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:  Options.max_write_buffer_number: 2
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:          Options.compression: NoCompression
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:       Options.prefix_extractor: nullptr
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.num_levels: 7
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                  Options.compression_opts.level: 32767
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:               Options.compression_opts.strategy: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                  Options.compression_opts.enabled: false
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                        Options.arena_block_size: 1048576
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                Options.disable_auto_compactions: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                   Options.inplace_update_support: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                           Options.bloom_locality: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                    Options.max_successive_merges: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                Options.paranoid_file_checks: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                Options.force_consistency_checks: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                Options.report_bg_io_stats: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                               Options.ttl: 2592000
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                       Options.enable_blob_files: false
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                           Options.min_blob_size: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                          Options.blob_file_size: 268435456
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb:                Options.blob_file_starting_level: 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f39ba2d7-ed25-4935-a0d2-2c1c33353d32
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405570441523, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405570443644, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405570443726, "job": 1, "event": "recovery_finished"}
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559104a20e00
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: DB pointer 0x559104b28000
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(???) e0 preinit fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).mds e1 new map
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 1 up, 2 in
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 2 up, 2 in
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 2 up, 2 in
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Updating compute-1:/etc/ceph/ceph.conf
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Updating compute-1:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Updating compute-1:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Deploying daemon crash.compute-1 on compute-1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.101:0/2994638250' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6e4de194-9f54-490b-9be5-cb1e4c11649b"}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.101:0/2994638250' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6e4de194-9f54-490b-9be5-cb1e4c11649b"}]': finished
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/1136146219' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c"}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/1136146219' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c"}]': finished
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Deploying daemon osd.1 on compute-0
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Deploying daemon osd.0 on compute-1
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: OSD bench result of 9008.887835 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012] boot
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Adjusting osd_memory_target on compute-1 to  5247M
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: OSD bench result of 7039.207197 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Adjusting osd_memory_target on compute-0 to 127.8M
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Unable to set osd_memory_target on compute-0 to 134065766: error parsing value: Value '134065766' is below minimum 939524096
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485] boot
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Updating compute-2:/etc/ceph/ceph.conf
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Updating compute-2:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Updating compute-2:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Deploying daemon mon.compute-2 on compute-2
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: Cluster is now healthy
Oct  2 07:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Oct  2 07:46:16 np0005466030 ceph-mon[80926]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Oct  2 07:46:16 np0005466030 ceph-mon[80926]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Oct  2 07:46:16 np0005466030 ceph-mon[80926]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Oct  2 07:46:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 07:46:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 07:46:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct  2 07:46:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct  2 07:46:19 np0005466030 ceph-mon[80926]: Deploying daemon mon.compute-1 on compute-1
Oct  2 07:46:19 np0005466030 ceph-mon[80926]: mon.compute-0 calling monitor election
Oct  2 07:46:19 np0005466030 ceph-mon[80926]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct  2 07:46:19 np0005466030 ceph-mon[80926]: overall HEALTH_OK
Oct  2 07:46:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  2 07:46:19 np0005466030 ceph-mon[80926]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,created_at=2025-10-02T11:46:09.534898Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864104,os=Linux}
Oct  2 07:46:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kvxdhw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  2 07:46:19 np0005466030 ceph-mon[80926]: mon.compute-0 calling monitor election
Oct  2 07:46:19 np0005466030 ceph-mon[80926]: mon.compute-2 calling monitor election
Oct  2 07:46:19 np0005466030 ceph-mon[80926]: mon.compute-1 calling monitor election
Oct  2 07:46:19 np0005466030 ceph-mon[80926]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct  2 07:46:19 np0005466030 ceph-mon[80926]: overall HEALTH_OK
Oct  2 07:46:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kvxdhw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  2 07:46:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e12 _set_new_cache_sizes cache_size:1019935369 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:20 np0005466030 ceph-mon[80926]: Deploying daemon mgr.compute-2.kvxdhw on compute-2
Oct  2 07:46:20 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:20 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/2292528460' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:46:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e13 e13: 2 total, 2 up, 2 in
Oct  2 07:46:20 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 13 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e14 e14: 2 total, 2 up, 2 in
Oct  2 07:46:21 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 14 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:21 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/2292528460' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 07:46:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.wtokkj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  2 07:46:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.wtokkj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  2 07:46:22 np0005466030 podman[81105]: 2025-10-02 11:46:22.038262663 +0000 UTC m=+0.039239849 container create ee6b1be3baa8d158f5de816fc02f9775ed1dcd4e783a129b95e4e636ff6a55f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:46:22 np0005466030 systemd[1]: Started libpod-conmon-ee6b1be3baa8d158f5de816fc02f9775ed1dcd4e783a129b95e4e636ff6a55f6.scope.
Oct  2 07:46:22 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:46:22 np0005466030 podman[81105]: 2025-10-02 11:46:22.109333042 +0000 UTC m=+0.110310248 container init ee6b1be3baa8d158f5de816fc02f9775ed1dcd4e783a129b95e4e636ff6a55f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:46:22 np0005466030 podman[81105]: 2025-10-02 11:46:22.01947673 +0000 UTC m=+0.020453946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:22 np0005466030 podman[81105]: 2025-10-02 11:46:22.115777609 +0000 UTC m=+0.116754795 container start ee6b1be3baa8d158f5de816fc02f9775ed1dcd4e783a129b95e4e636ff6a55f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct  2 07:46:22 np0005466030 podman[81105]: 2025-10-02 11:46:22.119478502 +0000 UTC m=+0.120455718 container attach ee6b1be3baa8d158f5de816fc02f9775ed1dcd4e783a129b95e4e636ff6a55f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:46:22 np0005466030 blissful_blackwell[81121]: 167 167
Oct  2 07:46:22 np0005466030 systemd[1]: libpod-ee6b1be3baa8d158f5de816fc02f9775ed1dcd4e783a129b95e4e636ff6a55f6.scope: Deactivated successfully.
Oct  2 07:46:22 np0005466030 podman[81105]: 2025-10-02 11:46:22.121701989 +0000 UTC m=+0.122679205 container died ee6b1be3baa8d158f5de816fc02f9775ed1dcd4e783a129b95e4e636ff6a55f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:46:22 np0005466030 systemd[1]: var-lib-containers-storage-overlay-99f43a9ed3c76f8cdef597f312971f5b286a21ad11d076905189caa9b47bb68d-merged.mount: Deactivated successfully.
Oct  2 07:46:22 np0005466030 podman[81105]: 2025-10-02 11:46:22.152722216 +0000 UTC m=+0.153699402 container remove ee6b1be3baa8d158f5de816fc02f9775ed1dcd4e783a129b95e4e636ff6a55f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackwell, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 07:46:22 np0005466030 systemd[1]: libpod-conmon-ee6b1be3baa8d158f5de816fc02f9775ed1dcd4e783a129b95e4e636ff6a55f6.scope: Deactivated successfully.
Oct  2 07:46:22 np0005466030 systemd[1]: Reloading.
Oct  2 07:46:22 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:46:22 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:46:22 np0005466030 systemd[1]: Reloading.
Oct  2 07:46:22 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:46:22 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:46:22 np0005466030 systemd[1]: Starting Ceph mgr.compute-1.wtokkj for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct  2 07:46:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e15 e15: 2 total, 2 up, 2 in
Oct  2 07:46:22 np0005466030 ceph-mon[80926]: Deploying daemon mgr.compute-1.wtokkj on compute-1
Oct  2 07:46:22 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/315550621' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:46:22 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/315550621' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 07:46:22 np0005466030 podman[81263]: 2025-10-02 11:46:22.919375995 +0000 UTC m=+0.038009892 container create 7339018a450e0ae84fceb4dca5f91ceea73ff12c304dd445d3a5af88ca9800c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:46:22 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7bb107e412adf9a357075c04fc0687a240cfdcb951d04cf0b3c1303996b13d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:22 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7bb107e412adf9a357075c04fc0687a240cfdcb951d04cf0b3c1303996b13d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:22 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7bb107e412adf9a357075c04fc0687a240cfdcb951d04cf0b3c1303996b13d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:22 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7bb107e412adf9a357075c04fc0687a240cfdcb951d04cf0b3c1303996b13d/merged/var/lib/ceph/mgr/ceph-compute-1.wtokkj supports timestamps until 2038 (0x7fffffff)
Oct  2 07:46:22 np0005466030 podman[81263]: 2025-10-02 11:46:22.967221235 +0000 UTC m=+0.085855132 container init 7339018a450e0ae84fceb4dca5f91ceea73ff12c304dd445d3a5af88ca9800c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct  2 07:46:22 np0005466030 podman[81263]: 2025-10-02 11:46:22.974301231 +0000 UTC m=+0.092935128 container start 7339018a450e0ae84fceb4dca5f91ceea73ff12c304dd445d3a5af88ca9800c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:46:22 np0005466030 bash[81263]: 7339018a450e0ae84fceb4dca5f91ceea73ff12c304dd445d3a5af88ca9800c1
Oct  2 07:46:22 np0005466030 podman[81263]: 2025-10-02 11:46:22.901976724 +0000 UTC m=+0.020610641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:46:22 np0005466030 systemd[1]: Started Ceph mgr.compute-1.wtokkj for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct  2 07:46:23 np0005466030 ceph-mgr[81282]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 07:46:23 np0005466030 ceph-mgr[81282]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct  2 07:46:23 np0005466030 ceph-mgr[81282]: pidfile_write: ignore empty --pid-file
Oct  2 07:46:23 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'alerts'
Oct  2 07:46:23 np0005466030 ceph-mgr[81282]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  2 07:46:23 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'balancer'
Oct  2 07:46:23 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:23.411+0000 7f8f3746f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  2 07:46:23 np0005466030 ceph-mgr[81282]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  2 07:46:23 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'cephadm'
Oct  2 07:46:23 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:23.672+0000 7f8f3746f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  2 07:46:23 np0005466030 ceph-mon[80926]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 07:46:23 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:23 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:23 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:23 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:23 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  2 07:46:23 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  2 07:46:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e16 e16: 2 total, 2 up, 2 in
Oct  2 07:46:24 np0005466030 ceph-mon[80926]: Deploying daemon crash.compute-2 on compute-2
Oct  2 07:46:24 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/1583078942' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:46:24 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e17 e17: 2 total, 2 up, 2 in
Oct  2 07:46:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e17 _set_new_cache_sizes cache_size:1020053289 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:25 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'crash'
Oct  2 07:46:25 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/1583078942' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 07:46:25 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:25 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:25 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:25 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:25 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:46:25 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:46:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e18 e18: 2 total, 2 up, 2 in
Oct  2 07:46:26 np0005466030 ceph-mgr[81282]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  2 07:46:26 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'dashboard'
Oct  2 07:46:26 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:26.021+0000 7f8f3746f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  2 07:46:27 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/2601359451' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:46:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e19 e19: 2 total, 2 up, 2 in
Oct  2 07:46:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e20 e20: 3 total, 2 up, 3 in
Oct  2 07:46:27 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'devicehealth'
Oct  2 07:46:27 np0005466030 ceph-mgr[81282]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  2 07:46:27 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'diskprediction_local'
Oct  2 07:46:27 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:27.848+0000 7f8f3746f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  2 07:46:28 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/2601359451' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 07:46:28 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.102:0/3177347678' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7e9b39ac-5928-4949-8bce-29a1be4f628f"}]: dispatch
Oct  2 07:46:28 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7e9b39ac-5928-4949-8bce-29a1be4f628f"}]: dispatch
Oct  2 07:46:28 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7e9b39ac-5928-4949-8bce-29a1be4f628f"}]': finished
Oct  2 07:46:28 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  2 07:46:28 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  2 07:46:28 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]:  from numpy import show_config as show_numpy_config
Oct  2 07:46:28 np0005466030 ceph-mgr[81282]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  2 07:46:28 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'influx'
Oct  2 07:46:28 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:28.407+0000 7f8f3746f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  2 07:46:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e21 e21: 3 total, 2 up, 3 in
Oct  2 07:46:28 np0005466030 ceph-mgr[81282]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  2 07:46:28 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'insights'
Oct  2 07:46:28 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:28.666+0000 7f8f3746f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  2 07:46:28 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'iostat'
Oct  2 07:46:29 np0005466030 ceph-mgr[81282]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  2 07:46:29 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'k8sevents'
Oct  2 07:46:29 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:29.175+0000 7f8f3746f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  2 07:46:29 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/454705554' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:46:29 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:46:29 np0005466030 ceph-mon[80926]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 07:46:29 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/454705554' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 07:46:29 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:46:29 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:46:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e22 e22: 3 total, 2 up, 3 in
Oct  2 07:46:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e22 _set_new_cache_sizes cache_size:1020054714 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:31 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'localpool'
Oct  2 07:46:31 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/1762713421' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  2 07:46:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:46:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:46:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:46:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:31 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'mds_autoscaler'
Oct  2 07:46:32 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'mirroring'
Oct  2 07:46:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e23 e23: 3 total, 2 up, 3 in
Oct  2 07:46:32 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 23 pg[7.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:32 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 23 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=23 pruub=13.599875450s) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active pruub 66.031814575s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:32 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 23 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=23 pruub=13.599875450s) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown pruub 66.031814575s@ mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:32 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'nfs'
Oct  2 07:46:32 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:46:32 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:46:33 np0005466030 ceph-mgr[81282]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  2 07:46:33 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'orchestrator'
Oct  2 07:46:33 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:33.106+0000 7f8f3746f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  2 07:46:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e24 e24: 3 total, 2 up, 3 in
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1c( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1e( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1b( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1f( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1d( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.a( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.9( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.8( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.6( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.7( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.4( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.2( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.5( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.3( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.b( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.c( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.d( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.f( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.e( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.10( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.12( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.11( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.14( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.15( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.16( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.13( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.18( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.19( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1a( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.17( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.7( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.2( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.8( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.0( empty local-lis/les=23/24 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1e( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.e( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.3( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.11( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.14( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1a( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.17( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.16( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:33 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/1762713421' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  2 07:46:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:46:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:46:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:46:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:46:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:46:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:46:33 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/1920783801' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  2 07:46:33 np0005466030 ceph-mgr[81282]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  2 07:46:33 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'osd_perf_query'
Oct  2 07:46:33 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:33.820+0000 7f8f3746f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  2 07:46:33 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct  2 07:46:34 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct  2 07:46:34 np0005466030 ceph-mgr[81282]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  2 07:46:34 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'osd_support'
Oct  2 07:46:34 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:34.113+0000 7f8f3746f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  2 07:46:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e25 e25: 3 total, 2 up, 3 in
Oct  2 07:46:34 np0005466030 ceph-mgr[81282]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  2 07:46:34 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'pg_autoscaler'
Oct  2 07:46:34 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:34.376+0000 7f8f3746f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  2 07:46:34 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:46:34 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:46:34 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/1920783801' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  2 07:46:34 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:46:34 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:46:34 np0005466030 ceph-mon[80926]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 07:46:34 np0005466030 ceph-mgr[81282]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  2 07:46:34 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'progress'
Oct  2 07:46:34 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:34.678+0000 7f8f3746f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  2 07:46:34 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Oct  2 07:46:34 np0005466030 ceph-mgr[81282]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  2 07:46:34 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'prometheus'
Oct  2 07:46:34 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:34.955+0000 7f8f3746f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  2 07:46:34 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Oct  2 07:46:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:35 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/2059673187' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  2 07:46:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e26 e26: 3 total, 2 up, 3 in
Oct  2 07:46:36 np0005466030 ceph-mgr[81282]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  2 07:46:36 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'rbd_support'
Oct  2 07:46:36 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:36.031+0000 7f8f3746f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  2 07:46:36 np0005466030 ceph-mgr[81282]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  2 07:46:36 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'restful'
Oct  2 07:46:36 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:36.404+0000 7f8f3746f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  2 07:46:36 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/2059673187' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  2 07:46:36 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Oct  2 07:46:37 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Oct  2 07:46:37 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'rgw'
Oct  2 07:46:37 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/3992653650' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  2 07:46:37 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  2 07:46:37 np0005466030 ceph-mon[80926]: Deploying daemon osd.2 on compute-2
Oct  2 07:46:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e27 e27: 3 total, 2 up, 3 in
Oct  2 07:46:37 np0005466030 ceph-mgr[81282]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  2 07:46:37 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'rook'
Oct  2 07:46:37 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:37.946+0000 7f8f3746f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  2 07:46:38 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/3992653650' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  2 07:46:38 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct  2 07:46:38 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct  2 07:46:39 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/779281035' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  2 07:46:39 np0005466030 ceph-mon[80926]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 07:46:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e28 e28: 3 total, 2 up, 3 in
Oct  2 07:46:40 np0005466030 ceph-mgr[81282]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  2 07:46:40 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'selftest'
Oct  2 07:46:40 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:40.242+0000 7f8f3746f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  2 07:46:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:40 np0005466030 ceph-mgr[81282]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  2 07:46:40 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'snap_schedule'
Oct  2 07:46:40 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:40.518+0000 7f8f3746f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  2 07:46:40 np0005466030 ceph-mgr[81282]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  2 07:46:40 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'stats'
Oct  2 07:46:40 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:40.801+0000 7f8f3746f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  2 07:46:40 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/779281035' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  2 07:46:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e29 e29: 3 total, 2 up, 3 in
Oct  2 07:46:41 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'status'
Oct  2 07:46:41 np0005466030 ceph-mgr[81282]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  2 07:46:41 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'telegraf'
Oct  2 07:46:41 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:41.373+0000 7f8f3746f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  2 07:46:41 np0005466030 ceph-mgr[81282]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  2 07:46:41 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'telemetry'
Oct  2 07:46:41 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:41.630+0000 7f8f3746f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  2 07:46:41 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/1918843349' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  2 07:46:41 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/1918843349' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  2 07:46:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:41 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct  2 07:46:41 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct  2 07:46:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e30 e30: 3 total, 2 up, 3 in
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.18( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.18( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.1b( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.1a( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.1c( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.1b( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.1a( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.1d( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.1c( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.1a( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.c( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.e( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.f( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.9( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.e( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.1( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.1( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.5( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.3( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.2( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.7( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.5( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.4( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.d( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.a( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.d( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.a( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.c( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.8( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.9( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.f( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.9( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.e( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.11( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.16( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.10( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.15( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.13( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.15( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.15( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.14( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.13( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.11( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.10( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.16( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.1f( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.1f( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.092329025s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.469169617s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.092301369s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.469169617s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.092037201s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.469055176s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.092014313s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.469055176s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091887474s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.469039917s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091865540s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.469039917s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091481209s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468780518s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091461182s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468780518s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.e( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091250420s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468742371s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.e( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091231346s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468742371s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091112137s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468704224s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091097832s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468704224s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091105461s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468803406s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091085434s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468803406s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.1( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.090168953s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468727112s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089975357s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468574524s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.1( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.090146065s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468727112s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089949608s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468574524s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089907646s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468544006s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089863777s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468544006s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089564323s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468338013s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089577675s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468360901s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089547157s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468338013s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089554787s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468360901s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089651108s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468574524s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089611053s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468574524s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.1e( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089719772s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468719482s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.1e( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089684486s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468719482s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089271545s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468376160s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:42 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089249611s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468376160s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:42 np0005466030 ceph-mgr[81282]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  2 07:46:42 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'test_orchestrator'
Oct  2 07:46:42 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:42.314+0000 7f8f3746f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  2 07:46:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:46:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:46:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:46:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:46:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:46:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:46:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:46:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:46:42 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/1049687618' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  2 07:46:42 np0005466030 ceph-mon[80926]: from='osd.2 [v2:192.168.122.102:6800/804192295,v1:192.168.122.102:6801/804192295]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  2 07:46:42 np0005466030 ceph-mon[80926]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  2 07:46:43 np0005466030 ceph-mgr[81282]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  2 07:46:43 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'volumes'
Oct  2 07:46:43 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:43.040+0000 7f8f3746f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  2 07:46:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e31 e31: 3 total, 2 up, 3 in
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.18( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.1c( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.18( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.1a( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.1b( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.1c( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.1a( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.c( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.1a( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.e( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.f( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.9( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.e( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.1( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.1( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.5( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.3( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.7( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.2( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.5( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.4( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.d( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.d( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.a( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.a( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.c( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.8( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.9( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.f( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.e( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.11( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.16( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.10( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.9( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.13( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.15( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.15( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.14( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.13( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.11( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.16( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.10( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.1f( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.1f( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.15( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.1b( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.1d( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:46:43 np0005466030 ceph-mgr[81282]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  2 07:46:43 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:43.786+0000 7f8f3746f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  2 07:46:43 np0005466030 ceph-mgr[81282]: mgr[py] Loading python module 'zabbix'
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct  2 07:46:43 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct  2 07:46:44 np0005466030 ceph-mgr[81282]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  2 07:46:44 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:44.068+0000 7f8f3746f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  2 07:46:44 np0005466030 ceph-mgr[81282]: ms_deliver_dispatch: unhandled message 0x55bbe42e3600 mon_map magic: 0 v1 from mon.2 v2:192.168.122.101:3300/0
Oct  2 07:46:44 np0005466030 ceph-mgr[81282]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct  2 07:46:44 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/1049687618' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  2 07:46:44 np0005466030 ceph-mon[80926]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  2 07:46:44 np0005466030 ceph-mon[80926]: from='osd.2 [v2:192.168.122.102:6800/804192295,v1:192.168.122.102:6801/804192295]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct  2 07:46:44 np0005466030 ceph-mon[80926]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct  2 07:46:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e32 e32: 3 total, 2 up, 3 in
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.894192696s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 77.469039917s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.894192696s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.469039917s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.15( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=15.000248909s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.575187683s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.1f( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=15.000205994s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.575164795s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.15( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=15.000248909s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575187683s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.1f( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=15.000205994s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575164795s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.15( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=15.000073433s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.575111389s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.15( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=15.000073433s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575111389s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.893628120s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 77.468780518s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.893628120s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468780518s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.11( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999844551s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.575042725s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.893508911s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 77.468711853s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.11( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999844551s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575042725s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.e( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999811172s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.575042725s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.e( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999811172s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575042725s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.893508911s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468711853s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.9( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999816895s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.575080872s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.9( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999816895s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575080872s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.8( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999673843s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.574996948s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.8( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999673843s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574996948s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[5.4( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999487877s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.574905396s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[5.4( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999487877s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574905396s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.893162727s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 77.468612671s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.893162727s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468612671s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.1( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999329567s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.574844360s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.1( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999329567s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574844360s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.9( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999258995s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.574821472s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[5.e( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999205589s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.574790955s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.1a( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999183655s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.574775696s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[5.e( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999205589s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574790955s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.893005371s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 77.468620300s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.9( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999258995s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574821472s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.893005371s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468620300s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.1d( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999115944s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.574775696s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.1d( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999115944s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574775696s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.887790680s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 77.463493347s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.887790680s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.463493347s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[5.1a( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999016762s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.574752808s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[5.1a( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999016762s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574752808s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.892520905s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 77.468353271s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.892520905s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468353271s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.1a( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999183655s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574775696s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:44 np0005466030 podman[81540]: 2025-10-02 11:46:44.387630802 +0000 UTC m=+0.053311662 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:46:44 np0005466030 podman[81540]: 2025-10-02 11:46:44.482777623 +0000 UTC m=+0.148458463 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:46:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:45 np0005466030 ceph-mon[80926]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Oct  2 07:46:45 np0005466030 ceph-mon[80926]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  2 07:46:45 np0005466030 ceph-mon[80926]: Cluster is now healthy
Oct  2 07:46:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:47 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:48 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/3455455273' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  2 07:46:48 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/3455455273' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  2 07:46:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e33 e33: 3 total, 3 up, 3 in
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.1f( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.664138794s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575164795s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.1f( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.664088249s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575164795s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.15( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.664095879s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575187683s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.15( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.664053917s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575187683s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.15( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663953781s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575111389s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.15( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663905144s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575111389s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.557517052s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468780518s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.557495117s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468780518s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.11( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663733482s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575042725s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.11( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663718224s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575042725s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.557359695s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468711853s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.e( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663661003s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575042725s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.557341576s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468711853s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.e( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663645744s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575042725s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663664818s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575080872s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663651466s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575080872s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.557121277s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468620300s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.557109833s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468620300s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[5.4( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663317680s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574905396s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.556999207s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468612671s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.556984901s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468612671s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[5.4( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663281441s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574905396s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663154602s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574844360s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663138390s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574844360s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.9( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663082123s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574821472s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.9( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663067818s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574821472s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[5.e( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662989616s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574790955s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.1a( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662947655s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574775696s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[5.e( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662973404s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574790955s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.1d( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662931442s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574775696s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.1d( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662914276s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574775696s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.1a( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662919044s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574775696s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.551541328s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.463493347s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.551506996s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.463493347s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[5.1a( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662742615s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574752808s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[5.1a( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662725449s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574752808s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.556281090s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468353271s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.556267738s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468353271s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.556897163s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.469039917s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.556874275s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.469039917s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662742615s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574996948s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662691116s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574996948s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Oct  2 07:46:48 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Oct  2 07:46:49 np0005466030 ceph-mon[80926]: OSD bench result of 8243.165808 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  2 07:46:49 np0005466030 ceph-mon[80926]: osd.2 [v2:192.168.122.102:6800/804192295,v1:192.168.122.102:6801/804192295] boot
Oct  2 07:46:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct  2 07:46:49 np0005466030 ceph-mon[80926]: Adjusting osd_memory_target on compute-2 to 127.8M
Oct  2 07:46:49 np0005466030 ceph-mon[80926]: Unable to set osd_memory_target on compute-2 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct  2 07:46:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:46:49 np0005466030 ceph-mon[80926]: Updating compute-0:/etc/ceph/ceph.conf
Oct  2 07:46:49 np0005466030 ceph-mon[80926]: Updating compute-1:/etc/ceph/ceph.conf
Oct  2 07:46:49 np0005466030 ceph-mon[80926]: Updating compute-2:/etc/ceph/ceph.conf
Oct  2 07:46:49 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/1687359328' entity='client.admin' 
Oct  2 07:46:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e34 e34: 3 total, 3 up, 3 in
Oct  2 07:46:49 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Oct  2 07:46:49 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Oct  2 07:46:50 np0005466030 ceph-mon[80926]: Updating compute-1:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct  2 07:46:50 np0005466030 ceph-mon[80926]: Updating compute-0:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct  2 07:46:50 np0005466030 ceph-mon[80926]: Updating compute-2:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct  2 07:46:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:51 np0005466030 ceph-mon[80926]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  2 07:46:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:51 np0005466030 ceph-mon[80926]: Saving service ingress.rgw.default spec with placement count:2
Oct  2 07:46:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:52 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:52 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:46:53 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.14 deep-scrub starts
Oct  2 07:46:53 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.14 deep-scrub ok
Oct  2 07:46:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).mds e2 new map
Oct  2 07:46:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:46:53.022688+0000#012modified#0112025-10-02T11:46:53.022725+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Oct  2 07:46:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e35 e35: 3 total, 3 up, 3 in
Oct  2 07:46:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  2 07:46:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  2 07:46:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  2 07:46:53 np0005466030 ceph-mon[80926]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  2 07:46:53 np0005466030 ceph-mon[80926]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  2 07:46:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct  2 07:46:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:54 np0005466030 ceph-mon[80926]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  2 07:46:54 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:46:55 np0005466030 ceph-mon[80926]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  2 07:46:55 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct  2 07:46:55 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct  2 07:46:57 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/3621566955' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  2 07:46:57 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/3621566955' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  2 07:46:57 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:57 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:57 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tsbazp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  2 07:46:57 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tsbazp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  2 07:46:57 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:46:57 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Oct  2 07:46:57 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Oct  2 07:46:58 np0005466030 ceph-mon[80926]: Deploying daemon rgw.rgw.compute-2.tsbazp on compute-2
Oct  2 07:47:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:01 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Oct  2 07:47:01 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Oct  2 07:47:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e36 e36: 3 total, 3 up, 3 in
Oct  2 07:47:02 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/2424414405' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  2 07:47:02 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:02 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:03 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:03 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vuotmz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  2 07:47:03 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.102:0/4114646185' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  2 07:47:03 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vuotmz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  2 07:47:03 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  2 07:47:03 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:03 np0005466030 podman[82744]: 2025-10-02 11:47:03.654183004 +0000 UTC m=+0.049386418 container create 5477e514696c8d5e6c9cb7c965c0a757712ab6687bce7a344664714d1f194e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 07:47:03 np0005466030 systemd[71803]: Starting Mark boot as successful...
Oct  2 07:47:03 np0005466030 systemd[71803]: Finished Mark boot as successful.
Oct  2 07:47:03 np0005466030 systemd[1]: Started libpod-conmon-5477e514696c8d5e6c9cb7c965c0a757712ab6687bce7a344664714d1f194e9e.scope.
Oct  2 07:47:03 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:47:03 np0005466030 podman[82744]: 2025-10-02 11:47:03.631393491 +0000 UTC m=+0.026596925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:47:03 np0005466030 podman[82744]: 2025-10-02 11:47:03.755061786 +0000 UTC m=+0.150265210 container init 5477e514696c8d5e6c9cb7c965c0a757712ab6687bce7a344664714d1f194e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  2 07:47:03 np0005466030 podman[82744]: 2025-10-02 11:47:03.763706016 +0000 UTC m=+0.158909430 container start 5477e514696c8d5e6c9cb7c965c0a757712ab6687bce7a344664714d1f194e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  2 07:47:03 np0005466030 competent_chatterjee[82761]: 167 167
Oct  2 07:47:03 np0005466030 systemd[1]: libpod-5477e514696c8d5e6c9cb7c965c0a757712ab6687bce7a344664714d1f194e9e.scope: Deactivated successfully.
Oct  2 07:47:03 np0005466030 podman[82744]: 2025-10-02 11:47:03.788560445 +0000 UTC m=+0.183763859 container attach 5477e514696c8d5e6c9cb7c965c0a757712ab6687bce7a344664714d1f194e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  2 07:47:03 np0005466030 podman[82744]: 2025-10-02 11:47:03.789853185 +0000 UTC m=+0.185056599 container died 5477e514696c8d5e6c9cb7c965c0a757712ab6687bce7a344664714d1f194e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:47:03 np0005466030 systemd[1]: var-lib-containers-storage-overlay-99dab7f558861f108aa9b345ac9684482f011c3dcbaf049e2dbed9aca0231638-merged.mount: Deactivated successfully.
Oct  2 07:47:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e37 e37: 3 total, 3 up, 3 in
Oct  2 07:47:03 np0005466030 podman[82744]: 2025-10-02 11:47:03.87939214 +0000 UTC m=+0.274595554 container remove 5477e514696c8d5e6c9cb7c965c0a757712ab6687bce7a344664714d1f194e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 07:47:03 np0005466030 systemd[1]: libpod-conmon-5477e514696c8d5e6c9cb7c965c0a757712ab6687bce7a344664714d1f194e9e.scope: Deactivated successfully.
Oct  2 07:47:03 np0005466030 systemd[1]: Reloading.
Oct  2 07:47:04 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:04 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:04 np0005466030 systemd[1]: Reloading.
Oct  2 07:47:04 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:04 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:04 np0005466030 systemd[1]: Starting Ceph rgw.rgw.compute-1.vuotmz for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct  2 07:47:04 np0005466030 podman[82903]: 2025-10-02 11:47:04.695474287 +0000 UTC m=+0.051509545 container create c6de875cb4b89403624319953a0a449dcdc8f537a620fc12a16b1394cfc6bf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-rgw-rgw-compute-1-vuotmz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  2 07:47:04 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936e291a49c060c70c1c5acf16dc5e215f60dea84411e5acace0a0826ac4aaf0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:47:04 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936e291a49c060c70c1c5acf16dc5e215f60dea84411e5acace0a0826ac4aaf0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:47:04 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936e291a49c060c70c1c5acf16dc5e215f60dea84411e5acace0a0826ac4aaf0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:47:04 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936e291a49c060c70c1c5acf16dc5e215f60dea84411e5acace0a0826ac4aaf0/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.vuotmz supports timestamps until 2038 (0x7fffffff)
Oct  2 07:47:04 np0005466030 podman[82903]: 2025-10-02 11:47:04.664365232 +0000 UTC m=+0.020400510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:47:04 np0005466030 podman[82903]: 2025-10-02 11:47:04.761711892 +0000 UTC m=+0.117747160 container init c6de875cb4b89403624319953a0a449dcdc8f537a620fc12a16b1394cfc6bf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-rgw-rgw-compute-1-vuotmz, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 07:47:04 np0005466030 podman[82903]: 2025-10-02 11:47:04.7661194 +0000 UTC m=+0.122154658 container start c6de875cb4b89403624319953a0a449dcdc8f537a620fc12a16b1394cfc6bf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-rgw-rgw-compute-1-vuotmz, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:47:04 np0005466030 bash[82903]: c6de875cb4b89403624319953a0a449dcdc8f537a620fc12a16b1394cfc6bf32
Oct  2 07:47:04 np0005466030 systemd[1]: Started Ceph rgw.rgw.compute-1.vuotmz for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct  2 07:47:04 np0005466030 radosgw[82922]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct  2 07:47:04 np0005466030 radosgw[82922]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Oct  2 07:47:04 np0005466030 radosgw[82922]: framework: beast
Oct  2 07:47:04 np0005466030 radosgw[82922]: framework conf key: endpoint, val: 192.168.122.101:8082
Oct  2 07:47:04 np0005466030 radosgw[82922]: init_numa not setting numa affinity
Oct  2 07:47:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e38 e38: 3 total, 3 up, 3 in
Oct  2 07:47:04 np0005466030 ceph-mon[80926]: Deploying daemon rgw.rgw.compute-1.vuotmz on compute-1
Oct  2 07:47:04 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  2 07:47:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct  2 07:47:04 np0005466030 ceph-mon[80926]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1098657432' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  2 07:47:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e39 e39: 3 total, 3 up, 3 in
Oct  2 07:47:05 np0005466030 ceph-mon[80926]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  2 07:47:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:05 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.102:0/4114646185' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  2 07:47:05 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  2 07:47:05 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.101:0/1098657432' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  2 07:47:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:05 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  2 07:47:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.hlkvzi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  2 07:47:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.hlkvzi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  2 07:47:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:06 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Oct  2 07:47:06 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Oct  2 07:47:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e40 e40: 3 total, 3 up, 3 in
Oct  2 07:47:07 np0005466030 ceph-mon[80926]: Deploying daemon rgw.rgw.compute-0.hlkvzi on compute-0
Oct  2 07:47:07 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  2 07:47:07 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  2 07:47:07 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:07 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:07 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:07 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:07 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:07 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 40 pg[10.0( empty local-lis/les=0/0 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [0] r=0 lpr=40 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct  2 07:47:07 np0005466030 ceph-mon[80926]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1098657432' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 07:47:07 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct  2 07:47:07 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct  2 07:47:08 np0005466030 ceph-mon[80926]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  2 07:47:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:08 np0005466030 ceph-mon[80926]: Deploying daemon haproxy.rgw.default.compute-0.zhecum on compute-0
Oct  2 07:47:08 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/3375865598' entity='client.rgw.rgw.compute-0.hlkvzi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 07:47:08 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.102:0/4114646185' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 07:47:08 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 07:47:08 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.101:0/1098657432' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 07:47:08 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  2 07:47:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e41 e41: 3 total, 3 up, 3 in
Oct  2 07:47:08 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 41 pg[10.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [0] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:08 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts
Oct  2 07:47:08 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok
Oct  2 07:47:09 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/3375865598' entity='client.rgw.rgw.compute-0.hlkvzi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  2 07:47:09 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  2 07:47:09 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  2 07:47:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e42 e42: 3 total, 3 up, 3 in
Oct  2 07:47:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct  2 07:47:09 np0005466030 ceph-mon[80926]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2318512383' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 07:47:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:10 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 07:47:10 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.101:0/2318512383' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 07:47:10 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 07:47:10 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.102:0/3234087284' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 07:47:10 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  2 07:47:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Oct  2 07:47:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct  2 07:47:10 np0005466030 ceph-mon[80926]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2318512383' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 07:47:11 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.1b deep-scrub starts
Oct  2 07:47:11 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.1b deep-scrub ok
Oct  2 07:47:11 np0005466030 ceph-mon[80926]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  2 07:47:11 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:11 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  2 07:47:11 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  2 07:47:11 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  2 07:47:11 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 07:47:11 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.101:0/2318512383' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 07:47:11 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 07:47:11 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.102:0/3234087284' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 07:47:11 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  2 07:47:11 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
Oct  2 07:47:12 np0005466030 radosgw[82922]: LDAP not started since no server URIs were provided in the configuration.
Oct  2 07:47:12 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-rgw-rgw-compute-1-vuotmz[82918]: 2025-10-02T11:47:12.565+0000 7f932dfdc940 -1 LDAP not started since no server URIs were provided in the configuration.
Oct  2 07:47:12 np0005466030 radosgw[82922]: framework: beast
Oct  2 07:47:12 np0005466030 radosgw[82922]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct  2 07:47:12 np0005466030 radosgw[82922]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct  2 07:47:12 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct  2 07:47:12 np0005466030 radosgw[82922]: starting handler: beast
Oct  2 07:47:12 np0005466030 radosgw[82922]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 07:47:12 np0005466030 radosgw[82922]: mgrc service_daemon_register rgw.24140 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.vuotmz,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864104,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=16ba9875-e611-4c67-897a-e19079014af6,zone_name=default,zonegroup_id=407d395c-624c-4136-be08-de285eb61d42,zonegroup_name=default}
Oct  2 07:47:12 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct  2 07:47:12 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct  2 07:47:12 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct  2 07:47:12 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Oct  2 07:47:12 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct  2 07:47:12 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct  2 07:47:13 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:13 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:13 np0005466030 ceph-mon[80926]: from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  2 07:47:13 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  2 07:47:13 np0005466030 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  2 07:47:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.002999973s ======
Oct  2 07:47:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:14.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002999973s
Oct  2 07:47:14 np0005466030 ceph-mon[80926]: Deploying daemon haproxy.rgw.default.compute-2.zptkij on compute-2
Oct  2 07:47:14 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Oct  2 07:47:14 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Oct  2 07:47:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:16.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:18.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:18.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:19 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Oct  2 07:47:19 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Oct  2 07:47:20 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:20 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:20 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:20.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:20.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:20 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.c deep-scrub starts
Oct  2 07:47:20 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.c deep-scrub ok
Oct  2 07:47:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:21 np0005466030 ceph-mon[80926]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  2 07:47:21 np0005466030 ceph-mon[80926]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  2 07:47:21 np0005466030 ceph-mon[80926]: Deploying daemon keepalived.rgw.default.compute-2.emwnjv on compute-2
Oct  2 07:47:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:22.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:22.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:23 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Oct  2 07:47:23 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Oct  2 07:47:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:24.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:47:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:24.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:47:24 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.f scrub starts
Oct  2 07:47:24 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.f scrub ok
Oct  2 07:47:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:26.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:26 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.e scrub starts
Oct  2 07:47:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:47:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:26.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:47:26 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.e scrub ok
Oct  2 07:47:27 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct  2 07:47:27 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct  2 07:47:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:28.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:28 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Oct  2 07:47:28 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Oct  2 07:47:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:28.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:29 np0005466030 ceph-mon[80926]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  2 07:47:29 np0005466030 ceph-mon[80926]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  2 07:47:29 np0005466030 ceph-mon[80926]: Deploying daemon keepalived.rgw.default.compute-0.nghmbz on compute-0
Oct  2 07:47:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:30.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:30.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:31 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Oct  2 07:47:31 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Oct  2 07:47:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:32.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:32.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:33 np0005466030 podman[83758]: 2025-10-02 11:47:33.968696081 +0000 UTC m=+0.138069976 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:47:34 np0005466030 podman[83758]: 2025-10-02 11:47:34.09185799 +0000 UTC m=+0.261231885 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:47:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:34.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:34 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct  2 07:47:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Oct  2 07:47:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:34.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:34 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct  2 07:47:34 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct  2 07:47:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Oct  2 07:47:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct  2 07:47:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:47:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:47:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:47:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:36.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Oct  2 07:47:36 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 47 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47 pruub=8.836661339s) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active pruub 125.470329285s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:36 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 47 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47 pruub=8.836661339s) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown pruub 125.470329285s@ mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:47:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:47:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:47:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct  2 07:47:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:47:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:47:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct  2 07:47:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:47:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:36.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.15( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.16( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.c( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1c( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.4( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1f( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1d( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.12( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.10( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.13( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.11( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.17( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.14( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.b( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.8( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.9( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.e( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.6( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.a( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.5( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.7( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.3( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.2( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.d( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.f( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1e( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.19( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.18( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1b( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1a( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.16( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1d( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.12( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.10( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.17( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.14( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.0( empty local-lis/les=47/48 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.7( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.d( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.19( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:38.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:38.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Oct  2 07:47:38 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:47:38 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:47:38 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:47:38 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:47:38 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:47:38 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:47:38 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:47:38 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  2 07:47:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Oct  2 07:47:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  2 07:47:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:40.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:47:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:40.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:47:40 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:47:40 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  2 07:47:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 51 pg[10.0( v 41'48 (0'0,41'48] local-lis/les=40/41 n=8 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=51 pruub=14.794152260s) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 41'47 mlcod 41'47 active pruub 136.695800781s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 51 pg[10.0( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=51 pruub=14.794152260s) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 41'47 mlcod 0'0 unknown pruub 136.695800781s@ mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1b( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.11( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.18( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1( v 41'48 (0'0,41'48] local-lis/les=40/41 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.7( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.9( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.12( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.10( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1f( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1e( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1d( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1c( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1a( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.19( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.6( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.5( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.4( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.3( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.b( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.8( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.d( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.a( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.c( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.e( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.f( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.2( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.13( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.14( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.15( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.16( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.17( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1b( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.11( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:47:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  2 07:47:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.18( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.9( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.12( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1f( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1e( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.10( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1c( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1d( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1a( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.19( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.6( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.5( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.b( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.3( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.4( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.8( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.d( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.a( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.c( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.e( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.f( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.0( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 41'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.13( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.2( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.14( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.15( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.16( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.7( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.17( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:42.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:42 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct  2 07:47:42 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct  2 07:47:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:42.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.dtavud", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  2 07:47:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.dtavud", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  2 07:47:43 np0005466030 ceph-mon[80926]: Deploying daemon mds.cephfs.compute-2.dtavud on compute-2
Oct  2 07:47:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:44.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).mds e3 new map
Oct  2 07:47:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0113#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:46:53.022688+0000#012modified#0112025-10-02T11:47:44.341938+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.dtavud{0:24154} state up:creating seq 1 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Oct  2 07:47:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:44.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:44 np0005466030 ceph-mon[80926]: daemon mds.cephfs.compute-2.dtavud assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  2 07:47:44 np0005466030 ceph-mon[80926]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  2 07:47:44 np0005466030 ceph-mon[80926]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  2 07:47:44 np0005466030 ceph-mon[80926]: Cluster is now healthy
Oct  2 07:47:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:44 np0005466030 ceph-mon[80926]: daemon mds.cephfs.compute-2.dtavud is now active in filesystem cephfs as rank 0
Oct  2 07:47:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.yqiqns", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  2 07:47:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.yqiqns", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  2 07:47:45 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct  2 07:47:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).mds e4 new map
Oct  2 07:47:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:46:53.022688+0000#012modified#0112025-10-02T11:47:45.438767+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Oct  2 07:47:45 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct  2 07:47:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:45 np0005466030 ceph-mon[80926]: Deploying daemon mds.cephfs.compute-0.yqiqns on compute-0
Oct  2 07:47:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:46.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:46.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).mds e5 new map
Oct  2 07:47:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:46:53.022688+0000#012modified#0112025-10-02T11:47:45.438767+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 07:47:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).mds e6 new map
Oct  2 07:47:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).mds e6 print_map#012e6#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:46:53.022688+0000#012modified#0112025-10-02T11:47:45.438767+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 07:47:46 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:46 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:46 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:46 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bhscyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  2 07:47:46 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bhscyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  2 07:47:47 np0005466030 podman[84005]: 2025-10-02 11:47:47.106432789 +0000 UTC m=+0.038568358 container create a614afd1b6468f66cca34b5a72512ecb158c2be12e621197397a9581e20ca9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_vaughan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:47:47 np0005466030 systemd[1]: Started libpod-conmon-a614afd1b6468f66cca34b5a72512ecb158c2be12e621197397a9581e20ca9be.scope.
Oct  2 07:47:47 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:47:47 np0005466030 podman[84005]: 2025-10-02 11:47:47.162633715 +0000 UTC m=+0.094769304 container init a614afd1b6468f66cca34b5a72512ecb158c2be12e621197397a9581e20ca9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_vaughan, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:47:47 np0005466030 podman[84005]: 2025-10-02 11:47:47.169904998 +0000 UTC m=+0.102040577 container start a614afd1b6468f66cca34b5a72512ecb158c2be12e621197397a9581e20ca9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_vaughan, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 07:47:47 np0005466030 podman[84005]: 2025-10-02 11:47:47.173451816 +0000 UTC m=+0.105587395 container attach a614afd1b6468f66cca34b5a72512ecb158c2be12e621197397a9581e20ca9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_vaughan, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:47:47 np0005466030 upbeat_vaughan[84021]: 167 167
Oct  2 07:47:47 np0005466030 systemd[1]: libpod-a614afd1b6468f66cca34b5a72512ecb158c2be12e621197397a9581e20ca9be.scope: Deactivated successfully.
Oct  2 07:47:47 np0005466030 podman[84005]: 2025-10-02 11:47:47.174807628 +0000 UTC m=+0.106943227 container died a614afd1b6468f66cca34b5a72512ecb158c2be12e621197397a9581e20ca9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_vaughan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 07:47:47 np0005466030 podman[84005]: 2025-10-02 11:47:47.086928324 +0000 UTC m=+0.019063923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:47:47 np0005466030 systemd[1]: var-lib-containers-storage-overlay-e172b143675e0c87f20cfabedbb62eb2220fea5b6c653b814f1255bf8432ccfa-merged.mount: Deactivated successfully.
Oct  2 07:47:47 np0005466030 podman[84005]: 2025-10-02 11:47:47.20992404 +0000 UTC m=+0.142059619 container remove a614afd1b6468f66cca34b5a72512ecb158c2be12e621197397a9581e20ca9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_vaughan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:47:47 np0005466030 systemd[1]: libpod-conmon-a614afd1b6468f66cca34b5a72512ecb158c2be12e621197397a9581e20ca9be.scope: Deactivated successfully.
Oct  2 07:47:47 np0005466030 systemd[1]: Reloading.
Oct  2 07:47:47 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:47 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:47 np0005466030 systemd[1]: Reloading.
Oct  2 07:47:47 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:47:47 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:47:47 np0005466030 systemd[1]: Starting Ceph mds.cephfs.compute-1.bhscyq for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct  2 07:47:48 np0005466030 podman[84164]: 2025-10-02 11:47:48.037961224 +0000 UTC m=+0.038124316 container create 43616804aac506f055db4a47f2acfcbab5fac1f11a4a9f7cf32b72d97f3e223c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mds-cephfs-compute-1-bhscyq, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 07:47:48 np0005466030 ceph-mon[80926]: Deploying daemon mds.cephfs.compute-1.bhscyq on compute-1
Oct  2 07:47:48 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71d9589f15a69b55ea50ee11acdc5a36f32535426422bcc69c50f5b3942c848/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 07:47:48 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71d9589f15a69b55ea50ee11acdc5a36f32535426422bcc69c50f5b3942c848/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 07:47:48 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71d9589f15a69b55ea50ee11acdc5a36f32535426422bcc69c50f5b3942c848/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 07:47:48 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71d9589f15a69b55ea50ee11acdc5a36f32535426422bcc69c50f5b3942c848/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.bhscyq supports timestamps until 2038 (0x7fffffff)
Oct  2 07:47:48 np0005466030 podman[84164]: 2025-10-02 11:47:48.113042777 +0000 UTC m=+0.113205879 container init 43616804aac506f055db4a47f2acfcbab5fac1f11a4a9f7cf32b72d97f3e223c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mds-cephfs-compute-1-bhscyq, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  2 07:47:48 np0005466030 podman[84164]: 2025-10-02 11:47:48.020433649 +0000 UTC m=+0.020596761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:47:48 np0005466030 podman[84164]: 2025-10-02 11:47:48.118436901 +0000 UTC m=+0.118599993 container start 43616804aac506f055db4a47f2acfcbab5fac1f11a4a9f7cf32b72d97f3e223c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mds-cephfs-compute-1-bhscyq, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:47:48 np0005466030 bash[84164]: 43616804aac506f055db4a47f2acfcbab5fac1f11a4a9f7cf32b72d97f3e223c
Oct  2 07:47:48 np0005466030 systemd[1]: Started Ceph mds.cephfs.compute-1.bhscyq for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct  2 07:47:48 np0005466030 ceph-mds[84183]: set uid:gid to 167:167 (ceph:ceph)
Oct  2 07:47:48 np0005466030 ceph-mds[84183]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Oct  2 07:47:48 np0005466030 ceph-mds[84183]: main not setting numa affinity
Oct  2 07:47:48 np0005466030 ceph-mds[84183]: pidfile_write: ignore empty --pid-file
Oct  2 07:47:48 np0005466030 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mds-cephfs-compute-1-bhscyq[84179]: starting mds.cephfs.compute-1.bhscyq at 
Oct  2 07:47:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:48.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Oct  2 07:47:48 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Updating MDS map to version 6 from mon.2
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[8.19( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.1a( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[8.10( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[8.12( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.1e( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.1c( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.1d( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.1b( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[8.18( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[8.1b( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[8.4( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.7( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.4( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.5( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[8.8( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.f( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.1( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.12( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[8.17( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.14( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[8.14( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.329748154s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848831177s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.329705238s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848831177s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.15( v 52'51 (0'0,52'51] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.548415184s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=52'49 lcod 52'50 mlcod 52'50 active pruub 138.067703247s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.15( v 52'51 (0'0,52'51] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.548357964s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=52'49 lcod 52'50 mlcod 0'0 unknown NOTIFY pruub 138.067703247s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.14( v 52'51 (0'0,52'51] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.548274994s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=52'49 lcod 52'50 mlcod 52'50 active pruub 138.067687988s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.14( v 52'51 (0'0,52'51] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.548229218s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=52'49 lcod 52'50 mlcod 0'0 unknown NOTIFY pruub 138.067687988s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.329261780s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848815918s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.329158783s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848815918s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328984261s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848663330s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.13( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.547948837s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067672729s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328922272s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848663330s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.13( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.547911644s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067672729s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.1e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328830719s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848678589s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.2( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.547788620s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067672729s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.1e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328777313s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848678589s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.2( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.547757149s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067672729s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328403473s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848556519s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.f( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.547414780s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067642212s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328350067s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848556519s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.f( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.547387123s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067642212s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328087807s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848464966s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328059196s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848464966s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.327667236s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848220825s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.8( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.546947479s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067520142s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.327629089s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848220825s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.8( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.546919823s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067520142s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328060150s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848815918s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328017235s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848815918s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.327173233s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848052979s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.3( v 52'51 (0'0,52'51] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.546610832s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=52'49 lcod 52'50 mlcod 52'50 active pruub 138.067504883s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.4( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.546586037s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067504883s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.327144623s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848052979s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.3( v 52'51 (0'0,52'51] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.546566963s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=52'49 lcod 52'50 mlcod 0'0 unknown NOTIFY pruub 138.067504883s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.4( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.546521187s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067504883s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326650620s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847885132s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326511383s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847793579s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.5( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.546118736s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067413330s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326592445s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847885132s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.14( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326425552s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847717285s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326475143s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847793579s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.5( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.546095848s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067413330s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.14( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326395035s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847717285s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.19( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.545853615s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067321777s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326023102s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847549438s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.19( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.545831680s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067321777s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325993538s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847549438s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.10( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325842857s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847412109s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326825142s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847946167s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.10( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325815201s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847412109s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325885773s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847534180s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325865746s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847534180s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.1e( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.545518875s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067214966s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.1d( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325510025s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847305298s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.10( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.545374870s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067184448s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.1e( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.545417786s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067214966s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.1d( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325470924s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847305298s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325381279s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847229004s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.10( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.545350075s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067184448s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325360298s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847229004s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.12( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.545221329s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067184448s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.12( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.545179367s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067184448s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325210571s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847229004s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325180054s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847229004s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.1( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.544771194s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.066909790s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326684952s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848831177s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.1( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.544737816s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.066909790s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326654434s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848831177s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.16( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.324884415s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847106934s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.18( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.544631004s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.066848755s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.18( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.544609070s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.066848755s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.1b( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.532390594s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.054718018s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.1b( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.532342911s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.054718018s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.11( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.532333374s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.054794312s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.16( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.324854851s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847106934s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.11( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.532307625s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.054794312s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325004578s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847946167s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:47:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:47:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:48.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:47:49 np0005466030 podman[84423]: 2025-10-02 11:47:49.468377193 +0000 UTC m=+0.055525617 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).mds e7 new map
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).mds e7 print_map#012e7#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0117#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:46:53.022688+0000#012modified#0112025-10-02T11:47:49.237571+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.bhscyq{-1:24155} state up:standby seq 1 addr [v2:192.168.122.101:6804/3876031050,v1:192.168.122.101:6805/3876031050] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 07:47:49 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Updating MDS map to version 7 from mon.2
Oct  2 07:47:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Oct  2 07:47:49 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Monitors have assigned me to become a standby.
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[8.12( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[8.19( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.1d( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[8.18( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[8.4( v 37'4 (0'0,37'4] local-lis/les=53/54 n=1 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.7( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.4( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.5( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[8.8( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.1( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[8.17( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.f( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.14( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[8.14( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[8.10( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[8.1b( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:49 np0005466030 podman[84423]: 2025-10-02 11:47:49.588560192 +0000 UTC m=+0.175708606 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:47:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:50.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:50.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:50 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:50 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).mds e8 new map
Oct  2 07:47:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).mds e8 print_map#012e8#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0117#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:46:53.022688+0000#012modified#0112025-10-02T11:47:49.237571+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.bhscyq{-1:24155} state up:standby seq 1 addr [v2:192.168.122.101:6804/3876031050,v1:192.168.122.101:6805/3876031050] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 07:47:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:47:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:47:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:52.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:52 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct  2 07:47:52 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct  2 07:47:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:52.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).mds e9 new map
Oct  2 07:47:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).mds e9 print_map#012e9#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0117#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-02T11:46:53.022688+0000#012modified#0112025-10-02T11:47:49.237571+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.bhscyq{-1:24155} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/3876031050,v1:192.168.122.101:6805/3876031050] compat {c=[1],r=[1],i=[7ff]}]
Oct  2 07:47:52 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Updating MDS map to version 9 from mon.2
Oct  2 07:47:53 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.c deep-scrub starts
Oct  2 07:47:53 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.c deep-scrub ok
Oct  2 07:47:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:54.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:54.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:47:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  2 07:47:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  2 07:47:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:47:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:56.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:47:56 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.a scrub starts
Oct  2 07:47:56 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.a scrub ok
Oct  2 07:47:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:47:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:56.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:47:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Oct  2 07:47:56 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  2 07:47:56 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  2 07:47:57 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.9 deep-scrub starts
Oct  2 07:47:57 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.9 deep-scrub ok
Oct  2 07:47:57 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:57 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:57 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:57 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:58 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  2 07:47:58 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  2 07:47:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:47:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:58.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:47:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Oct  2 07:47:58 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 56 pg[6.7( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56) [0] r=0 lpr=56 pi=[53,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:58 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 56 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56) [0] r=0 lpr=56 pi=[53,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:58 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 56 pg[6.3( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56) [0] r=0 lpr=56 pi=[53,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:58 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56) [0] r=0 lpr=56 pi=[53,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:47:58 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 56 pg[6.2( empty local-lis/les=55/56 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:58 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 56 pg[6.e( v 52'3 lc 52'1 (0'0,52'3] local-lis/les=55/56 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=52'3 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:58 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 56 pg[6.a( v 52'1 (0'0,52'1] local-lis/les=55/56 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=52'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:58 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 56 pg[6.6( v 53'1 lc 0'0 (0'0,53'1] local-lis/les=55/56 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=53'1 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:47:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:47:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:58.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:47:59 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct  2 07:47:59 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct  2 07:47:59 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  2 07:47:59 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  2 07:47:59 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:59 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:47:59 np0005466030 ceph-mon[80926]: Reconfiguring mon.compute-0 (monmap changed)...
Oct  2 07:47:59 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  2 07:47:59 np0005466030 ceph-mon[80926]: Reconfiguring daemon mon.compute-0 on compute-0
Oct  2 07:47:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Oct  2 07:47:59 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 57 pg[6.3( v 52'2 lc 0'0 (0'0,52'2] local-lis/les=56/57 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56) [0] r=0 lpr=56 pi=[53,56)/1 crt=52'2 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:59 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 57 pg[6.f( v 52'5 lc 52'1 (0'0,52'5] local-lis/les=56/57 n=3 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56) [0] r=0 lpr=56 pi=[53,56)/1 crt=52'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:59 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 57 pg[6.7( v 52'2 lc 52'1 (0'0,52'2] local-lis/les=56/57 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56) [0] r=0 lpr=56 pi=[53,56)/1 crt=52'2 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:47:59 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 57 pg[6.b( v 52'3 lc 0'0 (0'0,52'3] local-lis/les=56/57 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56) [0] r=0 lpr=56 pi=[53,56)/1 crt=52'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:00.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:00 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:00 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:00 np0005466030 ceph-mon[80926]: Reconfiguring mgr.compute-0.unmtoh (monmap changed)...
Oct  2 07:48:00 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.unmtoh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  2 07:48:00 np0005466030 ceph-mon[80926]: Reconfiguring daemon mgr.compute-0.unmtoh on compute-0
Oct  2 07:48:00 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:00 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:00 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  2 07:48:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Oct  2 07:48:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:00.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:01 np0005466030 ceph-mon[80926]: Reconfiguring crash.compute-0 (monmap changed)...
Oct  2 07:48:01 np0005466030 ceph-mon[80926]: Reconfiguring daemon crash.compute-0 on compute-0
Oct  2 07:48:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  2 07:48:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Oct  2 07:48:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:02.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:02 np0005466030 podman[84846]: 2025-10-02 11:48:02.41030056 +0000 UTC m=+0.036072092 container create c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kirch, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 07:48:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Oct  2 07:48:02 np0005466030 ceph-mon[80926]: Reconfiguring osd.1 (monmap changed)...
Oct  2 07:48:02 np0005466030 ceph-mon[80926]: Reconfiguring daemon osd.1 on compute-0
Oct  2 07:48:02 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:02 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:02 np0005466030 ceph-mon[80926]: Reconfiguring crash.compute-1 (monmap changed)...
Oct  2 07:48:02 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  2 07:48:02 np0005466030 ceph-mon[80926]: Reconfiguring daemon crash.compute-1 on compute-1
Oct  2 07:48:02 np0005466030 systemd[1]: Started libpod-conmon-c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145.scope.
Oct  2 07:48:02 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:48:02 np0005466030 podman[84846]: 2025-10-02 11:48:02.395016473 +0000 UTC m=+0.020788035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:48:02 np0005466030 podman[84846]: 2025-10-02 11:48:02.493067918 +0000 UTC m=+0.118839470 container init c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 07:48:02 np0005466030 podman[84846]: 2025-10-02 11:48:02.50003948 +0000 UTC m=+0.125811012 container start c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  2 07:48:02 np0005466030 podman[84846]: 2025-10-02 11:48:02.503089963 +0000 UTC m=+0.128861525 container attach c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 07:48:02 np0005466030 exciting_kirch[84862]: 167 167
Oct  2 07:48:02 np0005466030 systemd[1]: libpod-c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145.scope: Deactivated successfully.
Oct  2 07:48:02 np0005466030 conmon[84862]: conmon c8414b0e69d120cf0e8e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145.scope/container/memory.events
Oct  2 07:48:02 np0005466030 podman[84846]: 2025-10-02 11:48:02.5059402 +0000 UTC m=+0.131711732 container died c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kirch, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:48:02 np0005466030 systemd[1]: var-lib-containers-storage-overlay-dee8b04d7bc5a89e524340770c81017d6918c092cda763fd8ec935dcb90901f3-merged.mount: Deactivated successfully.
Oct  2 07:48:02 np0005466030 podman[84846]: 2025-10-02 11:48:02.545030744 +0000 UTC m=+0.170802276 container remove c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:48:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:02.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:02 np0005466030 systemd[1]: libpod-conmon-c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145.scope: Deactivated successfully.
Oct  2 07:48:03 np0005466030 podman[84998]: 2025-10-02 11:48:03.145382156 +0000 UTC m=+0.036703942 container create ac5ba940e996dd87e2132dad09af9afc6ce2953c445e80b0c3237a96ff4dcf96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 07:48:03 np0005466030 systemd[1]: Started libpod-conmon-ac5ba940e996dd87e2132dad09af9afc6ce2953c445e80b0c3237a96ff4dcf96.scope.
Oct  2 07:48:03 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:48:03 np0005466030 podman[84998]: 2025-10-02 11:48:03.198730325 +0000 UTC m=+0.090052131 container init ac5ba940e996dd87e2132dad09af9afc6ce2953c445e80b0c3237a96ff4dcf96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_booth, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 07:48:03 np0005466030 podman[84998]: 2025-10-02 11:48:03.203431599 +0000 UTC m=+0.094753385 container start ac5ba940e996dd87e2132dad09af9afc6ce2953c445e80b0c3237a96ff4dcf96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:48:03 np0005466030 podman[84998]: 2025-10-02 11:48:03.20642491 +0000 UTC m=+0.097746696 container attach ac5ba940e996dd87e2132dad09af9afc6ce2953c445e80b0c3237a96ff4dcf96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct  2 07:48:03 np0005466030 dreamy_booth[85015]: 167 167
Oct  2 07:48:03 np0005466030 systemd[1]: libpod-ac5ba940e996dd87e2132dad09af9afc6ce2953c445e80b0c3237a96ff4dcf96.scope: Deactivated successfully.
Oct  2 07:48:03 np0005466030 podman[84998]: 2025-10-02 11:48:03.207795122 +0000 UTC m=+0.099116908 container died ac5ba940e996dd87e2132dad09af9afc6ce2953c445e80b0c3237a96ff4dcf96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  2 07:48:03 np0005466030 podman[84998]: 2025-10-02 11:48:03.129297215 +0000 UTC m=+0.020619021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:48:03 np0005466030 systemd[1]: var-lib-containers-storage-overlay-c2513a7702a38d7e0b3e753ba39b3d2ef01fe13587df930adde07c647000c78e-merged.mount: Deactivated successfully.
Oct  2 07:48:03 np0005466030 podman[84998]: 2025-10-02 11:48:03.24736865 +0000 UTC m=+0.138690436 container remove ac5ba940e996dd87e2132dad09af9afc6ce2953c445e80b0c3237a96ff4dcf96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  2 07:48:03 np0005466030 systemd[1]: libpod-conmon-ac5ba940e996dd87e2132dad09af9afc6ce2953c445e80b0c3237a96ff4dcf96.scope: Deactivated successfully.
Oct  2 07:48:03 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:03 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:03 np0005466030 ceph-mon[80926]: Reconfiguring osd.0 (monmap changed)...
Oct  2 07:48:03 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  2 07:48:03 np0005466030 ceph-mon[80926]: Reconfiguring daemon osd.0 on compute-1
Oct  2 07:48:03 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:03 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:03 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  2 07:48:03 np0005466030 podman[85159]: 2025-10-02 11:48:03.86301592 +0000 UTC m=+0.034652750 container create 7a7c9cce145052aab823a88af6a2cde47765884c00925c922a7b911e0bdbcb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_saha, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:48:03 np0005466030 systemd[1]: Started libpod-conmon-7a7c9cce145052aab823a88af6a2cde47765884c00925c922a7b911e0bdbcb16.scope.
Oct  2 07:48:03 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:48:03 np0005466030 podman[85159]: 2025-10-02 11:48:03.930470959 +0000 UTC m=+0.102107899 container init 7a7c9cce145052aab823a88af6a2cde47765884c00925c922a7b911e0bdbcb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 07:48:03 np0005466030 podman[85159]: 2025-10-02 11:48:03.937865746 +0000 UTC m=+0.109502576 container start 7a7c9cce145052aab823a88af6a2cde47765884c00925c922a7b911e0bdbcb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  2 07:48:03 np0005466030 jolly_saha[85175]: 167 167
Oct  2 07:48:03 np0005466030 podman[85159]: 2025-10-02 11:48:03.942838017 +0000 UTC m=+0.114474877 container attach 7a7c9cce145052aab823a88af6a2cde47765884c00925c922a7b911e0bdbcb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_saha, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  2 07:48:03 np0005466030 podman[85159]: 2025-10-02 11:48:03.846237598 +0000 UTC m=+0.017874448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 07:48:03 np0005466030 podman[85159]: 2025-10-02 11:48:03.944121367 +0000 UTC m=+0.115758197 container died 7a7c9cce145052aab823a88af6a2cde47765884c00925c922a7b911e0bdbcb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:48:03 np0005466030 systemd[1]: libpod-7a7c9cce145052aab823a88af6a2cde47765884c00925c922a7b911e0bdbcb16.scope: Deactivated successfully.
Oct  2 07:48:03 np0005466030 systemd[1]: var-lib-containers-storage-overlay-391ae3521f57565f9a1007637c7bb2c0736f9f8d294347b04a392e73e9a5daa2-merged.mount: Deactivated successfully.
Oct  2 07:48:03 np0005466030 podman[85159]: 2025-10-02 11:48:03.979901249 +0000 UTC m=+0.151538079 container remove 7a7c9cce145052aab823a88af6a2cde47765884c00925c922a7b911e0bdbcb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_saha, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:48:03 np0005466030 systemd[1]: libpod-conmon-7a7c9cce145052aab823a88af6a2cde47765884c00925c922a7b911e0bdbcb16.scope: Deactivated successfully.
Oct  2 07:48:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:04.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:04 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct  2 07:48:04 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct  2 07:48:04 np0005466030 ceph-mon[80926]: Reconfiguring mon.compute-1 (monmap changed)...
Oct  2 07:48:04 np0005466030 ceph-mon[80926]: Reconfiguring daemon mon.compute-1 on compute-1
Oct  2 07:48:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  2 07:48:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:04.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:05 np0005466030 podman[85364]: 2025-10-02 11:48:05.443608944 +0000 UTC m=+0.054151805 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 07:48:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:05 np0005466030 ceph-mon[80926]: Reconfiguring mon.compute-2 (monmap changed)...
Oct  2 07:48:05 np0005466030 ceph-mon[80926]: Reconfiguring daemon mon.compute-2 on compute-2
Oct  2 07:48:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:05 np0005466030 podman[85364]: 2025-10-02 11:48:05.558683647 +0000 UTC m=+0.169226508 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  2 07:48:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:06.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:06 np0005466030 systemd[1]: session-20.scope: Deactivated successfully.
Oct  2 07:48:06 np0005466030 systemd[1]: session-20.scope: Consumed 7.997s CPU time.
Oct  2 07:48:06 np0005466030 systemd-logind[795]: Session 20 logged out. Waiting for processes to exit.
Oct  2 07:48:06 np0005466030 systemd-logind[795]: Removed session 20.
Oct  2 07:48:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:06.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  2 07:48:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  2 07:48:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Oct  2 07:48:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  2 07:48:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  2 07:48:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:48:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:48:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  2 07:48:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  2 07:48:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:08.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:08.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Oct  2 07:48:08 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:08 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:09 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  2 07:48:09 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  2 07:48:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Oct  2 07:48:09 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 63 pg[6.5( v 52'3 lc 52'1 (0'0,52'3] local-lis/les=62/63 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=52'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:09 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 63 pg[6.d( v 52'3 lc 52'1 (0'0,52'3] local-lis/les=62/63 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=52'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:10.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Oct  2 07:48:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:10.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Oct  2 07:48:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:12.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Oct  2 07:48:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:48:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:12.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:48:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:14.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:48:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:14.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:48:14 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:14 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:48:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:15 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  2 07:48:15 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  2 07:48:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Oct  2 07:48:16 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=67) [0] r=0 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:16 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=67) [0] r=0 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:16 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=67) [0] r=0 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:16 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=67) [0] r=0 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:16 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 67 pg[6.6( v 53'1 (0'0,53'1] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=67 pruub=14.278901100s) [1] r=-1 lpr=67 pi=[55,67)/1 crt=53'1 mlcod 53'1 active pruub 170.618331909s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:16 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 67 pg[6.e( v 52'3 (0'0,52'3] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=67 pruub=14.278839111s) [1] r=-1 lpr=67 pi=[55,67)/1 crt=52'3 mlcod 52'3 active pruub 170.618286133s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:16 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 67 pg[6.e( v 52'3 (0'0,52'3] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=67 pruub=14.278769493s) [1] r=-1 lpr=67 pi=[55,67)/1 crt=52'3 mlcod 0'0 unknown NOTIFY pruub 170.618286133s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:16 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 67 pg[6.6( v 53'1 (0'0,53'1] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=67 pruub=14.278729439s) [1] r=-1 lpr=67 pi=[55,67)/1 crt=53'1 mlcod 0'0 unknown NOTIFY pruub 170.618331909s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:16.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:16.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  2 07:48:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  2 07:48:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Oct  2 07:48:17 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:17 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:17 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:17 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:17 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:17 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:17 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:17 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Oct  2 07:48:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:18.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:48:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:18.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:48:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Oct  2 07:48:19 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 70 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:19 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 70 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:19 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 70 pg[9.6( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:19 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 70 pg[9.6( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:19 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 70 pg[9.e( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:19 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 70 pg[9.e( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:19 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 70 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:19 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 70 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:48:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:20.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:48:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Oct  2 07:48:20 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 71 pg[9.e( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=6 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:20 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 71 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:20 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 71 pg[9.6( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=6 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:20 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 71 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:48:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:20.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:48:21 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Oct  2 07:48:21 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Oct  2 07:48:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:48:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:22.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:48:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Oct  2 07:48:22 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  2 07:48:22 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  2 07:48:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:22.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:23 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  2 07:48:23 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  2 07:48:23 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct  2 07:48:23 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct  2 07:48:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:24.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Oct  2 07:48:24 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 73 pg[6.8( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=73) [0] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:24 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  2 07:48:24 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  2 07:48:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:48:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:24.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:48:25 np0005466030 systemd-logind[795]: New session 34 of user zuul.
Oct  2 07:48:25 np0005466030 systemd[1]: Started Session 34 of User zuul.
Oct  2 07:48:25 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.14 deep-scrub starts
Oct  2 07:48:25 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.14 deep-scrub ok
Oct  2 07:48:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Oct  2 07:48:25 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 74 pg[6.8( empty local-lis/les=73/74 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=73) [0] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:25 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  2 07:48:25 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  2 07:48:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:26 np0005466030 python3.9[85689]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:48:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:26.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Oct  2 07:48:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  2 07:48:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  2 07:48:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:48:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:26.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:48:27 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Oct  2 07:48:27 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Oct  2 07:48:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Oct  2 07:48:27 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  2 07:48:27 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  2 07:48:27 np0005466030 python3.9[85903]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:48:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:48:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:28.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:48:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Oct  2 07:48:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e77 crush map has features 3314933000854323200, adjusting msgr requires
Oct  2 07:48:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e77 crush map has features 432629239337189376, adjusting msgr requires
Oct  2 07:48:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e77 crush map has features 432629239337189376, adjusting msgr requires
Oct  2 07:48:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e77 crush map has features 432629239337189376, adjusting msgr requires
Oct  2 07:48:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:48:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:28.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:48:28 np0005466030 ceph-osd[78262]: osd.0 77 crush map has features 432629239337189376, adjusting msgr requires for clients
Oct  2 07:48:28 np0005466030 ceph-osd[78262]: osd.0 77 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Oct  2 07:48:28 np0005466030 ceph-osd[78262]: osd.0 77 crush map has features 3314933000854323200, adjusting msgr requires for osds
Oct  2 07:48:28 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 77 pg[9.a( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=77) [0] r=0 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:28 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 77 pg[9.1a( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=77) [0] r=0 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:28 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 77 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=77) [0] r=0 lpr=77 pi=[59,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:28 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 77 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=77) [0] r=0 lpr=77 pi=[59,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  2 07:48:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  2 07:48:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.f", "id": [2, 0]}]: dispatch
Oct  2 07:48:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [2, 0]}]: dispatch
Oct  2 07:48:29 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct  2 07:48:29 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct  2 07:48:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Oct  2 07:48:29 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 78 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=78) [0]/[2] r=-1 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:29 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 78 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=78) [0]/[2] r=-1 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:29 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 78 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=78) [0]/[2] r=-1 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:29 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 78 pg[9.a( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[49,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:29 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 78 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=78) [0]/[2] r=-1 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:29 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 78 pg[9.1a( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[49,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:29 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 78 pg[9.1a( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[49,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:29 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 78 pg[9.a( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[49,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:29 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  2 07:48:29 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  2 07:48:29 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.f", "id": [2, 0]}]': finished
Oct  2 07:48:29 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [2, 0]}]': finished
Oct  2 07:48:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:48:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:30.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:48:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Oct  2 07:48:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:30 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 79 pg[6.b( v 52'3 (0'0,52'3] local-lis/les=56/57 n=1 ec=47/21 lis/c=56/56 les/c/f=57/57/0 sis=79 pruub=8.777896881s) [1] r=-1 lpr=79 pi=[56,79)/1 crt=52'3 mlcod 52'3 active pruub 179.649734497s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:30 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 79 pg[6.b( v 52'3 (0'0,52'3] local-lis/les=56/57 n=1 ec=47/21 lis/c=56/56 les/c/f=57/57/0 sis=79 pruub=8.777853012s) [1] r=-1 lpr=79 pi=[56,79)/1 crt=52'3 mlcod 0'0 unknown NOTIFY pruub 179.649734497s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:30.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:30 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  2 07:48:30 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  2 07:48:30 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  2 07:48:30 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  2 07:48:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Oct  2 07:48:31 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 80 pg[9.a( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=78/49 les/c/f=79/50/0 sis=80) [0] r=0 lpr=80 pi=[49,80)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:31 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 80 pg[9.a( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=78/49 les/c/f=79/50/0 sis=80) [0] r=0 lpr=80 pi=[49,80)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:31 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 80 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=78/49 les/c/f=79/50/0 sis=80) [0] r=0 lpr=80 pi=[49,80)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:31 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 80 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=78/49 les/c/f=79/50/0 sis=80) [0] r=0 lpr=80 pi=[49,80)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:31 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 80 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=78/59 les/c/f=79/60/0 sis=80) [0] r=0 lpr=80 pi=[59,80)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:31 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 80 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=78/59 les/c/f=79/60/0 sis=80) [0] r=0 lpr=80 pi=[59,80)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:31 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 80 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=78/59 les/c/f=79/60/0 sis=80) [0] r=0 lpr=80 pi=[59,80)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:31 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 80 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=78/59 les/c/f=79/60/0 sis=80) [0] r=0 lpr=80 pi=[59,80)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:48:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:32.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:48:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Oct  2 07:48:32 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 81 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=6 ec=49/38 lis/c=78/59 les/c/f=79/60/0 sis=80) [0] r=0 lpr=80 pi=[59,80)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:32 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 81 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=78/59 les/c/f=79/60/0 sis=80) [0] r=0 lpr=80 pi=[59,80)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:32 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 81 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=78/49 les/c/f=79/50/0 sis=80) [0] r=0 lpr=80 pi=[49,80)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:32 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 81 pg[9.a( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=6 ec=49/38 lis/c=78/49 les/c/f=79/50/0 sis=80) [0] r=0 lpr=80 pi=[49,80)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:32.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:34.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:34.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:35 np0005466030 systemd[1]: session-34.scope: Deactivated successfully.
Oct  2 07:48:35 np0005466030 systemd[1]: session-34.scope: Consumed 8.018s CPU time.
Oct  2 07:48:35 np0005466030 systemd-logind[795]: Session 34 logged out. Waiting for processes to exit.
Oct  2 07:48:35 np0005466030 systemd-logind[795]: Removed session 34.
Oct  2 07:48:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:36.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:36.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Oct  2 07:48:37 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  2 07:48:37 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  2 07:48:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:38.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:38.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  2 07:48:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  2 07:48:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Oct  2 07:48:40 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 83 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=83) [0] r=0 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:40 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 83 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=83) [0] r=0 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:40 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  2 07:48:40 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  2 07:48:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct  2 07:48:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:40.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct  2 07:48:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:40.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Oct  2 07:48:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 84 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=84) [0]/[2] r=-1 lpr=84 pi=[65,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 84 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=84) [0]/[2] r=-1 lpr=84 pi=[65,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 84 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=84) [0]/[2] r=-1 lpr=84 pi=[65,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:41 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 84 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=84) [0]/[2] r=-1 lpr=84 pi=[65,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  2 07:48:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  2 07:48:41 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.13 deep-scrub starts
Oct  2 07:48:41 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.13 deep-scrub ok
Oct  2 07:48:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Oct  2 07:48:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  2 07:48:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  2 07:48:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  2 07:48:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  2 07:48:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:42.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:42.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:43 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Oct  2 07:48:43 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Oct  2 07:48:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 85 pg[6.e( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=67/67 les/c/f=68/68/0 sis=85) [0] r=0 lpr=85 pi=[67,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Oct  2 07:48:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 86 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=84/65 les/c/f=85/66/0 sis=86) [0] r=0 lpr=86 pi=[65,86)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 86 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=84/65 les/c/f=85/66/0 sis=86) [0] r=0 lpr=86 pi=[65,86)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 86 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=84/65 les/c/f=85/66/0 sis=86) [0] r=0 lpr=86 pi=[65,86)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 86 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=84/65 les/c/f=85/66/0 sis=86) [0] r=0 lpr=86 pi=[65,86)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:43 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 86 pg[6.e( v 52'3 lc 52'1 (0'0,52'3] local-lis/les=85/86 n=1 ec=47/21 lis/c=67/67 les/c/f=68/68/0 sis=85) [0] r=0 lpr=85 pi=[67,85)/1 crt=52'3 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:44 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.1f deep-scrub starts
Oct  2 07:48:44 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.1f deep-scrub ok
Oct  2 07:48:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:44.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Oct  2 07:48:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e87 crush map has features 3314933000852226048, adjusting msgr requires
Oct  2 07:48:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e87 crush map has features 288514051259236352, adjusting msgr requires
Oct  2 07:48:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e87 crush map has features 288514051259236352, adjusting msgr requires
Oct  2 07:48:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e87 crush map has features 288514051259236352, adjusting msgr requires
Oct  2 07:48:44 np0005466030 ceph-osd[78262]: osd.0 87 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  2 07:48:44 np0005466030 ceph-osd[78262]: osd.0 87 crush map has features 288514051259236352 was 432629239337198081, adjusting msgr requires for mons
Oct  2 07:48:44 np0005466030 ceph-osd[78262]: osd.0 87 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  2 07:48:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 87 pg[6.f( v 52'5 (0'0,52'5] local-lis/les=56/57 n=3 ec=47/21 lis/c=56/56 les/c/f=57/57/0 sis=87 pruub=10.973770142s) [1] r=-1 lpr=87 pi=[56,87)/1 crt=52'5 mlcod 52'5 active pruub 195.647384644s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 87 pg[6.f( v 52'5 (0'0,52'5] local-lis/les=56/57 n=3 ec=47/21 lis/c=56/56 les/c/f=57/57/0 sis=87 pruub=10.973541260s) [1] r=-1 lpr=87 pi=[56,87)/1 crt=52'5 mlcod 0'0 unknown NOTIFY pruub 195.647384644s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  2 07:48:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  2 07:48:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 87 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=86/87 n=5 ec=49/38 lis/c=84/65 les/c/f=85/66/0 sis=86) [0] r=0 lpr=86 pi=[65,86)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:44 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 87 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=86/87 n=6 ec=49/38 lis/c=84/65 les/c/f=85/66/0 sis=86) [0] r=0 lpr=86 pi=[65,86)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:44.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:45 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Oct  2 07:48:45 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Oct  2 07:48:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Oct  2 07:48:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  2 07:48:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  2 07:48:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:46 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct  2 07:48:46 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct  2 07:48:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:46.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Oct  2 07:48:46 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct  2 07:48:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:46.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:47 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct  2 07:48:47 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 89 pg[9.10( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=89) [0] r=0 lpr=89 pi=[49,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:48.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:48 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct  2 07:48:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Oct  2 07:48:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 90 pg[9.10( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=90) [0]/[1] r=-1 lpr=90 pi=[49,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 90 pg[9.10( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=90) [0]/[1] r=-1 lpr=90 pi=[49,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:48 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 90 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=90) [0] r=0 lpr=90 pi=[49,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:48.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Oct  2 07:48:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 91 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=91) [0]/[1] r=-1 lpr=91 pi=[49,91)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:49 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 91 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=91) [0]/[1] r=-1 lpr=91 pi=[49,91)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct  2 07:48:49 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.7 deep-scrub starts
Oct  2 07:48:49 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.7 deep-scrub ok
Oct  2 07:48:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:50.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Oct  2 07:48:50 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 92 pg[9.10( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=90/49 les/c/f=91/50/0 sis=92) [0] r=0 lpr=92 pi=[49,92)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:50 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 92 pg[9.10( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=90/49 les/c/f=91/50/0 sis=92) [0] r=0 lpr=92 pi=[49,92)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:50 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 92 pg[9.12( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=92) [0] r=0 lpr=92 pi=[49,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:50 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct  2 07:48:50 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct  2 07:48:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct  2 07:48:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:50.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct  2 07:48:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Oct  2 07:48:51 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 93 pg[9.11( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=91/49 les/c/f=92/50/0 sis=93) [0] r=0 lpr=93 pi=[49,93)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:51 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 93 pg[9.11( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=91/49 les/c/f=92/50/0 sis=93) [0] r=0 lpr=93 pi=[49,93)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:51 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 93 pg[9.12( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=93) [0]/[1] r=-1 lpr=93 pi=[49,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:51 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 93 pg[9.12( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=93) [0]/[1] r=-1 lpr=93 pi=[49,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:48:51 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 93 pg[9.10( v 44'1012 (0'0,44'1012] local-lis/les=92/93 n=6 ec=49/38 lis/c=90/49 les/c/f=91/50/0 sis=92) [0] r=0 lpr=92 pi=[49,92)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:51 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct  2 07:48:51 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct  2 07:48:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:52.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Oct  2 07:48:52 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 94 pg[9.11( v 44'1012 (0'0,44'1012] local-lis/les=93/94 n=6 ec=49/38 lis/c=91/49 les/c/f=92/50/0 sis=93) [0] r=0 lpr=93 pi=[49,93)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:52 np0005466030 systemd-logind[795]: New session 35 of user zuul.
Oct  2 07:48:52 np0005466030 systemd[1]: Started Session 35 of User zuul.
Oct  2 07:48:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:52.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:53 np0005466030 python3.9[86113]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  2 07:48:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Oct  2 07:48:53 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 95 pg[9.12( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=93/49 les/c/f=94/50/0 sis=95) [0] r=0 lpr=95 pi=[49,95)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:48:53 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 95 pg[9.12( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=93/49 les/c/f=94/50/0 sis=95) [0] r=0 lpr=95 pi=[49,95)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:48:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:54.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Oct  2 07:48:54 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 96 pg[9.12( v 44'1012 (0'0,44'1012] local-lis/les=95/96 n=5 ec=49/38 lis/c=93/49 les/c/f=94/50/0 sis=95) [0] r=0 lpr=95 pi=[49,95)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:48:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:54.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:54 np0005466030 python3.9[86287]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:48:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:48:55 np0005466030 python3.9[86443]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:48:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999988s ======
Oct  2 07:48:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:56.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999988s
Oct  2 07:48:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:56.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:56 np0005466030 python3.9[86596]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:48:56 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct  2 07:48:56 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct  2 07:48:57 np0005466030 python3.9[86750]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:48:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Oct  2 07:48:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:48:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:58.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:48:58 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct  2 07:48:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:48:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct  2 07:48:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:58.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct  2 07:48:58 np0005466030 python3.9[86900]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:48:58 np0005466030 network[86917]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:48:58 np0005466030 network[86918]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:48:58 np0005466030 network[86919]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:48:58 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct  2 07:48:58 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct  2 07:48:59 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct  2 07:49:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:00.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Oct  2 07:49:00 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct  2 07:49:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:00.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct  2 07:49:01 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct  2 07:49:01 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct  2 07:49:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:02.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:02 np0005466030 python3.9[87182]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct  2 07:49:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:02.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct  2 07:49:02 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct  2 07:49:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Oct  2 07:49:02 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=99) [0] r=0 lpr=99 pi=[65,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:49:03 np0005466030 python3.9[87332]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:49:03 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct  2 07:49:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Oct  2 07:49:03 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[65,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:03 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[65,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  2 07:49:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:04.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:04 np0005466030 python3.9[87486]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:49:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:04.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  2 07:49:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Oct  2 07:49:04 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 101 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=101 pruub=11.498471260s) [2] r=-1 lpr=101 pi=[70,101)/1 crt=44'1012 mlcod 0'0 active pruub 216.587463379s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:04 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 101 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=101 pruub=11.498307228s) [2] r=-1 lpr=101 pi=[70,101)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 216.587463379s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:49:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:05 np0005466030 python3.9[87644]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:49:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  2 07:49:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Oct  2 07:49:05 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 102 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=102) [2]/[0] r=0 lpr=102 pi=[70,102)/1 crt=44'1012 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:05 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 102 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=102) [2]/[0] r=0 lpr=102 pi=[70,102)/1 crt=44'1012 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:49:05 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 102 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=100/65 les/c/f=101/66/0 sis=102) [0] r=0 lpr=102 pi=[65,102)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:05 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 102 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=100/65 les/c/f=101/66/0 sis=102) [0] r=0 lpr=102 pi=[65,102)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  2 07:49:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:06.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:06 np0005466030 python3.9[87728]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:49:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct  2 07:49:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:06.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct  2 07:49:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Oct  2 07:49:07 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 103 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=102/103 n=5 ec=49/38 lis/c=100/65 les/c/f=101/66/0 sis=102) [0] r=0 lpr=102 pi=[65,102)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:49:07 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 103 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=102/103 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=102) [2]/[0] async=[2] r=0 lpr=102 pi=[70,102)/1 crt=44'1012 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:49:07 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct  2 07:49:07 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct  2 07:49:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Oct  2 07:49:08 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 104 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=102/103 n=5 ec=49/38 lis/c=102/70 les/c/f=103/71/0 sis=104 pruub=14.901468277s) [2] async=[2] r=-1 lpr=104 pi=[70,104)/1 crt=44'1012 mlcod 44'1012 active pruub 223.467895508s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:08 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 104 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=102/103 n=5 ec=49/38 lis/c=102/70 les/c/f=103/71/0 sis=104 pruub=14.901384354s) [2] r=-1 lpr=104 pi=[70,104)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 223.467895508s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:49:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct  2 07:49:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:08.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct  2 07:49:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:08.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Oct  2 07:49:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct  2 07:49:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:10.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct  2 07:49:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:10.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:11 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct  2 07:49:11 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct  2 07:49:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Oct  2 07:49:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:49:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:12.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:49:12 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct  2 07:49:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct  2 07:49:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:12.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct  2 07:49:12 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Oct  2 07:49:12 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Oct  2 07:49:13 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct  2 07:49:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Oct  2 07:49:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:14.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:14 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct  2 07:49:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct  2 07:49:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:14.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct  2 07:49:14 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Oct  2 07:49:14 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Oct  2 07:49:15 np0005466030 podman[87969]: 2025-10-02 11:49:15.011464064 +0000 UTC m=+0.067951652 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 07:49:15 np0005466030 podman[87969]: 2025-10-02 11:49:15.105573952 +0000 UTC m=+0.162061520 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  2 07:49:15 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct  2 07:49:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:16.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:49:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:49:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct  2 07:49:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:49:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:49:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct  2 07:49:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:16.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct  2 07:49:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Oct  2 07:49:16 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Oct  2 07:49:16 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Oct  2 07:49:17 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:49:17 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:49:17 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct  2 07:49:17 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:49:17 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:49:17 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:49:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Oct  2 07:49:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:18.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:18.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct  2 07:49:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Oct  2 07:49:19 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 110 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=110 pruub=9.366613388s) [1] r=-1 lpr=110 pi=[80,110)/1 crt=44'1012 mlcod 0'0 active pruub 228.798370361s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:19 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 110 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=110 pruub=9.366524696s) [1] r=-1 lpr=110 pi=[80,110)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 228.798370361s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:49:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Oct  2 07:49:19 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 111 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=111) [1]/[0] r=0 lpr=111 pi=[80,111)/1 crt=44'1012 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:19 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 111 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=111) [1]/[0] r=0 lpr=111 pi=[80,111)/1 crt=44'1012 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:49:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct  2 07:49:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct  2 07:49:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:20.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct  2 07:49:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Oct  2 07:49:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:20 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 112 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=111/112 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=111) [1]/[0] async=[1] r=0 lpr=111 pi=[80,111)/1 crt=44'1012 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:49:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:20.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct  2 07:49:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct  2 07:49:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Oct  2 07:49:21 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 113 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=111/112 n=5 ec=49/38 lis/c=111/80 les/c/f=112/81/0 sis=113 pruub=14.892210007s) [1] async=[1] r=-1 lpr=113 pi=[80,113)/1 crt=44'1012 mlcod 44'1012 active pruub 236.792495728s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:21 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 113 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=111/112 n=5 ec=49/38 lis/c=111/80 les/c/f=112/81/0 sis=113 pruub=14.892071724s) [1] r=-1 lpr=113 pi=[80,113)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 236.792495728s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:49:21 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Oct  2 07:49:21 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Oct  2 07:49:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:22.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999988s ======
Oct  2 07:49:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:22.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999988s
Oct  2 07:49:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Oct  2 07:49:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Oct  2 07:49:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:24.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:24.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.337771) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765337854, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7188, "num_deletes": 255, "total_data_size": 13612807, "memory_usage": 13840080, "flush_reason": "Manual Compaction"}
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765399451, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 7980851, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 237, "largest_seqno": 7193, "table_properties": {"data_size": 7953161, "index_size": 18103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 79917, "raw_average_key_size": 23, "raw_value_size": 7887090, "raw_average_value_size": 2320, "num_data_blocks": 802, "num_entries": 3399, "num_filter_entries": 3399, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 1759405570, "file_creation_time": 1759405765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 61744 microseconds, and 15498 cpu microseconds.
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.399521) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 7980851 bytes OK
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.399540) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.404740) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.404798) EVENT_LOG_v1 {"time_micros": 1759405765404786, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.404824) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13575364, prev total WAL file size 13575364, number of live WAL files 2.
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.408581) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(7793KB) 8(1648B)]
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765408701, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 7982499, "oldest_snapshot_seqno": -1}
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3148 keys, 7977361 bytes, temperature: kUnknown
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765461592, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 7977361, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7950335, "index_size": 18084, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7877, "raw_key_size": 75739, "raw_average_key_size": 24, "raw_value_size": 7887370, "raw_average_value_size": 2505, "num_data_blocks": 802, "num_entries": 3148, "num_filter_entries": 3148, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759405765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.461912) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 7977361 bytes
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.464185) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.6 rd, 150.5 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(7.6, 0.0 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3404, records dropped: 256 output_compression: NoCompression
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.464222) EVENT_LOG_v1 {"time_micros": 1759405765464208, "job": 4, "event": "compaction_finished", "compaction_time_micros": 52993, "compaction_time_cpu_micros": 16173, "output_level": 6, "num_output_files": 1, "total_output_size": 7977361, "num_input_records": 3404, "num_output_records": 3148, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765466068, "job": 4, "event": "table_file_deletion", "file_number": 14}
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765466148, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.408441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:49:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct  2 07:49:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:26.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Oct  2 07:49:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct  2 07:49:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:26.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct  2 07:49:26 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct  2 07:49:26 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct  2 07:49:27 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct  2 07:49:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:28.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Oct  2 07:49:28 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 118 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=86/87 n=5 ec=49/38 lis/c=86/86 les/c/f=87/87/0 sis=118 pruub=11.965606689s) [2] r=-1 lpr=118 pi=[86,118)/1 crt=44'1012 mlcod 0'0 active pruub 240.732025146s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:28 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 118 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=86/87 n=5 ec=49/38 lis/c=86/86 les/c/f=87/87/0 sis=118 pruub=11.965219498s) [2] r=-1 lpr=118 pi=[86,118)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 240.732025146s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:49:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct  2 07:49:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:28.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:28 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct  2 07:49:28 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct  2 07:49:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Oct  2 07:49:29 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 119 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=86/87 n=5 ec=49/38 lis/c=86/86 les/c/f=87/87/0 sis=119) [2]/[0] r=0 lpr=119 pi=[86,119)/1 crt=44'1012 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:29 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 119 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=86/87 n=5 ec=49/38 lis/c=86/86 les/c/f=87/87/0 sis=119) [2]/[0] r=0 lpr=119 pi=[86,119)/1 crt=44'1012 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:49:29 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct  2 07:49:29 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.b deep-scrub starts
Oct  2 07:49:29 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.b deep-scrub ok
Oct  2 07:49:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:30.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Oct  2 07:49:30 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 120 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=119/120 n=5 ec=49/38 lis/c=86/86 les/c/f=87/87/0 sis=119) [2]/[0] async=[2] r=0 lpr=119 pi=[86,119)/1 crt=44'1012 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:49:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:30.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:30 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct  2 07:49:30 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct  2 07:49:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Oct  2 07:49:32 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 121 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=119/120 n=5 ec=49/38 lis/c=119/86 les/c/f=120/87/0 sis=121 pruub=14.563700676s) [2] async=[2] r=-1 lpr=121 pi=[86,121)/1 crt=44'1012 mlcod 44'1012 active pruub 246.896209717s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:32 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 121 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=119/120 n=5 ec=49/38 lis/c=119/86 les/c/f=120/87/0 sis=121 pruub=14.563613892s) [2] r=-1 lpr=121 pi=[86,121)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 246.896209717s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:49:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:32.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct  2 07:49:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:32.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct  2 07:49:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Oct  2 07:49:32 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 122 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=122 pruub=15.415595055s) [1] r=-1 lpr=122 pi=[70,122)/1 crt=44'1012 mlcod 0'0 active pruub 248.588150024s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:32 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 122 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=122 pruub=15.415527344s) [1] r=-1 lpr=122 pi=[70,122)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 248.588150024s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:49:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct  2 07:49:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct  2 07:49:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Oct  2 07:49:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 123 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=123) [1]/[0] r=0 lpr=123 pi=[70,123)/1 crt=44'1012 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:33 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 123 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=123) [1]/[0] r=0 lpr=123 pi=[70,123)/1 crt=44'1012 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:49:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:34.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct  2 07:49:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:34.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct  2 07:49:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Oct  2 07:49:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  2 07:49:35 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 124 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=124 pruub=9.390641212s) [1] r=-1 lpr=124 pi=[80,124)/1 crt=44'1012 mlcod 0'0 active pruub 244.799011230s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:35 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 124 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=124 pruub=9.390583992s) [1] r=-1 lpr=124 pi=[80,124)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 244.799011230s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:49:35 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 124 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=123/124 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[70,123)/1 crt=44'1012 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:49:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  2 07:49:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Oct  2 07:49:36 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 125 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=125) [1]/[0] r=0 lpr=125 pi=[80,125)/1 crt=44'1012 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:36 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 125 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=125) [1]/[0] r=0 lpr=125 pi=[80,125)/1 crt=44'1012 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  2 07:49:36 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 125 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=123/124 n=5 ec=49/38 lis/c=123/70 les/c/f=124/71/0 sis=125 pruub=15.259602547s) [1] async=[1] r=-1 lpr=125 pi=[70,125)/1 crt=44'1012 mlcod 44'1012 active pruub 251.688507080s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:36 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 125 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=123/124 n=5 ec=49/38 lis/c=123/70 les/c/f=124/71/0 sis=125 pruub=15.259287834s) [1] r=-1 lpr=125 pi=[70,125)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 251.688507080s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:49:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:36.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:36 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct  2 07:49:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:36.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:36 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct  2 07:49:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Oct  2 07:49:37 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 126 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=125/126 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=125) [1]/[0] async=[1] r=0 lpr=125 pi=[80,125)/1 crt=44'1012 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  2 07:49:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:38.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Oct  2 07:49:38 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 127 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=125/126 n=5 ec=49/38 lis/c=125/80 les/c/f=126/81/0 sis=127 pruub=15.016470909s) [1] async=[1] r=-1 lpr=127 pi=[80,127)/1 crt=44'1012 mlcod 44'1012 active pruub 253.799392700s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  2 07:49:38 np0005466030 ceph-osd[78262]: osd.0 pg_epoch: 127 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=125/126 n=5 ec=49/38 lis/c=125/80 les/c/f=126/81/0 sis=127 pruub=15.016346931s) [1] r=-1 lpr=127 pi=[80,127)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 253.799392700s@ mbc={}] state<Start>: transitioning to Stray
Oct  2 07:49:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:38.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Oct  2 07:49:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:40.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct  2 07:49:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:40.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct  2 07:49:41 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.e scrub starts
Oct  2 07:49:41 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.e scrub ok
Oct  2 07:49:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:42.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:42.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:44.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:44.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:46.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:46.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:48.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:48.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:50 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.16 deep-scrub starts
Oct  2 07:49:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:50.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:50 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.16 deep-scrub ok
Oct  2 07:49:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:49:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:50.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:49:51 np0005466030 python3.9[88503]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:49:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:52.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:52.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:53 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Oct  2 07:49:53 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Oct  2 07:49:53 np0005466030 python3.9[88790]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  2 07:49:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:54.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:54 np0005466030 python3.9[88942]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  2 07:49:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:49:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:54.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:49:55 np0005466030 python3.9[89094]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:49:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:49:56 np0005466030 python3.9[89246]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  2 07:49:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:56.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:56 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct  2 07:49:56 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct  2 07:49:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:56.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:57 np0005466030 python3.9[89398]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:49:58 np0005466030 python3.9[89550]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:49:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:49:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:58.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:49:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:49:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:49:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:58.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:49:58 np0005466030 python3.9[89628]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:50:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:50:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:00.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:50:00 np0005466030 python3.9[89780]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  2 07:50:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:00.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:00 np0005466030 ceph-mon[80926]: overall HEALTH_OK
Oct  2 07:50:01 np0005466030 python3.9[89933]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  2 07:50:02 np0005466030 python3.9[90086]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 07:50:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:50:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:02.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:50:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:02.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:03 np0005466030 python3.9[90238]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  2 07:50:04 np0005466030 python3.9[90390]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:50:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:50:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:04.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:50:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:04.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:06 np0005466030 python3.9[90543]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:50:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:50:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:06.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:50:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:06.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:07 np0005466030 python3.9[90695]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:50:07 np0005466030 python3.9[90773]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:50:08 np0005466030 python3.9[90925]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:50:08 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Oct  2 07:50:08 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Oct  2 07:50:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:50:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:08.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:50:08 np0005466030 python3.9[91003]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:50:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:08.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:09 np0005466030 python3.9[91155]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:50:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:50:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:10.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:50:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:10 np0005466030 systemd[71803]: Created slice User Background Tasks Slice.
Oct  2 07:50:10 np0005466030 systemd[71803]: Starting Cleanup of User's Temporary Files and Directories...
Oct  2 07:50:10 np0005466030 systemd[71803]: Finished Cleanup of User's Temporary Files and Directories.
Oct  2 07:50:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:10.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:11 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct  2 07:50:11 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct  2 07:50:12 np0005466030 python3.9[91307]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:50:12 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Oct  2 07:50:12 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Oct  2 07:50:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:12.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:50:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:12.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:50:12 np0005466030 python3.9[91459]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  2 07:50:13 np0005466030 python3.9[91609]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:50:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:50:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:14.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:50:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:14.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:15 np0005466030 python3.9[91761]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:50:15 np0005466030 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  2 07:50:15 np0005466030 systemd[1]: tuned.service: Deactivated successfully.
Oct  2 07:50:15 np0005466030 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  2 07:50:15 np0005466030 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  2 07:50:15 np0005466030 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  2 07:50:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:16 np0005466030 python3.9[91922]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  2 07:50:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:50:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:16.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:50:16 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct  2 07:50:16 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct  2 07:50:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:16.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:18.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:18.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:19 np0005466030 python3.9[92074]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:50:19 np0005466030 python3.9[92228]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:50:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:20.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:20 np0005466030 systemd[1]: session-35.scope: Deactivated successfully.
Oct  2 07:50:20 np0005466030 systemd[1]: session-35.scope: Consumed 1min 3.652s CPU time.
Oct  2 07:50:20 np0005466030 systemd-logind[795]: Session 35 logged out. Waiting for processes to exit.
Oct  2 07:50:20 np0005466030 systemd-logind[795]: Removed session 35.
Oct  2 07:50:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:20.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:22 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct  2 07:50:22 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct  2 07:50:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:22.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:22.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:24 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.10 deep-scrub starts
Oct  2 07:50:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:24.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:24 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.10 deep-scrub ok
Oct  2 07:50:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:24.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:25 np0005466030 systemd-logind[795]: New session 36 of user zuul.
Oct  2 07:50:25 np0005466030 systemd[1]: Started Session 36 of User zuul.
Oct  2 07:50:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:26.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:26 np0005466030 python3.9[92540]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:50:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:26.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:27 np0005466030 python3.9[92696]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  2 07:50:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:50:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:50:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:50:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:50:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:50:28 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.12 deep-scrub starts
Oct  2 07:50:28 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.12 deep-scrub ok
Oct  2 07:50:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:50:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:28.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:50:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:28.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:28 np0005466030 python3.9[92849]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:50:29 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1b deep-scrub starts
Oct  2 07:50:29 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1b deep-scrub ok
Oct  2 07:50:29 np0005466030 python3.9[92933]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  2 07:50:30 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct  2 07:50:30 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct  2 07:50:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:30.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:30.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:31 np0005466030 python3.9[93086]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:50:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:32.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:50:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:32.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:50:33 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct  2 07:50:33 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct  2 07:50:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:34.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:34 np0005466030 python3.9[93289]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:50:34 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:50:34 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:50:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:34.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:35 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.1b deep-scrub starts
Oct  2 07:50:35 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.1b deep-scrub ok
Oct  2 07:50:35 np0005466030 python3.9[93442]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:50:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:36 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Oct  2 07:50:36 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Oct  2 07:50:36 np0005466030 python3.9[93594]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  2 07:50:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:36.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:36.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:37 np0005466030 python3.9[93744]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:50:38 np0005466030 python3.9[93902]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:50:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:38.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:50:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:38.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:50:39 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Oct  2 07:50:39 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Oct  2 07:50:40 np0005466030 python3.9[94055]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:50:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:40.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:40.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:41 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Oct  2 07:50:41 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Oct  2 07:50:41 np0005466030 python3.9[94342]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 07:50:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:42.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:42 np0005466030 python3.9[94492]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:50:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:42.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:43 np0005466030 python3.9[94646]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:50:44 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Oct  2 07:50:44 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Oct  2 07:50:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:44.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:44.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:45 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Oct  2 07:50:45 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Oct  2 07:50:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:45 np0005466030 python3.9[94799]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:50:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:46.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:46.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:47 np0005466030 python3.9[94952]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:50:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:50:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:48.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:50:48 np0005466030 python3.9[95106]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct  2 07:50:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:50:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:48.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:50:49 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct  2 07:50:49 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct  2 07:50:49 np0005466030 systemd[1]: session-36.scope: Deactivated successfully.
Oct  2 07:50:49 np0005466030 systemd[1]: session-36.scope: Consumed 17.674s CPU time.
Oct  2 07:50:49 np0005466030 systemd-logind[795]: Session 36 logged out. Waiting for processes to exit.
Oct  2 07:50:49 np0005466030 systemd-logind[795]: Removed session 36.
Oct  2 07:50:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:50.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:50.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:52.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:52.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:54 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.f scrub starts
Oct  2 07:50:54 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.f scrub ok
Oct  2 07:50:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:50:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:54.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:50:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:50:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:54.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:50:55 np0005466030 systemd-logind[795]: New session 37 of user zuul.
Oct  2 07:50:55 np0005466030 systemd[1]: Started Session 37 of User zuul.
Oct  2 07:50:55 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.4 deep-scrub starts
Oct  2 07:50:55 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.4 deep-scrub ok
Oct  2 07:50:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:50:56 np0005466030 python3.9[95285]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:50:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:50:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:56.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:50:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:56.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:57 np0005466030 python3.9[95439]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:50:58 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct  2 07:50:58 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct  2 07:50:58 np0005466030 python3.9[95632]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:50:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:50:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:58.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:50:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:50:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:50:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:58.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:50:58 np0005466030 systemd[1]: session-37.scope: Deactivated successfully.
Oct  2 07:50:58 np0005466030 systemd[1]: session-37.scope: Consumed 2.164s CPU time.
Oct  2 07:50:58 np0005466030 systemd-logind[795]: Session 37 logged out. Waiting for processes to exit.
Oct  2 07:50:58 np0005466030 systemd-logind[795]: Removed session 37.
Oct  2 07:51:00 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Oct  2 07:51:00 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Oct  2 07:51:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:00.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:00.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:02 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct  2 07:51:02 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct  2 07:51:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:51:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:02.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:51:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:51:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:02.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:51:04 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Oct  2 07:51:04 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Oct  2 07:51:04 np0005466030 systemd-logind[795]: New session 38 of user zuul.
Oct  2 07:51:04 np0005466030 systemd[1]: Started Session 38 of User zuul.
Oct  2 07:51:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:04.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:04.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:05 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct  2 07:51:05 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct  2 07:51:05 np0005466030 python3.9[95812]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:51:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:06 np0005466030 python3.9[95966]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:51:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:06.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:06.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:07 np0005466030 python3.9[96122]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:51:08 np0005466030 python3.9[96206]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:51:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:08.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:08.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:10 np0005466030 python3.9[96359]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:51:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:51:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:10.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:51:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:10.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:11 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct  2 07:51:11 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct  2 07:51:11 np0005466030 python3.9[96554]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:12 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.a deep-scrub starts
Oct  2 07:51:12 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.a deep-scrub ok
Oct  2 07:51:12 np0005466030 python3.9[96706]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:51:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:12.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:12.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:13 np0005466030 python3.9[96870]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:51:13 np0005466030 python3.9[96948]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:51:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:14.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:51:14 np0005466030 python3.9[97100]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:51:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:14.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:15 np0005466030 python3.9[97178]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:51:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:15 np0005466030 python3.9[97330]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:51:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:16.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:16 np0005466030 python3.9[97482]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:51:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:16.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:17 np0005466030 python3.9[97634]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:51:17 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct  2 07:51:17 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct  2 07:51:17 np0005466030 python3.9[97786]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:51:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:18.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:18.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:18 np0005466030 python3.9[97938]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:51:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:20.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:20.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:21 np0005466030 python3.9[98091]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:51:22 np0005466030 python3.9[98245]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:51:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:51:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:22.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:51:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:22.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:22 np0005466030 python3.9[98397]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:51:23 np0005466030 python3.9[98549]: ansible-service_facts Invoked
Oct  2 07:51:23 np0005466030 network[98566]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:51:23 np0005466030 network[98567]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:51:23 np0005466030 network[98568]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:51:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:24.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:51:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:24.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:51:25 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.3 deep-scrub starts
Oct  2 07:51:25 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.3 deep-scrub ok
Oct  2 07:51:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:26.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:26.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:28.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:28.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:28 np0005466030 python3.9[99024]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:51:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:30.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:30.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:31 np0005466030 python3.9[99177]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  2 07:51:32 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.d scrub starts
Oct  2 07:51:32 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.d scrub ok
Oct  2 07:51:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:32.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:32.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:33 np0005466030 python3.9[99329]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:51:33 np0005466030 python3.9[99407]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:34 np0005466030 python3.9[99602]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:51:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:34.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:34 np0005466030 python3.9[99755]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:34.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:36 np0005466030 python3.9[99919]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:36 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct  2 07:51:36 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct  2 07:51:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:36.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:36.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:51:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:51:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:51:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:51:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:51:37 np0005466030 python3.9[100071]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:51:38 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Oct  2 07:51:38 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Oct  2 07:51:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:38.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:38.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:39 np0005466030 python3.9[100155]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:51:40 np0005466030 systemd[1]: session-38.scope: Deactivated successfully.
Oct  2 07:51:40 np0005466030 systemd[1]: session-38.scope: Consumed 22.503s CPU time.
Oct  2 07:51:40 np0005466030 systemd-logind[795]: Session 38 logged out. Waiting for processes to exit.
Oct  2 07:51:40 np0005466030 systemd-logind[795]: Removed session 38.
Oct  2 07:51:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:40.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:40.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:41 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.e scrub starts
Oct  2 07:51:41 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.e scrub ok
Oct  2 07:51:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:42.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:51:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:42.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:51:43 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct  2 07:51:43 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct  2 07:51:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:44.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:51:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:51:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:44.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:45 np0005466030 systemd-logind[795]: New session 39 of user zuul.
Oct  2 07:51:45 np0005466030 systemd[1]: Started Session 39 of User zuul.
Oct  2 07:51:45 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.a scrub starts
Oct  2 07:51:45 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.a scrub ok
Oct  2 07:51:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:46 np0005466030 python3.9[100388]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:46 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.f scrub starts
Oct  2 07:51:46 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.f scrub ok
Oct  2 07:51:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:46.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:51:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:46.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:51:47 np0005466030 python3.9[100540]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:51:47 np0005466030 python3.9[100618]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:48 np0005466030 systemd[1]: session-39.scope: Deactivated successfully.
Oct  2 07:51:48 np0005466030 systemd[1]: session-39.scope: Consumed 1.447s CPU time.
Oct  2 07:51:48 np0005466030 systemd-logind[795]: Session 39 logged out. Waiting for processes to exit.
Oct  2 07:51:48 np0005466030 systemd-logind[795]: Removed session 39.
Oct  2 07:51:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:48.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:51:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:48.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:51:49 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct  2 07:51:49 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct  2 07:51:50 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.d scrub starts
Oct  2 07:51:50 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.d scrub ok
Oct  2 07:51:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:51:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:50.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:51:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:51:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:50.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:51:51 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct  2 07:51:51 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct  2 07:51:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:51:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:52.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:51:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:51:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:52.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:51:53 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.11 deep-scrub starts
Oct  2 07:51:53 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.11 deep-scrub ok
Oct  2 07:51:53 np0005466030 systemd-logind[795]: New session 40 of user zuul.
Oct  2 07:51:53 np0005466030 systemd[1]: Started Session 40 of User zuul.
Oct  2 07:51:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:54.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:54 np0005466030 python3.9[100796]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:51:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:51:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:54.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:51:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:51:55 np0005466030 python3.9[100952]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:56 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct  2 07:51:56 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct  2 07:51:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:56.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:56 np0005466030 python3.9[101127]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:51:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:56.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:57 np0005466030 python3.9[101205]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.j843oalw recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:57 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct  2 07:51:57 np0005466030 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct  2 07:51:58 np0005466030 python3.9[101357]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:51:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:51:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:58.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:51:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:51:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:51:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:58.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:51:58 np0005466030 python3.9[101435]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.yqpyhefk recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:51:59 np0005466030 python3.9[101587]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:00 np0005466030 python3.9[101739]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:00.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:00 np0005466030 python3.9[101817]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:00.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:01 np0005466030 python3.9[101969]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:01 np0005466030 python3.9[102047]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:52:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:02.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:02 np0005466030 python3.9[102199]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:52:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:02.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:52:03 np0005466030 python3.9[102351]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:03 np0005466030 python3.9[102429]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:52:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:04.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:52:04 np0005466030 python3.9[102581]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:52:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:04.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:52:05 np0005466030 python3.9[102659]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:06 np0005466030 python3.9[102811]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:52:06 np0005466030 systemd[1]: Reloading.
Oct  2 07:52:06 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:52:06 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:52:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 07:52:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:06.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 07:52:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:06.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:07 np0005466030 python3.9[103001]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:07 np0005466030 python3.9[103079]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:08 np0005466030 python3.9[103231]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:08.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:08 np0005466030 python3.9[103309]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:08.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:09 np0005466030 python3.9[103461]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:52:09 np0005466030 systemd[1]: Reloading.
Oct  2 07:52:09 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:52:09 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:52:10 np0005466030 systemd[1]: Starting Create netns directory...
Oct  2 07:52:10 np0005466030 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 07:52:10 np0005466030 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 07:52:10 np0005466030 systemd[1]: Finished Create netns directory.
Oct  2 07:52:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:52:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:10.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:52:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:52:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:10.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:52:11 np0005466030 python3.9[103652]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:52:11 np0005466030 network[103669]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:52:11 np0005466030 network[103670]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:52:11 np0005466030 network[103671]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:52:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:12.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:12.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:14.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:14.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:16.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:16.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:18 np0005466030 python3.9[103936]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:18.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:18 np0005466030 python3.9[104014]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:18.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:19 np0005466030 python3.9[104166]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:20 np0005466030 python3.9[104318]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:52:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:20.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:52:20 np0005466030 python3.9[104396]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:20.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:21 np0005466030 python3.9[104548]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  2 07:52:21 np0005466030 systemd[1]: Starting Time & Date Service...
Oct  2 07:52:22 np0005466030 systemd[1]: Started Time & Date Service.
Oct  2 07:52:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:22.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:22 np0005466030 python3.9[104704]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:52:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:22.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:52:23 np0005466030 python3.9[104856]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:24 np0005466030 python3.9[104934]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.094693) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944094760, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2588, "num_deletes": 251, "total_data_size": 5243595, "memory_usage": 5317840, "flush_reason": "Manual Compaction"}
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944140308, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3433301, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7198, "largest_seqno": 9781, "table_properties": {"data_size": 3423416, "index_size": 5803, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 25468, "raw_average_key_size": 21, "raw_value_size": 3401260, "raw_average_value_size": 2853, "num_data_blocks": 259, "num_entries": 1192, "num_filter_entries": 1192, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405766, "oldest_key_time": 1759405766, "file_creation_time": 1759405944, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 45714 microseconds, and 7685 cpu microseconds.
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.140407) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3433301 bytes OK
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.140446) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.143621) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.143642) EVENT_LOG_v1 {"time_micros": 1759405944143636, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.143667) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 5231612, prev total WAL file size 5232248, number of live WAL files 2.
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.145200) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3352KB)], [15(7790KB)]
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944145269, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 11410662, "oldest_snapshot_seqno": -1}
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3819 keys, 9730545 bytes, temperature: kUnknown
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944235455, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 9730545, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9699212, "index_size": 20663, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 92031, "raw_average_key_size": 24, "raw_value_size": 9624526, "raw_average_value_size": 2520, "num_data_blocks": 903, "num_entries": 3819, "num_filter_entries": 3819, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759405944, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.235727) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9730545 bytes
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.241689) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.5 rd, 107.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.6 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 4340, records dropped: 521 output_compression: NoCompression
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.241728) EVENT_LOG_v1 {"time_micros": 1759405944241712, "job": 6, "event": "compaction_finished", "compaction_time_micros": 90231, "compaction_time_cpu_micros": 22485, "output_level": 6, "num_output_files": 1, "total_output_size": 9730545, "num_input_records": 4340, "num_output_records": 3819, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944242491, "job": 6, "event": "table_file_deletion", "file_number": 17}
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944243925, "job": 6, "event": "table_file_deletion", "file_number": 15}
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.145114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.244117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.244124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.244126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.244127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:52:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.244129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:52:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:52:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:24.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:52:24 np0005466030 python3.9[105086]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:24.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:25 np0005466030 python3.9[105164]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ignqgc_8 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:26 np0005466030 python3.9[105316]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:26 np0005466030 python3.9[105394]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 07:52:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:26.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 07:52:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:26.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:27 np0005466030 python3.9[105546]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:52:28 np0005466030 python3[105699]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  2 07:52:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:52:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:28.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:52:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:28.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:29 np0005466030 python3.9[105851]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:29 np0005466030 python3.9[105929]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:52:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:30.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:52:30 np0005466030 python3.9[106081]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:30.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:31 np0005466030 python3.9[106159]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:32 np0005466030 python3.9[106311]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:32 np0005466030 python3.9[106389]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:52:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:32.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:52:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:32.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:33 np0005466030 python3.9[106541]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:33 np0005466030 python3.9[106619]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:34.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:34 np0005466030 python3.9[106771]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:34.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:35 np0005466030 python3.9[106849]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:35 np0005466030 python3.9[107001]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:52:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:36.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:36 np0005466030 python3.9[107156]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:52:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:36.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:52:37 np0005466030 python3.9[107308]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:38 np0005466030 python3.9[107460]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:38.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:38.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:39 np0005466030 python3.9[107612]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  2 07:52:39 np0005466030 python3.9[107764]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  2 07:52:40 np0005466030 systemd[1]: session-40.scope: Deactivated successfully.
Oct  2 07:52:40 np0005466030 systemd[1]: session-40.scope: Consumed 27.867s CPU time.
Oct  2 07:52:40 np0005466030 systemd-logind[795]: Session 40 logged out. Waiting for processes to exit.
Oct  2 07:52:40 np0005466030 systemd-logind[795]: Removed session 40.
Oct  2 07:52:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:40.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:40.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:42.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:42.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:44.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:44.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:45 np0005466030 systemd-logind[795]: New session 41 of user zuul.
Oct  2 07:52:45 np0005466030 systemd[1]: Started Session 41 of User zuul.
Oct  2 07:52:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:46.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:46 np0005466030 python3.9[108075]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  2 07:52:46 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:52:46 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:52:46 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:52:46 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:52:46 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:52:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:46.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:47 np0005466030 python3.9[108227]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:52:48 np0005466030 python3.9[108381]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct  2 07:52:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:52:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:48.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:52:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:48.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:49 np0005466030 python3.9[108533]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.7z2ld17y follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:52:49 np0005466030 python3.9[108658]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.7z2ld17y mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405968.7988791-108-28436385808654/.source.7z2ld17y _original_basename=.04ehrds_ follow=False checksum=d30a0a751a32c4c6fb89a77f8bd3d66e091396ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:50.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 07:52:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:51.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 07:52:51 np0005466030 python3.9[108810]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:52:52 np0005466030 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  2 07:52:52 np0005466030 python3.9[108962]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfikJfuUE7Xs2lF9Qh9l0WUdl+Tct7ff0gJQZVpPwLHlAwFnY1lIlqF2IQ3J7LtFcsjYF5RcofKcj+ARkMTobXFoygI/H3Yl5EGDehZbaNONLkDXT20bcYtosTZBjJTZWMJaDGUobRPnKWEbt7P8G/CVwj+LKBYxYcl65Bs0m8Ii2JZObV/41E/44oNBbTT6VnLqrH1BjRfNgToFyoYZToIU6gJw+lDGgt/afrHnDeR8fo6fgHkoHZKHxctrFraqhPOEX+SW/RD5ra4/WxZTBDAcOelVyZhpZ0V6HTQuS0IuD/sy9RD9W59TrF0oFH8kP6H1F3EbhrMfM/wkGJqxcBEMPIlGjUgoOCOY4tgCsAuyKcqelTUJIoL5uTuk06fd+1+B0t8j//vY7eWDCGwHAYrOCbL954GsjqhEOd/SL8vW6cT4Eh+DaWzKpvnl+bEN+G7wkI9etJ4B8NugtDyE25Ikfn9nsBLIcPcuepnlcBQkTN4sC+w0I1AEm3Uo8MFOM=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPxo/cGygmGP55Hjd3RI5yFpLqrtrtdd2PGw/FbMnxJJ#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLbUwjRfNWPOWmPM9kXykw3bNz7sYSt7DYbalJhzh+E3yGMACUO+HxFuSQ4lHBBXquZltdOcmR202cRP+4s05oI=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2+zJSXp4XBwGccVvswqz0/27MxV0mWhHJ9EKngmPOQ2Et2f+QArNFJsEaUEJankaYSrISVt8m0QscyZhZUgrxp07g0OV9pVQ2pkqF/CSC7RnN96odOHOeQjRmSOj9vF8Q3EeyRZ7MS1CWH6TT+jYOD77TFol6cQhi7o5bzgAdL6yB/ili/PG3bBxtbYtNwSqCSpiGaN8z8j/REszkW2GM6wvDGXk9NgNfBZT4goP4O3qz/wVeMM/OQFGQa/34tMNX3QEE/XOdAUIRXXLw0vmVj7oRDzGVMc12TDalGOqphS+LkUS4PB+ns/IaplTUzc8zlwhycQQPxnzEcm+z3QP8Bo+iBGw+aKpc5UTMMtZocXrjHCv0Q6irXug6N6b7aaANiHMmveZua/Gjp6Ef//Q/+thKtkvcvvhUDZknHLDrHGT5QbVQYjN23MyFdWCu6MgpBw8NNyeI5sO605lOrxk2oXwX19ah7Qt7iAU7KRijLzQBjnMjNb6bcSOCFXVzpl0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOxmfzZIbNhcux/tJpdvzaDW/iX/PRMqNcEGpeyKOTEV#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBANBfiBul8lZFa5T9kjEYk719DZo4CtW2bTDn+SPcbu/2U71Ms3Qc1tvqiM9B/ciT9t/uzxk25klpGuFqieJFkk=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDb8D90laelhslbtmfz72Mp6Q7iCMu+KiPRuBFH59nBtb1LmjrIFjvU1qZnJ+wipHW+bRcdDzNWNM8KJ4IImBqFxbrg17RhHeunE84nnR8leX3OYiMZumpygvXYCykppXcKbe6pfxYUtyTc8Tz3bNoayi7uGoKgN/iaUeADLuyJUDDVyusj2q7uIj7gZ6PbtorR5cUUn0wBZTo3Jx84NmdiJr/xDGrtfawsV6ATz+Rpx3vzz4EE4dq4wN3eTUJiPCpc4jbTvHpp0GdJTK1BkZ4IANgw3a+loOO2MHq2JgMRjKJrH7sqrw7s9XgzHSh/ufOmEKAtgw75tWExEcy/05QGGbR2jnIKde4vVIS5JheT1z4gYASjKEEidjisDxig5nigPddxe3nSxKRQczKXPV+KUOB14AljRbnyqgbw4Dv9wtnkFL/QLMXFA0/NaOAZxhI+fOoAcg+No2ZsB95IgQ49ay/LN011x9o1vfwVPfReOtkjpVxQB8oCXhA53BfrG3M=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAtzqd+HKKUdtdjsFK/O61rbaIfH2/ANnbsFBvd1WLXA#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOyw0g2rIQxTWmEkqBGUUvYwuDopCg/ppyBGUh5LatbQKlwO7AkEzPUhEeFZv2/qzobLbOH4kVCTAQVjiQm//WM=#012 create=True mode=0644 path=/tmp/ansible.7z2ld17y state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:52:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:52.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:52:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:53.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:53 np0005466030 python3.9[109116]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.7z2ld17y' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:52:54 np0005466030 python3.9[109270]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.7z2ld17y state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:52:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:54.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:54 np0005466030 systemd[1]: session-41.scope: Deactivated successfully.
Oct  2 07:52:54 np0005466030 systemd[1]: session-41.scope: Consumed 4.853s CPU time.
Oct  2 07:52:54 np0005466030 systemd-logind[795]: Session 41 logged out. Waiting for processes to exit.
Oct  2 07:52:54 np0005466030 systemd-logind[795]: Removed session 41.
Oct  2 07:52:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:55.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:52:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:52:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:52:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:56.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:57.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:58.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:52:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:52:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:52:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:59.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:00 np0005466030 systemd-logind[795]: New session 42 of user zuul.
Oct  2 07:53:00 np0005466030 systemd[1]: Started Session 42 of User zuul.
Oct  2 07:53:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:00.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:01.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:01 np0005466030 python3.9[109498]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:53:02 np0005466030 python3.9[109654]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  2 07:53:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:02.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:03.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:03 np0005466030 python3.9[109808]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 07:53:04 np0005466030 python3.9[109961]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:53:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:04.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:05.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:05 np0005466030 python3.9[110114]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:53:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:06 np0005466030 python3.9[110266]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:06 np0005466030 systemd[1]: session-42.scope: Deactivated successfully.
Oct  2 07:53:06 np0005466030 systemd[1]: session-42.scope: Consumed 3.676s CPU time.
Oct  2 07:53:06 np0005466030 systemd-logind[795]: Session 42 logged out. Waiting for processes to exit.
Oct  2 07:53:06 np0005466030 systemd-logind[795]: Removed session 42.
Oct  2 07:53:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:06.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:07.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:08.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:09.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:10.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:11.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:12 np0005466030 systemd-logind[795]: New session 43 of user zuul.
Oct  2 07:53:12 np0005466030 systemd[1]: Started Session 43 of User zuul.
Oct  2 07:53:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:53:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:12.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:53:13 np0005466030 python3.9[110444]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:53:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:13.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:14 np0005466030 python3.9[110600]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:53:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:14.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:14 np0005466030 python3.9[110684]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  2 07:53:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:15.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:16.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:53:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:17.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:53:17 np0005466030 python3.9[110835]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:53:18 np0005466030 python3.9[110986]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 07:53:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:53:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:18.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:53:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:53:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:19.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:53:19 np0005466030 python3.9[111136]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:53:19 np0005466030 python3.9[111286]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:53:20 np0005466030 systemd-logind[795]: Session 43 logged out. Waiting for processes to exit.
Oct  2 07:53:20 np0005466030 systemd[1]: session-43.scope: Deactivated successfully.
Oct  2 07:53:20 np0005466030 systemd[1]: session-43.scope: Consumed 5.639s CPU time.
Oct  2 07:53:20 np0005466030 systemd-logind[795]: Removed session 43.
Oct  2 07:53:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:20.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:21.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:22.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:23.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.511505) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004511533, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 809, "num_deletes": 250, "total_data_size": 1616341, "memory_usage": 1639912, "flush_reason": "Manual Compaction"}
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004516817, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 701185, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9786, "largest_seqno": 10590, "table_properties": {"data_size": 697888, "index_size": 1141, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8288, "raw_average_key_size": 19, "raw_value_size": 691034, "raw_average_value_size": 1653, "num_data_blocks": 50, "num_entries": 418, "num_filter_entries": 418, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405944, "oldest_key_time": 1759405944, "file_creation_time": 1759406004, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 5345 microseconds, and 2554 cpu microseconds.
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.516850) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 701185 bytes OK
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.516867) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.518173) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.518190) EVENT_LOG_v1 {"time_micros": 1759406004518185, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.518211) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1612132, prev total WAL file size 1612132, number of live WAL files 2.
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.519050) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(684KB)], [18(9502KB)]
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004519121, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 10431730, "oldest_snapshot_seqno": -1}
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 3746 keys, 7707753 bytes, temperature: kUnknown
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004549414, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 7707753, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7679948, "index_size": 17327, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 90955, "raw_average_key_size": 24, "raw_value_size": 7609481, "raw_average_value_size": 2031, "num_data_blocks": 757, "num_entries": 3746, "num_filter_entries": 3746, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759406004, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.549650) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7707753 bytes
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.550675) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 343.6 rd, 253.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.3 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(25.9) write-amplify(11.0) OK, records in: 4237, records dropped: 491 output_compression: NoCompression
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.550696) EVENT_LOG_v1 {"time_micros": 1759406004550685, "job": 8, "event": "compaction_finished", "compaction_time_micros": 30360, "compaction_time_cpu_micros": 17985, "output_level": 6, "num_output_files": 1, "total_output_size": 7707753, "num_input_records": 4237, "num_output_records": 3746, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004550908, "job": 8, "event": "table_file_deletion", "file_number": 20}
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004552251, "job": 8, "event": "table_file_deletion", "file_number": 18}
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.518937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.552319) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.552326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.552327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.552328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:53:24 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.552330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:53:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:24.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:25.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:25 np0005466030 systemd-logind[795]: New session 44 of user zuul.
Oct  2 07:53:25 np0005466030 systemd[1]: Started Session 44 of User zuul.
Oct  2 07:53:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:26 np0005466030 python3.9[111464]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:53:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:26.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:27.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:27 np0005466030 python3.9[111620]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:28 np0005466030 python3.9[111772]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:28.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:29.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:29 np0005466030 python3.9[111924]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:30 np0005466030 python3.9[112047]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406008.5954194-158-106706083065266/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=5701859e6f99bebb728ba839c69b6b2a9ec878f4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:30.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:30 np0005466030 python3.9[112199]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:31.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:31 np0005466030 python3.9[112322]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406010.3883438-158-133724666753749/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=3ecf5e4f77066a77590b5118d192b2b931dec8bf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:31 np0005466030 python3.9[112474]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:32 np0005466030 python3.9[112597]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406011.485713-158-242066637855815/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=4aa0139080ac0ab9e64ae577109c46bec3764980 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:32.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:33.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:33 np0005466030 python3.9[112749]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:33 np0005466030 python3.9[112901]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:34 np0005466030 python3.9[113053]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:34.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:34 np0005466030 python3.9[113176]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406013.9352722-335-249154459684089/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=66f0ccf261dd8839c4c8f774a0a21a880477d530 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:53:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:35.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:53:35 np0005466030 python3.9[113328]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:36 np0005466030 python3.9[113451]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406015.1311266-335-101122478434008/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=900083829e6d3cf8d122351d6d42abd08dd175ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:36.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:36 np0005466030 python3.9[113603]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:37.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:37 np0005466030 python3.9[113726]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406016.3632164-335-244037381185802/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=0d941fc9451c5d9a7a910488cd06b09db9bdf3b8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:38 np0005466030 python3.9[113878]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:38 np0005466030 python3.9[114030]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:38.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:53:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:39.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:53:39 np0005466030 python3.9[114182]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:39 np0005466030 python3.9[114305]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406018.7772686-511-189949252078132/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=d14b1899eaab79c298e4cf2f0edf593be3adc735 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:40 np0005466030 python3.9[114457]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:53:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:40.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:53:40 np0005466030 python3.9[114580]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406019.9655387-511-172897595749014/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=900083829e6d3cf8d122351d6d42abd08dd175ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:41.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:41 np0005466030 python3.9[114732]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:42 np0005466030 python3.9[114855]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406021.0884225-511-206665211907291/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=3ac8674b28901153dbb19b53e5670329e2a5dc76 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:42.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:43.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:43 np0005466030 python3.9[115007]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:43 np0005466030 python3.9[115159]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:44 np0005466030 python3.9[115282]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406023.3546689-708-31863176175071/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:53:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:44.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:53:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:53:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:45.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:53:45 np0005466030 python3.9[115434]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:45 np0005466030 python3.9[115586]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:46 np0005466030 python3.9[115709]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406025.3305156-780-80847351519556/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:53:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:46.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:53:46 np0005466030 python3.9[115861]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:47.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:47 np0005466030 python3.9[116013]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:48 np0005466030 python3.9[116136]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406027.1147988-851-91023928664388/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:48.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:48 np0005466030 python3.9[116288]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:49.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:49 np0005466030 python3.9[116440]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:49 np0005466030 python3.9[116563]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406028.9254313-921-86414246127607/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:50 np0005466030 python3.9[116715]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:50.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:53:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:51.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:53:51 np0005466030 python3.9[116867]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:51 np0005466030 python3.9[116990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406030.7528882-986-86803612188948/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:52 np0005466030 python3.9[117142]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:53:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:52.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:53 np0005466030 python3.9[117294]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:53:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:53:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:53.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:53:53 np0005466030 python3.9[117417]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406032.5420167-1054-179838425653125/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:53:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:54.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:55.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:55 np0005466030 systemd-logind[795]: Session 44 logged out. Waiting for processes to exit.
Oct  2 07:53:55 np0005466030 systemd[1]: session-44.scope: Deactivated successfully.
Oct  2 07:53:55 np0005466030 systemd[1]: session-44.scope: Consumed 22.236s CPU time.
Oct  2 07:53:55 np0005466030 systemd-logind[795]: Removed session 44.
Oct  2 07:53:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:53:56 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:53:56 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:53:56 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:53:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:56.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:57.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:58.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:53:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:53:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:53:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:59.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:00 np0005466030 systemd-logind[795]: New session 45 of user zuul.
Oct  2 07:54:00 np0005466030 systemd[1]: Started Session 45 of User zuul.
Oct  2 07:54:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:00.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:01.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:01 np0005466030 python3.9[117729]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:02 np0005466030 python3.9[117881]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:02 np0005466030 python3.9[118054]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406041.464536-68-212475890197705/.source.conf _original_basename=ceph.conf follow=False checksum=bc6368cedc2ad3c8a4bd89508113374e22439583 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:02.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:02 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:54:02 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:54:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:03.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:03 np0005466030 python3.9[118206]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:03 np0005466030 python3.9[118329]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406042.8659356-68-89280977594074/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=75f34a13e5eafe465b3328865c9fc53d2eab5578 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:04 np0005466030 systemd[1]: session-45.scope: Deactivated successfully.
Oct  2 07:54:04 np0005466030 systemd[1]: session-45.scope: Consumed 2.519s CPU time.
Oct  2 07:54:04 np0005466030 systemd-logind[795]: Session 45 logged out. Waiting for processes to exit.
Oct  2 07:54:04 np0005466030 systemd-logind[795]: Removed session 45.
Oct  2 07:54:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:04.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:05.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:06.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:54:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:07.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:54:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:08.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:09.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:09 np0005466030 systemd-logind[795]: New session 46 of user zuul.
Oct  2 07:54:09 np0005466030 systemd[1]: Started Session 46 of User zuul.
Oct  2 07:54:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:54:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:10.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:54:11 np0005466030 python3.9[118507]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:54:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:54:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:11.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:54:12 np0005466030 python3.9[118663]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:54:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:54:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:12.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:54:12 np0005466030 python3.9[118815]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:54:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:13.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:13 np0005466030 python3.9[118965]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:54:14 np0005466030 python3.9[119117]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  2 07:54:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:14.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:54:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:15.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:54:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:16 np0005466030 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct  2 07:54:16 np0005466030 python3.9[119273]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:54:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:16.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:17.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:17 np0005466030 python3.9[119357]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:54:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:18.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:19.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:19 np0005466030 python3.9[119510]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:54:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:20.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:20 np0005466030 python3[119665]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct  2 07:54:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:21.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:21 np0005466030 python3.9[119817]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:22 np0005466030 python3.9[119969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:54:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:22.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:54:23 np0005466030 python3.9[120047]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:23.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:23 np0005466030 python3.9[120199]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:24 np0005466030 python3.9[120277]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.4v8kc0tg recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:24.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:24 np0005466030 python3.9[120429]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:25.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:25 np0005466030 python3.9[120507]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:26 np0005466030 python3.9[120659]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:54:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:26.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:26 np0005466030 python3[120812]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  2 07:54:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:54:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:27.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:54:27 np0005466030 python3.9[120964]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:28 np0005466030 python3.9[121089]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406067.1561906-437-59711288727108/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:28.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:29 np0005466030 python3.9[121241]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:29.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:29 np0005466030 python3.9[121366]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406068.677015-482-188917764686674/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:30 np0005466030 python3.9[121518]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:30.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:30 np0005466030 python3.9[121643]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406069.8822687-527-49518981086114/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:31.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:31 np0005466030 python3.9[121795]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:32 np0005466030 python3.9[121920]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406071.1252136-572-86068142432920/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:32.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:32 np0005466030 python3.9[122072]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:33.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:33 np0005466030 python3.9[122197]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406072.3505304-617-260512025487499/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:34 np0005466030 python3.9[122349]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:34 np0005466030 python3.9[122501]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:54:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:34.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:35.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:35 np0005466030 python3.9[122656]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:36 np0005466030 python3.9[122808]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:54:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:36.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:37 np0005466030 python3.9[122961]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:54:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:37.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:37 np0005466030 python3.9[123115]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:54:38 np0005466030 python3.9[123270]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:54:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:38.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:54:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:54:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:39.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:54:39 np0005466030 python3.9[123420]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:54:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:40 np0005466030 python3.9[123573]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:54:40 np0005466030 ovs-vsctl[123574]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct  2 07:54:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:54:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:40.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:54:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:41.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:41 np0005466030 python3.9[123726]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:54:42 np0005466030 python3.9[123881]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:54:42 np0005466030 ovs-vsctl[123882]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct  2 07:54:42 np0005466030 python3.9[124032]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:54:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:42.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:43.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:43 np0005466030 python3.9[124186]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:54:44 np0005466030 python3.9[124338]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:44 np0005466030 python3.9[124416]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:54:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:44.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:54:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:45.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:54:45 np0005466030 python3.9[124568]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:45 np0005466030 python3.9[124646]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:54:46 np0005466030 python3.9[124798]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:54:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:46.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:54:47 np0005466030 python3.9[124950]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:47.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:47 np0005466030 python3.9[125028]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:48 np0005466030 python3.9[125180]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:48 np0005466030 python3.9[125258]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:48.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:54:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:49.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:54:49 np0005466030 python3.9[125410]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:54:49 np0005466030 systemd[1]: Reloading.
Oct  2 07:54:49 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:54:49 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:54:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:50 np0005466030 python3.9[125600]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:50.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:51 np0005466030 python3.9[125678]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:51.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:51 np0005466030 python3.9[125830]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:52 np0005466030 python3.9[125908]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:54:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:52.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:54:53 np0005466030 python3.9[126060]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:54:53 np0005466030 systemd[1]: Reloading.
Oct  2 07:54:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:53.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:53 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:54:53 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:54:53 np0005466030 systemd[1]: Starting Create netns directory...
Oct  2 07:54:53 np0005466030 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 07:54:53 np0005466030 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 07:54:53 np0005466030 systemd[1]: Finished Create netns directory.
Oct  2 07:54:54 np0005466030 python3.9[126254]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:54:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:54.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:55 np0005466030 python3.9[126406]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:55.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:54:55 np0005466030 python3.9[126529]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406094.6659286-1370-65124947093444/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:54:56 np0005466030 python3.9[126681]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:54:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:56.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:57.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:54:57 np0005466030 python3.9[126833]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:54:57 np0005466030 python3.9[126956]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406097.0353258-1445-243384232954415/.source.json _original_basename=.7772p28_ follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:58 np0005466030 python3.9[127108]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:54:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:54:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:58.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:54:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:54:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:54:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:59.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:00.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:01 np0005466030 python3.9[127535]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct  2 07:55:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:01.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:02 np0005466030 python3.9[127687]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 07:55:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:55:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:02.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:55:03 np0005466030 python3.9[127958]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  2 07:55:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:03.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:55:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:55:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:55:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:55:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:55:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:55:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:55:04 np0005466030 python3[128266]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 07:55:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:04.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:05.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:06.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:07.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:55:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:08.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:55:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:09.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:10 np0005466030 podman[128280]: 2025-10-02 11:55:10.031595684 +0000 UTC m=+5.195113115 image pull ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct  2 07:55:10 np0005466030 podman[128394]: 2025-10-02 11:55:10.179064605 +0000 UTC m=+0.051927395 container create 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 07:55:10 np0005466030 podman[128394]: 2025-10-02 11:55:10.152669549 +0000 UTC m=+0.025532359 image pull ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct  2 07:55:10 np0005466030 python3[128266]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct  2 07:55:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:10.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:10 np0005466030 python3.9[128584]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:55:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:11.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:11 np0005466030 python3.9[128738]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:55:12 np0005466030 python3.9[128814]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:55:12 np0005466030 python3.9[128965]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406112.2115078-1709-60905132632103/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:55:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:12.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:13.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:13 np0005466030 python3.9[129041]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 07:55:13 np0005466030 systemd[1]: Reloading.
Oct  2 07:55:13 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:55:13 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:55:14 np0005466030 python3.9[129201]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:55:14 np0005466030 systemd[1]: Reloading.
Oct  2 07:55:14 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:55:14 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:55:14 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:55:14 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:55:14 np0005466030 systemd[1]: Starting ovn_controller container...
Oct  2 07:55:14 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:55:14 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/120e368ddeee0800e02132928f2722fa4685cb9275ecf36047afb1aaf51f94a7/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct  2 07:55:14 np0005466030 systemd[1]: Started /usr/bin/podman healthcheck run 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409.
Oct  2 07:55:14 np0005466030 podman[129241]: 2025-10-02 11:55:14.619844076 +0000 UTC m=+0.109546956 container init 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 07:55:14 np0005466030 ovn_controller[129257]: + sudo -E kolla_set_configs
Oct  2 07:55:14 np0005466030 podman[129241]: 2025-10-02 11:55:14.63977929 +0000 UTC m=+0.129482170 container start 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 07:55:14 np0005466030 edpm-start-podman-container[129241]: ovn_controller
Oct  2 07:55:14 np0005466030 systemd[1]: Created slice User Slice of UID 0.
Oct  2 07:55:14 np0005466030 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  2 07:55:14 np0005466030 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  2 07:55:14 np0005466030 edpm-start-podman-container[129240]: Creating additional drop-in dependency for "ovn_controller" (0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409)
Oct  2 07:55:14 np0005466030 podman[129264]: 2025-10-02 11:55:14.718332916 +0000 UTC m=+0.067631036 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 07:55:14 np0005466030 systemd[1]: Starting User Manager for UID 0...
Oct  2 07:55:14 np0005466030 systemd[1]: 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409-12456c00632a49e6.service: Main process exited, code=exited, status=1/FAILURE
Oct  2 07:55:14 np0005466030 systemd[1]: 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409-12456c00632a49e6.service: Failed with result 'exit-code'.
Oct  2 07:55:14 np0005466030 systemd[1]: Reloading.
Oct  2 07:55:14 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:55:14 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:55:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:14.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:15 np0005466030 systemd[1]: Started ovn_controller container.
Oct  2 07:55:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:55:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:15.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:55:15 np0005466030 systemd[129300]: Queued start job for default target Main User Target.
Oct  2 07:55:15 np0005466030 systemd[129300]: Created slice User Application Slice.
Oct  2 07:55:15 np0005466030 systemd[129300]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  2 07:55:15 np0005466030 systemd[129300]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 07:55:15 np0005466030 systemd[129300]: Reached target Paths.
Oct  2 07:55:15 np0005466030 systemd[129300]: Reached target Timers.
Oct  2 07:55:15 np0005466030 systemd[129300]: Starting D-Bus User Message Bus Socket...
Oct  2 07:55:15 np0005466030 systemd[129300]: Starting Create User's Volatile Files and Directories...
Oct  2 07:55:15 np0005466030 systemd[129300]: Listening on D-Bus User Message Bus Socket.
Oct  2 07:55:15 np0005466030 systemd[129300]: Reached target Sockets.
Oct  2 07:55:15 np0005466030 systemd[129300]: Finished Create User's Volatile Files and Directories.
Oct  2 07:55:15 np0005466030 systemd[129300]: Reached target Basic System.
Oct  2 07:55:15 np0005466030 systemd[129300]: Reached target Main User Target.
Oct  2 07:55:15 np0005466030 systemd[129300]: Startup finished in 127ms.
Oct  2 07:55:15 np0005466030 systemd[1]: Started User Manager for UID 0.
Oct  2 07:55:15 np0005466030 systemd[1]: Started Session c1 of User root.
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: INFO:__main__:Validating config file
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: INFO:__main__:Writing out command to execute
Oct  2 07:55:15 np0005466030 systemd[1]: session-c1.scope: Deactivated successfully.
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: ++ cat /run_command
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: + ARGS=
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: + sudo kolla_copy_cacerts
Oct  2 07:55:15 np0005466030 systemd[1]: Started Session c2 of User root.
Oct  2 07:55:15 np0005466030 systemd[1]: session-c2.scope: Deactivated successfully.
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: + [[ ! -n '' ]]
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: + . kolla_extend_start
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: + umask 0022
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct  2 07:55:15 np0005466030 NetworkManager[44960]: <info>  [1759406115.4635] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct  2 07:55:15 np0005466030 NetworkManager[44960]: <info>  [1759406115.4642] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 07:55:15 np0005466030 NetworkManager[44960]: <info>  [1759406115.4651] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct  2 07:55:15 np0005466030 NetworkManager[44960]: <info>  [1759406115.4657] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct  2 07:55:15 np0005466030 NetworkManager[44960]: <info>  [1759406115.4663] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  2 07:55:15 np0005466030 kernel: br-int: entered promiscuous mode
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00022|main|INFO|OVS feature set changed, force recompute.
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  2 07:55:15 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:15Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  2 07:55:15 np0005466030 NetworkManager[44960]: <info>  [1759406115.4868] manager: (ovn-bfdd72-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct  2 07:55:15 np0005466030 systemd-udevd[129392]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:55:15 np0005466030 kernel: genev_sys_6081: entered promiscuous mode
Oct  2 07:55:15 np0005466030 systemd-udevd[129393]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 07:55:15 np0005466030 NetworkManager[44960]: <info>  [1759406115.5067] device (genev_sys_6081): carrier: link connected
Oct  2 07:55:15 np0005466030 NetworkManager[44960]: <info>  [1759406115.5070] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Oct  2 07:55:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:15 np0005466030 NetworkManager[44960]: <info>  [1759406115.7843] manager: (ovn-b95886-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Oct  2 07:55:16 np0005466030 NetworkManager[44960]: <info>  [1759406116.0905] manager: (ovn-17f118-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Oct  2 07:55:16 np0005466030 python3.9[129524]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:55:16 np0005466030 ovs-vsctl[129525]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct  2 07:55:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:55:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:16.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:55:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:17.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:17 np0005466030 python3.9[129677]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:55:17 np0005466030 ovs-vsctl[129679]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct  2 07:55:18 np0005466030 python3.9[129832]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:55:18 np0005466030 ovs-vsctl[129833]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct  2 07:55:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:18.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:18 np0005466030 systemd[1]: session-46.scope: Deactivated successfully.
Oct  2 07:55:18 np0005466030 systemd[1]: session-46.scope: Consumed 55.544s CPU time.
Oct  2 07:55:18 np0005466030 systemd-logind[795]: Session 46 logged out. Waiting for processes to exit.
Oct  2 07:55:18 np0005466030 systemd-logind[795]: Removed session 46.
Oct  2 07:55:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:19.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:20.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:55:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:21.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:55:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:22.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:55:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:23.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:55:24 np0005466030 systemd-logind[795]: New session 48 of user zuul.
Oct  2 07:55:24 np0005466030 systemd[1]: Started Session 48 of User zuul.
Oct  2 07:55:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:24.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:55:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:25.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:55:25 np0005466030 python3.9[130011]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:55:25 np0005466030 systemd[1]: Stopping User Manager for UID 0...
Oct  2 07:55:25 np0005466030 systemd[129300]: Activating special unit Exit the Session...
Oct  2 07:55:25 np0005466030 systemd[129300]: Stopped target Main User Target.
Oct  2 07:55:25 np0005466030 systemd[129300]: Stopped target Basic System.
Oct  2 07:55:25 np0005466030 systemd[129300]: Stopped target Paths.
Oct  2 07:55:25 np0005466030 systemd[129300]: Stopped target Sockets.
Oct  2 07:55:25 np0005466030 systemd[129300]: Stopped target Timers.
Oct  2 07:55:25 np0005466030 systemd[129300]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 07:55:25 np0005466030 systemd[129300]: Closed D-Bus User Message Bus Socket.
Oct  2 07:55:25 np0005466030 systemd[129300]: Stopped Create User's Volatile Files and Directories.
Oct  2 07:55:25 np0005466030 systemd[129300]: Removed slice User Application Slice.
Oct  2 07:55:25 np0005466030 systemd[129300]: Reached target Shutdown.
Oct  2 07:55:25 np0005466030 systemd[129300]: Finished Exit the Session.
Oct  2 07:55:25 np0005466030 systemd[129300]: Reached target Exit the Session.
Oct  2 07:55:25 np0005466030 systemd[1]: user@0.service: Deactivated successfully.
Oct  2 07:55:25 np0005466030 systemd[1]: Stopped User Manager for UID 0.
Oct  2 07:55:25 np0005466030 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  2 07:55:25 np0005466030 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  2 07:55:25 np0005466030 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  2 07:55:25 np0005466030 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  2 07:55:25 np0005466030 systemd[1]: Removed slice User Slice of UID 0.
Oct  2 07:55:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:26 np0005466030 python3.9[130170]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:26.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:27 np0005466030 python3.9[130322]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:27.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:27 np0005466030 python3.9[130474]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:28 np0005466030 python3.9[130626]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:55:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:28.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:55:29 np0005466030 python3.9[130778]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:29.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:29 np0005466030 python3.9[130928]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:55:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:30 np0005466030 python3.9[131080]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  2 07:55:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:30.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:31.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:32 np0005466030 python3.9[131230]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:32 np0005466030 python3.9[131351]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406131.5344315-224-246455043174162/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:32.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:33.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:34 np0005466030 python3.9[131502]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:34.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:35.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:35 np0005466030 python3.9[131623]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406134.3913963-269-191724804268438/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:36 np0005466030 python3.9[131775]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:55:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:55:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:36.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:55:37 np0005466030 python3.9[131859]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:55:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:37.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:38.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:39.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:39 np0005466030 python3.9[132012]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:55:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 07:55:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 5948 writes, 25K keys, 5948 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5948 writes, 931 syncs, 6.39 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5948 writes, 25K keys, 5948 commit groups, 1.0 writes per commit group, ingest: 19.07 MB, 0.03 MB/s
Interval WAL: 5948 writes, 931 syncs, 6.39 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Oct  2 07:55:40 np0005466030 python3.9[132166]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:40 np0005466030 python3.9[132287]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406139.9633777-380-5234580160264/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:40.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:41.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:41 np0005466030 python3.9[132437]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:41 np0005466030 python3.9[132558]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406141.0053596-380-90406480983723/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:42.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:43.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:43 np0005466030 python3.9[132708]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:44 np0005466030 python3.9[132829]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406143.3533409-512-188363010067761/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:44 np0005466030 python3.9[132979]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:44.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:45 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:45Z|00025|memory|INFO|16384 kB peak resident set size after 29.8 seconds
Oct  2 07:55:45 np0005466030 ovn_controller[129257]: 2025-10-02T11:55:45Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Oct  2 07:55:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:55:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:45.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:55:45 np0005466030 podman[133074]: 2025-10-02 11:55:45.293129494 +0000 UTC m=+0.122303215 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 07:55:45 np0005466030 python3.9[133115]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406144.4383025-512-68891070573365/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:46 np0005466030 python3.9[133278]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:55:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:55:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:46.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:55:46 np0005466030 python3.9[133432]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:47.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:47 np0005466030 python3.9[133584]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:48 np0005466030 python3.9[133662]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:48 np0005466030 python3.9[133814]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:48.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:49 np0005466030 python3.9[133892]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:49.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:49 np0005466030 python3.9[134044]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:55:50 np0005466030 python3.9[134196]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:50.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:51 np0005466030 python3.9[134274]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:55:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:51.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:51 np0005466030 python3.9[134426]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:52 np0005466030 python3.9[134504]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:55:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:55:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:52.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:55:52 np0005466030 python3.9[134656]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:55:52 np0005466030 systemd[1]: Reloading.
Oct  2 07:55:53 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:55:53 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:55:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:53.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:53 np0005466030 python3.9[134845]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:54 np0005466030 python3.9[134923]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:55:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:54.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:55 np0005466030 python3.9[135075]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 07:55:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:55.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 07:55:55 np0005466030 python3.9[135153]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:55:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:55:56 np0005466030 python3.9[135305]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:55:56 np0005466030 systemd[1]: Reloading.
Oct  2 07:55:56 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:55:56 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:55:56 np0005466030 systemd[1]: Starting Create netns directory...
Oct  2 07:55:56 np0005466030 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 07:55:56 np0005466030 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 07:55:56 np0005466030 systemd[1]: Finished Create netns directory.
Oct  2 07:55:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:56.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:57.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:57 np0005466030 python3.9[135498]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:58 np0005466030 python3.9[135650]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:55:58 np0005466030 python3.9[135773]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406157.8083937-965-116368178836152/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:55:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:55:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:58.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:55:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:55:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:55:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:59.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:55:59 np0005466030 python3.9[135925]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:56:00 np0005466030 python3.9[136077]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:56:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:56:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:00.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:56:00 np0005466030 python3.9[136200]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406159.9773817-1040-87132063875980/.source.json _original_basename=.2tuepay3 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:56:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:01.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:56:01 np0005466030 python3.9[136352]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:56:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:02.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:56:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:03.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:04 np0005466030 python3.9[136779]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct  2 07:56:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:04.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:05 np0005466030 python3.9[136931]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 07:56:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:56:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:05.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:56:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:06 np0005466030 python3.9[137083]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  2 07:56:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:06.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:07.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:07 np0005466030 python3[137262]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 07:56:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:08.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:56:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:09.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:56:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 07:56:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2135 writes, 12K keys, 2135 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 2134 writes, 2134 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2135 writes, 12K keys, 2135 commit groups, 1.0 writes per commit group, ingest: 23.51 MB, 0.04 MB/s#012Interval WAL: 2134 writes, 2134 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    100.6      0.11              0.03         4    0.029       0      0       0.0       0.0#012  L6      1/0    7.35 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1    163.9    139.6      0.17              0.06         3    0.058     11K   1268       0.0       0.0#012 Sum      1/0    7.35 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     98.6    124.1      0.29              0.08         7    0.041     11K   1268       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     99.3    125.0      0.29              0.08         6    0.048     11K   1268       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    163.9    139.6      0.17              0.06         3    0.058     11K   1268       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    102.4      0.11              0.03         3    0.038       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.011, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 308.00 MB usage: 984.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(49,851.05 KB,0.269838%) FilterBlock(7,41.42 KB,0.0131335%) IndexBlock(7,91.70 KB,0.0290759%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 07:56:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:10.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:11.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 07:56:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:12.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 07:56:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:13.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:14.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:15.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.479861) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176479915, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1757, "num_deletes": 251, "total_data_size": 4440893, "memory_usage": 4482128, "flush_reason": "Manual Compaction"}
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176501268, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2913291, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10595, "largest_seqno": 12347, "table_properties": {"data_size": 2905944, "index_size": 4418, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14364, "raw_average_key_size": 19, "raw_value_size": 2891373, "raw_average_value_size": 3901, "num_data_blocks": 199, "num_entries": 741, "num_filter_entries": 741, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406006, "oldest_key_time": 1759406006, "file_creation_time": 1759406176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 21466 microseconds, and 16132 cpu microseconds.
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.501334) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2913291 bytes OK
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.501357) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.502725) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.502739) EVENT_LOG_v1 {"time_micros": 1759406176502734, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.502760) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4432939, prev total WAL file size 4432939, number of live WAL files 2.
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.503716) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2845KB)], [21(7527KB)]
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176503748, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 10621044, "oldest_snapshot_seqno": -1}
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 3970 keys, 8438001 bytes, temperature: kUnknown
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176729595, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 8438001, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8408564, "index_size": 18383, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 96251, "raw_average_key_size": 24, "raw_value_size": 8333934, "raw_average_value_size": 2099, "num_data_blocks": 794, "num_entries": 3970, "num_filter_entries": 3970, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759406176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.729835) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8438001 bytes
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.775788) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 47.0 rd, 37.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 7.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(6.5) write-amplify(2.9) OK, records in: 4487, records dropped: 517 output_compression: NoCompression
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.775824) EVENT_LOG_v1 {"time_micros": 1759406176775810, "job": 10, "event": "compaction_finished", "compaction_time_micros": 225919, "compaction_time_cpu_micros": 38747, "output_level": 6, "num_output_files": 1, "total_output_size": 8438001, "num_input_records": 4487, "num_output_records": 3970, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176776319, "job": 10, "event": "table_file_deletion", "file_number": 23}
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176777376, "job": 10, "event": "table_file_deletion", "file_number": 21}
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.503638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.777412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.777417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.777418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.777420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:56:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.777421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:56:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:16.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:17.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:17 np0005466030 podman[137273]: 2025-10-02 11:56:17.344729501 +0000 UTC m=+9.516128662 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 07:56:17 np0005466030 podman[137465]: 2025-10-02 11:56:17.377117118 +0000 UTC m=+1.763739289 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 07:56:17 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 07:56:17 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct  2 07:56:17 np0005466030 podman[137554]: 2025-10-02 11:56:17.490599853 +0000 UTC m=+0.045908833 container create 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 07:56:17 np0005466030 podman[137554]: 2025-10-02 11:56:17.466024481 +0000 UTC m=+0.021333481 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 07:56:17 np0005466030 python3[137262]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host 
--pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 07:56:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:56:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:56:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:56:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:18.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:19.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:56:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:56:19 np0005466030 python3.9[137744]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:56:20 np0005466030 python3.9[137898]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:20.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:21 np0005466030 python3.9[137974]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 07:56:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:21.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:21 np0005466030 python3.9[138125]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406181.1276753-1304-34792720281342/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:22 np0005466030 python3.9[138201]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 07:56:22 np0005466030 systemd[1]: Reloading.
Oct  2 07:56:22 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:56:22 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:56:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:56:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:22.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:56:23 np0005466030 python3.9[138312]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:56:23 np0005466030 systemd[1]: Reloading.
Oct  2 07:56:23 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:56:23 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:56:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:56:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:23.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:56:23 np0005466030 systemd[1]: Starting ovn_metadata_agent container...
Oct  2 07:56:23 np0005466030 systemd[1]: Started libcrun container.
Oct  2 07:56:23 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7524752fc5ca5448b79e867a921e886f022ba26ab05152b27e2c373874c4c6/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct  2 07:56:23 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7524752fc5ca5448b79e867a921e886f022ba26ab05152b27e2c373874c4c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 07:56:23 np0005466030 systemd[1]: Started /usr/bin/podman healthcheck run 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd.
Oct  2 07:56:23 np0005466030 podman[138354]: 2025-10-02 11:56:23.868207465 +0000 UTC m=+0.278720426 container init 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: + sudo -E kolla_set_configs
Oct  2 07:56:23 np0005466030 podman[138354]: 2025-10-02 11:56:23.893993186 +0000 UTC m=+0.304506137 container start 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 07:56:23 np0005466030 edpm-start-podman-container[138354]: ovn_metadata_agent
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: INFO:__main__:Validating config file
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: INFO:__main__:Copying service configuration files
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct  2 07:56:23 np0005466030 edpm-start-podman-container[138353]: Creating additional drop-in dependency for "ovn_metadata_agent" (0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd)
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: INFO:__main__:Writing out command to execute
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: INFO:__main__:Setting permission for /var/lib/neutron
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: ++ cat /run_command
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: + CMD=neutron-ovn-metadata-agent
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: + ARGS=
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: + sudo kolla_copy_cacerts
Oct  2 07:56:23 np0005466030 systemd[1]: Reloading.
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: + [[ ! -n '' ]]
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: + . kolla_extend_start
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: Running command: 'neutron-ovn-metadata-agent'
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: + umask 0022
Oct  2 07:56:23 np0005466030 ovn_metadata_agent[138369]: + exec neutron-ovn-metadata-agent
Oct  2 07:56:23 np0005466030 podman[138376]: 2025-10-02 11:56:23.990031753 +0000 UTC m=+0.085510117 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 07:56:24 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:56:24 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:56:24 np0005466030 systemd[1]: Started ovn_metadata_agent container.
Oct  2 07:56:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:56:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:24.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:56:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:25.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:25 np0005466030 systemd[1]: session-48.scope: Deactivated successfully.
Oct  2 07:56:25 np0005466030 systemd[1]: session-48.scope: Consumed 53.170s CPU time.
Oct  2 07:56:25 np0005466030 systemd-logind[795]: Session 48 logged out. Waiting for processes to exit.
Oct  2 07:56:25 np0005466030 systemd-logind[795]: Removed session 48.
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.854 138374 INFO neutron.common.config [-] Logging enabled!#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.854 138374 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.854 138374 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.855 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.855 138374 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.855 138374 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.855 138374 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.855 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.855 138374 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.856 138374 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.856 138374 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.856 138374 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.856 138374 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.856 138374 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.856 138374 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.856 138374 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.856 138374 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.857 138374 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.857 138374 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.857 138374 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.857 138374 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.857 138374 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.857 138374 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.857 138374 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.857 138374 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.858 138374 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.858 138374 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.858 138374 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.858 138374 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.858 138374 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.858 138374 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.858 138374 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.858 138374 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.858 138374 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.859 138374 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.859 138374 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.859 138374 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.859 138374 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.859 138374 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.859 138374 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.859 138374 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.859 138374 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.861 138374 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.861 138374 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.861 138374 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.861 138374 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.861 138374 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.861 138374 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.861 138374 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.861 138374 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.861 138374 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.862 138374 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.862 138374 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.862 138374 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.862 138374 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.862 138374 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.862 138374 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.862 138374 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.863 138374 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.863 138374 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.863 138374 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.863 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.863 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.863 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.863 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.864 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.864 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.864 138374 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.864 138374 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.864 138374 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.864 138374 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.864 138374 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.864 138374 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.864 138374 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.865 138374 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.865 138374 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.865 138374 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.865 138374 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.865 138374 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.865 138374 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.865 138374 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.865 138374 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.867 138374 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.867 138374 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.867 138374 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.867 138374 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.867 138374 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.867 138374 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.867 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.867 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.868 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.868 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.868 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.868 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.868 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.868 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.868 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.868 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.868 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.869 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.869 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.869 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.869 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.869 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.869 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.869 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.869 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.869 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.870 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.870 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.870 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.870 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.870 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.870 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.870 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.870 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.870 138374 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.871 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.871 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.871 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.871 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.871 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.871 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.871 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.871 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.872 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.872 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.872 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.872 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.872 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.872 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.873 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.873 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.873 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.873 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.873 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.873 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.873 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.874 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.874 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.874 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.874 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.874 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.874 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.874 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.875 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.875 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.875 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.875 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.875 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.875 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.875 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.876 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.876 138374 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.876 138374 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.876 138374 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.876 138374 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.876 138374 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.877 138374 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.877 138374 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.877 138374 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.877 138374 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.877 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.877 138374 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.877 138374 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.878 138374 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.878 138374 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.878 138374 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.878 138374 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.878 138374 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.878 138374 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.879 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.879 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.879 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.879 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.879 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.879 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.879 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.880 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.880 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.880 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.880 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.880 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.880 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.881 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.881 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.881 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.881 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.881 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.881 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.881 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.882 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.882 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.882 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.882 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.882 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.882 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.883 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.883 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.883 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.883 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.883 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.883 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.884 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.884 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.884 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.884 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.884 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.884 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.885 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.885 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.885 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.885 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.885 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.885 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.885 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.886 138374 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.886 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.886 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.886 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.886 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.886 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.887 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.887 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.887 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.887 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.887 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.887 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.888 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.888 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.888 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.888 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.888 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.888 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.889 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.889 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.889 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.889 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.889 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.889 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.890 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.890 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.890 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.890 138374 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.890 138374 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.890 138374 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.891 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.891 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.891 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.891 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.891 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.891 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.892 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.892 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.892 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.892 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.892 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.892 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.893 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.893 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.893 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.893 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.893 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.893 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.894 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.894 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.894 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.894 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.894 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.894 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.895 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.895 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.895 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.895 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.895 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.895 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.896 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.896 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.896 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.896 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.896 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.896 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.897 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.897 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.906 138374 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.906 138374 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.906 138374 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.907 138374 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.907 138374 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Oct  2 07:56:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.920 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name db222192-8da1-4f7c-972d-dc680c3e6630 (UUID: db222192-8da1-4f7c-972d-dc680c3e6630) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.941 138374 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.941 138374 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.942 138374 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.942 138374 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.944 138374 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.950 138374 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.955 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'db222192-8da1-4f7c-972d-dc680c3e6630'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], external_ids={}, name=db222192-8da1-4f7c-972d-dc680c3e6630, nb_cfg_timestamp=1759406123485, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.956 138374 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f23cc43bf40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.956 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.957 138374 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.957 138374 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.957 138374 INFO oslo_service.service [-] Starting 1 workers#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.961 138374 DEBUG oslo_service.service [-] Started child 138528 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.965 138374 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpm5m4yfxz/privsep.sock']#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.965 138528 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-167138'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.985 138528 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.985 138528 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.985 138528 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.988 138528 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct  2 07:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.994 138528 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct  2 07:56:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.999 138528 INFO eventlet.wsgi.server [-] (138528) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Oct  2 07:56:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:56:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:56:26 np0005466030 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct  2 07:56:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:26.668 138374 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  2 07:56:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:26.668 138374 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpm5m4yfxz/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  2 07:56:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:26.516 138533 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  2 07:56:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:26.521 138533 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  2 07:56:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:26.523 138533 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Oct  2 07:56:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:26.523 138533 INFO oslo.privsep.daemon [-] privsep daemon running as pid 138533#033[00m
Oct  2 07:56:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:26.672 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[72ef5f16-2609-49f3-a6a9-4020a4bcf2b4]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:56:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:56:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:26.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.216 138533 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.216 138533 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.216 138533 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:56:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:56:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:27.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.791 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f9d557-4034-416b-aaf4-c9eef53b5435]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.794 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, column=external_ids, values=({'neutron:ovn-metadata-id': 'fbafd4ac-feb6-5c3f-8139-621c52ba192d'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.801 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.810 138374 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.810 138374 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.810 138374 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.810 138374 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.810 138374 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.811 138374 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.811 138374 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.811 138374 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.811 138374 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.811 138374 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.812 138374 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.812 138374 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.812 138374 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.812 138374 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.812 138374 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.813 138374 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.813 138374 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.813 138374 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.813 138374 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.813 138374 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.813 138374 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.813 138374 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.813 138374 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.814 138374 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.814 138374 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.814 138374 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.814 138374 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.814 138374 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.815 138374 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.815 138374 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.815 138374 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.815 138374 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.815 138374 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.815 138374 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.815 138374 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.816 138374 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.816 138374 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.816 138374 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.816 138374 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.816 138374 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.816 138374 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.816 138374 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.817 138374 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.817 138374 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.817 138374 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.817 138374 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.817 138374 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.817 138374 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.817 138374 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.818 138374 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.818 138374 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.818 138374 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.818 138374 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.818 138374 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.818 138374 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.818 138374 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.819 138374 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.819 138374 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.819 138374 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.819 138374 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.819 138374 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.819 138374 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.820 138374 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.820 138374 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.820 138374 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.820 138374 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.820 138374 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.821 138374 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.821 138374 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.821 138374 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.821 138374 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.821 138374 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.821 138374 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.822 138374 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.822 138374 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.822 138374 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.822 138374 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.822 138374 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.822 138374 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.822 138374 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.822 138374 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.823 138374 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.823 138374 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.823 138374 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.823 138374 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.823 138374 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.823 138374 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.823 138374 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.824 138374 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.824 138374 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.824 138374 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.824 138374 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.824 138374 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.824 138374 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.824 138374 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.825 138374 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.825 138374 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.825 138374 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.825 138374 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.825 138374 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.825 138374 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.825 138374 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.825 138374 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.826 138374 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.826 138374 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.826 138374 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.826 138374 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.826 138374 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.826 138374 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.826 138374 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.828 138374 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.828 138374 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.828 138374 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.828 138374 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.828 138374 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.828 138374 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.828 138374 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.828 138374 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.829 138374 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.829 138374 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.829 138374 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.829 138374 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.829 138374 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.829 138374 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.829 138374 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.829 138374 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.829 138374 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.830 138374 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.830 138374 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.830 138374 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.830 138374 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.830 138374 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.830 138374 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.830 138374 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.830 138374 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.830 138374 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.833 138374 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.833 138374 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.833 138374 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.833 138374 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.833 138374 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.833 138374 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.833 138374 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.833 138374 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.834 138374 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.834 138374 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.834 138374 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.834 138374 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.834 138374 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.834 138374 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.834 138374 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.834 138374 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.835 138374 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.835 138374 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.835 138374 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.835 138374 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.835 138374 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.835 138374 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.835 138374 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.836 138374 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.836 138374 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.836 138374 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.836 138374 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.836 138374 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.836 138374 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.836 138374 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.836 138374 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.837 138374 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.837 138374 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.837 138374 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.837 138374 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.837 138374 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.837 138374 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.837 138374 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.838 138374 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.838 138374 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.838 138374 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.838 138374 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.838 138374 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.838 138374 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.838 138374 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.838 138374 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.838 138374 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.841 138374 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.841 138374 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.841 138374 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.841 138374 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.841 138374 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.841 138374 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.841 138374 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.841 138374 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.841 138374 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.844 138374 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.844 138374 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.844 138374 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.844 138374 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.844 138374 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.844 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.844 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.844 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.844 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.845 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.845 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.845 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.845 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.845 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.845 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.845 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.845 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.845 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.846 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.846 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.846 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.846 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.846 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.846 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.846 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.846 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.846 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.847 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.847 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.847 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.847 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.847 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.847 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.847 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.847 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.847 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.848 138374 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.848 138374 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.848 138374 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.848 138374 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 07:56:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.848 138374 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct  2 07:56:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:28.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:29.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:30 np0005466030 systemd-logind[795]: New session 49 of user zuul.
Oct  2 07:56:30 np0005466030 systemd[1]: Started Session 49 of User zuul.
Oct  2 07:56:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:30.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:31.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:31 np0005466030 python3.9[138692]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 07:56:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:32.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:33 np0005466030 python3.9[138848]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:56:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:56:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:33.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:56:34 np0005466030 python3.9[139013]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 07:56:34 np0005466030 systemd[1]: Reloading.
Oct  2 07:56:34 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:56:34 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:56:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:34.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:35.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:35 np0005466030 python3.9[139197]: ansible-ansible.builtin.service_facts Invoked
Oct  2 07:56:35 np0005466030 network[139214]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 07:56:35 np0005466030 network[139215]: 'network-scripts' will be removed from distribution in near future.
Oct  2 07:56:35 np0005466030 network[139216]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 07:56:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:36.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:37.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:56:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:38.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:56:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:39.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:40.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:41.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:42 np0005466030 python3.9[139481]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:56:42 np0005466030 python3.9[139634]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:56:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:42.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:43.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:43 np0005466030 python3.9[139787]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:56:44 np0005466030 python3.9[139940]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:56:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:56:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:45.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:56:45 np0005466030 python3.9[140093]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:56:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:45.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:46 np0005466030 python3.9[140246]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:56:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:47.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:47 np0005466030 python3.9[140399]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 07:56:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:47.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:47 np0005466030 podman[140425]: 2025-10-02 11:56:47.854146335 +0000 UTC m=+0.108445529 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 07:56:48 np0005466030 python3.9[140578]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:49.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:49 np0005466030 python3.9[140730]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:49.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:50 np0005466030 python3.9[140883]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:50 np0005466030 python3.9[141035]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:51.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:51.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:51 np0005466030 python3.9[141187]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:52 np0005466030 python3.9[141339]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:52 np0005466030 python3.9[141491]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:53.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:53.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:53 np0005466030 python3.9[141643]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:54 np0005466030 podman[141767]: 2025-10-02 11:56:54.31746278 +0000 UTC m=+0.050811097 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent)
Oct  2 07:56:54 np0005466030 python3.9[141814]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:55.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:55 np0005466030 python3.9[141966]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:56:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:55.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:56:55 np0005466030 python3.9[142118]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:56:56 np0005466030 python3.9[142270]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:57 np0005466030 python3.9[142422]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:57.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:57.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:57 np0005466030 python3.9[142574]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:56:58 np0005466030 python3.9[142726]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:56:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:59.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:56:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:56:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:59.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:56:59 np0005466030 python3.9[142878]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 07:57:00 np0005466030 python3.9[143030]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 07:57:00 np0005466030 systemd[1]: Reloading.
Oct  2 07:57:00 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:57:00 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:57:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:57:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:01.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:57:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:57:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:01.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:57:01 np0005466030 python3.9[143218]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:57:02 np0005466030 python3.9[143371]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:57:02 np0005466030 python3.9[143524]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:57:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:57:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:03.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:57:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:03.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:03 np0005466030 python3.9[143677]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:57:04 np0005466030 python3.9[143830]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:57:04 np0005466030 python3.9[143983]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:57:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:05.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:05 np0005466030 python3.9[144136]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 07:57:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:05.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:06 np0005466030 python3.9[144289]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct  2 07:57:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:07.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:07.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:07 np0005466030 python3.9[144442]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 07:57:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:09.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:09 np0005466030 python3.9[144600]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  2 07:57:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:09.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:10 np0005466030 python3.9[144760]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 07:57:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:57:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:11.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:57:11 np0005466030 python3.9[144844]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 07:57:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:11.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:57:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:13.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:57:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:57:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:13.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:57:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:15.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:15.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:17.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:57:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:17.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:57:18 np0005466030 podman[144855]: 2025-10-02 11:57:18.870171799 +0000 UTC m=+0.110850470 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 07:57:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:19.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:57:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:19.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:57:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:21.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:21.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 07:57:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:23.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 07:57:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:23.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:24 np0005466030 podman[145008]: 2025-10-02 11:57:24.795642176 +0000 UTC m=+0.053429312 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 07:57:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:25.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:25.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:57:25.899 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:57:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:57:25.899 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:57:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:57:25.899 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:57:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:57:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:27.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:57:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:27.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:27 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 07:57:27 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:57:27 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:57:27 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:57:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:57:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:29.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:57:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:57:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:29.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:57:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:31.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:31.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:33.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:33.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:35.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:35.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:57:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:57:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:37.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:37.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:39.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:57:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:39.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:57:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:41.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:41.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:43.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:43.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:57:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:45.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:57:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:45.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:46 np0005466030 kernel: SELinux:  Converting 2765 SID table entries...
Oct  2 07:57:46 np0005466030 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:57:46 np0005466030 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:57:46 np0005466030 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:57:46 np0005466030 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:57:46 np0005466030 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:57:46 np0005466030 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:57:46 np0005466030 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:57:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:47.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:57:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:47.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:57:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:49.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:49.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:49 np0005466030 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct  2 07:57:49 np0005466030 podman[145271]: 2025-10-02 11:57:49.851152369 +0000 UTC m=+0.103015263 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 07:57:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:51.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 07:57:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:51.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 07:57:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:53.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 07:57:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:53.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 07:57:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:55.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:55.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:55 np0005466030 podman[145297]: 2025-10-02 11:57:55.811182244 +0000 UTC m=+0.057134459 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 07:57:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:57:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:57.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:57.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:58 np0005466030 kernel: SELinux:  Converting 2765 SID table entries...
Oct  2 07:57:58 np0005466030 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:57:58 np0005466030 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:57:58 np0005466030 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:57:58 np0005466030 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:57:58 np0005466030 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:57:58 np0005466030 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:57:58 np0005466030 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:57:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:59.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:57:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:57:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:57:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:59.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:01.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 07:58:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:01.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 07:58:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:03.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:03.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:05.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:05.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:58:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:07.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:58:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:07.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:09.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:09.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:11.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:11.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:58:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:13.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:58:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:13.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:15.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:15.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:17.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:17.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 07:58:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:19.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 07:58:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:19.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:20 np0005466030 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct  2 07:58:20 np0005466030 podman[151942]: 2025-10-02 11:58:20.838407398 +0000 UTC m=+0.083478038 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 07:58:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:58:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:21.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:58:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:21.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:23.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:23.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:25.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:25.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:58:25.900 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:58:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:58:25.900 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:58:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:58:25.900 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:58:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:26 np0005466030 podman[156255]: 2025-10-02 11:58:26.786920381 +0000 UTC m=+0.043283348 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 07:58:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:27.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:27.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:58:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:29.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:58:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:29.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:58:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:31.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:58:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:31.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:58:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:33.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:58:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:58:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:33.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:58:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:58:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:35.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:58:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 07:58:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:35.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 07:58:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:58:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:58:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:58:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:37.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:37.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:39.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:39.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:41.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:41.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:58:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:58:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:43.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:43.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:45.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:45.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:47.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:58:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:47.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:58:47 np0005466030 kernel: SELinux:  Converting 2766 SID table entries...
Oct  2 07:58:47 np0005466030 kernel: SELinux:  policy capability network_peer_controls=1
Oct  2 07:58:47 np0005466030 kernel: SELinux:  policy capability open_perms=1
Oct  2 07:58:47 np0005466030 kernel: SELinux:  policy capability extended_socket_class=1
Oct  2 07:58:47 np0005466030 kernel: SELinux:  policy capability always_check_network=0
Oct  2 07:58:47 np0005466030 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  2 07:58:47 np0005466030 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  2 07:58:47 np0005466030 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  2 07:58:48 np0005466030 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct  2 07:58:48 np0005466030 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct  2 07:58:48 np0005466030 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct  2 07:58:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:49.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:49.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:51.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:51.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:51 np0005466030 podman[162368]: 2025-10-02 11:58:51.851063056 +0000 UTC m=+0.096297659 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 07:58:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:53.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:53.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:58:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:55.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:58:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:55.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:58:56 np0005466030 systemd[1]: Stopping OpenSSH server daemon...
Oct  2 07:58:56 np0005466030 systemd[1]: sshd.service: Deactivated successfully.
Oct  2 07:58:56 np0005466030 systemd[1]: Stopped OpenSSH server daemon.
Oct  2 07:58:56 np0005466030 systemd[1]: sshd.service: Consumed 1.855s CPU time, read 0B from disk, written 4.0K to disk.
Oct  2 07:58:56 np0005466030 systemd[1]: Stopped target sshd-keygen.target.
Oct  2 07:58:56 np0005466030 systemd[1]: Stopping sshd-keygen.target...
Oct  2 07:58:56 np0005466030 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 07:58:56 np0005466030 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 07:58:56 np0005466030 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  2 07:58:56 np0005466030 systemd[1]: Reached target sshd-keygen.target.
Oct  2 07:58:56 np0005466030 systemd[1]: Starting OpenSSH server daemon...
Oct  2 07:58:56 np0005466030 systemd[1]: Started OpenSSH server daemon.
Oct  2 07:58:56 np0005466030 podman[163287]: 2025-10-02 11:58:56.882166741 +0000 UTC m=+0.053474078 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 07:58:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:57.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:57.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:57 np0005466030 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 07:58:57 np0005466030 systemd[1]: Starting man-db-cache-update.service...
Oct  2 07:58:57 np0005466030 systemd[1]: Reloading.
Oct  2 07:58:57 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:58:57 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:58:58 np0005466030 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 07:58:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:59.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:58:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:58:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:58:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:59.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:01.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:01.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:02 np0005466030 systemd[1]: Starting PackageKit Daemon...
Oct  2 07:59:02 np0005466030 systemd[1]: Started PackageKit Daemon.
Oct  2 07:59:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:03.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:03.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:05.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:05 np0005466030 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 07:59:05 np0005466030 systemd[1]: Finished man-db-cache-update.service.
Oct  2 07:59:05 np0005466030 systemd[1]: man-db-cache-update.service: Consumed 9.937s CPU time.
Oct  2 07:59:05 np0005466030 systemd[1]: run-r7427ec1af2a84507a90f10b5b61268cc.service: Deactivated successfully.
Oct  2 07:59:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:05.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:07.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:07.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:08 np0005466030 python3.9[171847]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:59:08 np0005466030 systemd[1]: Reloading.
Oct  2 07:59:08 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:59:08 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:59:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:09.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:09 np0005466030 python3.9[172036]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:59:09 np0005466030 systemd[1]: Reloading.
Oct  2 07:59:09 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:59:09 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:59:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:09.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:10 np0005466030 python3.9[172226]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:59:10 np0005466030 systemd[1]: Reloading.
Oct  2 07:59:10 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:59:10 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:59:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:11.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:11.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:11 np0005466030 python3.9[172416]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:59:11 np0005466030 systemd[1]: Reloading.
Oct  2 07:59:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:11 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:59:11 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:59:13 np0005466030 python3.9[172605]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:13 np0005466030 systemd[1]: Reloading.
Oct  2 07:59:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:13.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:13 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:59:13 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:59:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:13.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:14 np0005466030 python3.9[172795]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:14 np0005466030 systemd[1]: Reloading.
Oct  2 07:59:14 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:59:14 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:59:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:15.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:15 np0005466030 python3.9[172985]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:15 np0005466030 systemd[1]: Reloading.
Oct  2 07:59:15 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:59:15 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:59:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:15.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:16 np0005466030 python3.9[173175]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:17.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:17 np0005466030 python3.9[173330]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:17 np0005466030 systemd[1]: Reloading.
Oct  2 07:59:17 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:59:17 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:59:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:17.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:18 np0005466030 python3.9[173519]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  2 07:59:18 np0005466030 systemd[1]: Reloading.
Oct  2 07:59:18 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 07:59:18 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 07:59:18 np0005466030 systemd[1]: Listening on libvirt proxy daemon socket.
Oct  2 07:59:19 np0005466030 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct  2 07:59:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:19.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:19.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:19 np0005466030 python3.9[173712]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:20 np0005466030 python3.9[173867]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:21.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:21 np0005466030 python3.9[174022]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:21.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:22 np0005466030 podman[174177]: 2025-10-02 11:59:22.005001264 +0000 UTC m=+0.082194909 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 07:59:22 np0005466030 python3.9[174178]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:23 np0005466030 python3.9[174360]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:23.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:23.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:23 np0005466030 python3.9[174515]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:24 np0005466030 python3.9[174670]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:25.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:25 np0005466030 python3.9[174825]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:25.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:59:25.900 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 07:59:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:59:25.901 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 07:59:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 11:59:25.901 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 07:59:26 np0005466030 python3.9[174980]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:26 np0005466030 python3.9[175135]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:27.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:27 np0005466030 podman[175262]: 2025-10-02 11:59:27.275982405 +0000 UTC m=+0.052861627 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 07:59:27 np0005466030 python3.9[175306]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:27.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:28 np0005466030 python3.9[175464]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:29 np0005466030 python3.9[175619]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:29.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:29.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:29 np0005466030 python3.9[175774]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  2 07:59:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:31.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:31.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:32 np0005466030 python3.9[175929]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:59:33 np0005466030 python3.9[176081]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:59:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:33.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:33.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:33 np0005466030 python3.9[176233]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:59:34 np0005466030 python3.9[176385]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:59:34 np0005466030 python3.9[176537]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:59:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:35.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:35 np0005466030 python3.9[176689]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 07:59:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:35.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:37.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:37 np0005466030 python3.9[176841]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:59:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:37.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:38 np0005466030 python3.9[176966]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406376.6771562-1628-261065329655687/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:38 np0005466030 python3.9[177118]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:59:39 np0005466030 python3.9[177243]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406378.1532433-1628-248017595930972/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:39.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:39.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:39 np0005466030 python3.9[177395]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:59:40 np0005466030 python3.9[177520]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406379.3667595-1628-58310110771014/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:41 np0005466030 python3.9[177672]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:59:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:41.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:41 np0005466030 python3.9[177797]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406380.552646-1628-59987462646004/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:41.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:42 np0005466030 python3.9[177949]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:59:42 np0005466030 python3.9[178074]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406381.7061007-1628-184955012675798/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:43.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:43 np0005466030 podman[178397]: 2025-10-02 11:59:43.434595797 +0000 UTC m=+0.146129188 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 07:59:43 np0005466030 python3.9[178406]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:59:43 np0005466030 podman[178397]: 2025-10-02 11:59:43.528704174 +0000 UTC m=+0.240237525 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  2 07:59:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:43.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:44 np0005466030 python3.9[178619]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406383.0405846-1628-266298787189070/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:44 np0005466030 python3.9[178916]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:59:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:59:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:59:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:59:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:59:45 np0005466030 python3.9[179056]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406384.266439-1628-269313726276815/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:45.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:45.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:45 np0005466030 python3.9[179208]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 07:59:46 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 07:59:46 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:59:46 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 07:59:46 np0005466030 python3.9[179333]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406385.3850121-1628-168034311419829/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:47.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:47.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:48 np0005466030 python3.9[179485]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct  2 07:59:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:49.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:49 np0005466030 python3.9[179638]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.642029) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389642059, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 2186, "num_deletes": 253, "total_data_size": 5565293, "memory_usage": 5643816, "flush_reason": "Manual Compaction"}
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389664126, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 3645687, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12352, "largest_seqno": 14533, "table_properties": {"data_size": 3636755, "index_size": 5618, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 16653, "raw_average_key_size": 18, "raw_value_size": 3619026, "raw_average_value_size": 4080, "num_data_blocks": 251, "num_entries": 887, "num_filter_entries": 887, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406177, "oldest_key_time": 1759406177, "file_creation_time": 1759406389, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 22207 microseconds, and 7449 cpu microseconds.
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.664232) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 3645687 bytes OK
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.664255) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.665975) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.665998) EVENT_LOG_v1 {"time_micros": 1759406389665991, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.666016) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 5555606, prev total WAL file size 5555606, number of live WAL files 2.
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.667492) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323533' seq:0, type:0; will stop at (end)
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(3560KB)], [24(8240KB)]
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389667543, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 12083688, "oldest_snapshot_seqno": -1}
Oct  2 07:59:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:49.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4333 keys, 11528601 bytes, temperature: kUnknown
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389733876, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 11528601, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11493930, "index_size": 22721, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 105264, "raw_average_key_size": 24, "raw_value_size": 11410070, "raw_average_value_size": 2633, "num_data_blocks": 969, "num_entries": 4333, "num_filter_entries": 4333, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759406389, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.734091) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11528601 bytes
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.735551) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.0 rd, 173.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.0 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(6.5) write-amplify(3.2) OK, records in: 4857, records dropped: 524 output_compression: NoCompression
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.735568) EVENT_LOG_v1 {"time_micros": 1759406389735559, "job": 12, "event": "compaction_finished", "compaction_time_micros": 66407, "compaction_time_cpu_micros": 23966, "output_level": 6, "num_output_files": 1, "total_output_size": 11528601, "num_input_records": 4857, "num_output_records": 4333, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389736237, "job": 12, "event": "table_file_deletion", "file_number": 26}
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389737526, "job": 12, "event": "table_file_deletion", "file_number": 24}
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.667447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.737631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.737635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.737637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.737639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.737640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 07:59:50 np0005466030 python3.9[179790]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:50 np0005466030 python3.9[179942]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:51.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:51 np0005466030 python3.9[180094]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:51.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:52 np0005466030 python3.9[180246]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:52 np0005466030 podman[180370]: 2025-10-02 11:59:52.61433803 +0000 UTC m=+0.093851909 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  2 07:59:52 np0005466030 python3.9[180411]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:53.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:53 np0005466030 python3.9[180624]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:59:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 07:59:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:53.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:53 np0005466030 python3.9[180776]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:54 np0005466030 python3.9[180928]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:55 np0005466030 python3.9[181080]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:55.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:55.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:55 np0005466030 python3.9[181232]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:56 np0005466030 python3.9[181384]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 07:59:56 np0005466030 python3.9[181536]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:57.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:57 np0005466030 podman[181660]: 2025-10-02 11:59:57.386798121 +0000 UTC m=+0.052699002 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 07:59:57 np0005466030 python3.9[181707]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 07:59:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:57.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 07:59:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 07:59:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:59.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 07:59:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 07:59:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 07:59:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:59.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:00 np0005466030 python3.9[181859]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:00 np0005466030 python3.9[181982]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406399.5549986-2291-102109785365175/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:00 np0005466030 ceph-mon[80926]: overall HEALTH_OK
Oct  2 08:00:01 np0005466030 python3.9[182134]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:01.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:01.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:01 np0005466030 python3.9[182257]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406400.737095-2291-174063598240378/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:02 np0005466030 python3.9[182409]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:02 np0005466030 python3.9[182532]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406401.8630579-2291-105962517836569/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:03.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:03 np0005466030 python3.9[182684]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 08:00:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:03.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 08:00:04 np0005466030 python3.9[182807]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406403.0453568-2291-105942500914016/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:04 np0005466030 python3.9[182959]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:05 np0005466030 python3.9[183082]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406404.2424357-2291-47537395873895/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:05.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:00:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:05.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:00:05 np0005466030 python3.9[183234]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:06 np0005466030 python3.9[183357]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406405.4242322-2291-226981980865608/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:07 np0005466030 python3.9[183509]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:07.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:07 np0005466030 python3.9[183632]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406406.6253943-2291-100206145998702/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:00:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:07.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:00:08 np0005466030 python3.9[183784]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:08 np0005466030 python3.9[183907]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406407.751486-2291-218070813104067/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:09.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:09 np0005466030 python3.9[184059]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:09.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:09 np0005466030 python3.9[184182]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406408.901911-2291-162670625109885/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:10 np0005466030 python3.9[184334]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:11 np0005466030 python3.9[184457]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406410.0764315-2291-62862961791825/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:00:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:11.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:00:11 np0005466030 python3.9[184609]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:11.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:12 np0005466030 python3.9[184732]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406411.1485138-2291-236579093308532/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:12 np0005466030 python3.9[184884]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:13.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:13 np0005466030 python3.9[185007]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406412.4347644-2291-220416300987845/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:00:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:13.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:00:13 np0005466030 python3.9[185159]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:14 np0005466030 python3.9[185282]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406413.5511267-2291-73885059226937/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:15 np0005466030 python3.9[185434]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:15.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:15 np0005466030 python3.9[185557]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406414.599189-2291-276667802500644/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:00:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:15.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:00:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:17.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:17.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:18 np0005466030 python3.9[185707]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:00:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:19.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:19.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:19 np0005466030 python3.9[185862]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct  2 08:00:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:00:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:21.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:00:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:00:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:21.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:00:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:22 np0005466030 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct  2 08:00:22 np0005466030 podman[185867]: 2025-10-02 12:00:22.845220656 +0000 UTC m=+0.084541519 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:00:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:23.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:23 np0005466030 python3.9[186045]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:00:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:23.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:00:24 np0005466030 python3.9[186197]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:24 np0005466030 python3.9[186349]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:25 np0005466030 python3.9[186501]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:00:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:25.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:00:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:25.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:25 np0005466030 python3.9[186653]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:00:25.901 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:00:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:00:25.901 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:00:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:00:25.901 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:00:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:26 np0005466030 python3.9[186805]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:27.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:27 np0005466030 python3.9[186957]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:27 np0005466030 podman[186958]: 2025-10-02 12:00:27.611950857 +0000 UTC m=+0.042118050 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  2 08:00:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:27.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:28 np0005466030 python3.9[187127]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:28 np0005466030 python3.9[187279]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:00:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:29.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:00:29 np0005466030 python3.9[187431]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:29.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:30 np0005466030 python3.9[187583]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 08:00:30 np0005466030 systemd[1]: Reloading.
Oct  2 08:00:30 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:00:30 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:00:31 np0005466030 systemd[1]: Starting libvirt logging daemon socket...
Oct  2 08:00:31 np0005466030 systemd[1]: Listening on libvirt logging daemon socket.
Oct  2 08:00:31 np0005466030 systemd[1]: Starting libvirt logging daemon admin socket...
Oct  2 08:00:31 np0005466030 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct  2 08:00:31 np0005466030 systemd[1]: Starting libvirt logging daemon...
Oct  2 08:00:31 np0005466030 systemd[1]: Started libvirt logging daemon.
Oct  2 08:00:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:31.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:00:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:31.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:00:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:31 np0005466030 python3.9[187775]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 08:00:31 np0005466030 systemd[1]: Reloading.
Oct  2 08:00:32 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:00:32 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:00:32 np0005466030 systemd[1]: Starting libvirt nodedev daemon socket...
Oct  2 08:00:32 np0005466030 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct  2 08:00:32 np0005466030 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct  2 08:00:32 np0005466030 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct  2 08:00:32 np0005466030 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct  2 08:00:32 np0005466030 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct  2 08:00:32 np0005466030 systemd[1]: Starting libvirt nodedev daemon...
Oct  2 08:00:32 np0005466030 systemd[1]: Started libvirt nodedev daemon.
Oct  2 08:00:33 np0005466030 python3.9[187989]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 08:00:33 np0005466030 systemd[1]: Reloading.
Oct  2 08:00:33 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:00:33 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:00:33 np0005466030 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct  2 08:00:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:00:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:33.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:00:33 np0005466030 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct  2 08:00:33 np0005466030 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct  2 08:00:33 np0005466030 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct  2 08:00:33 np0005466030 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct  2 08:00:33 np0005466030 systemd[1]: Starting libvirt proxy daemon...
Oct  2 08:00:33 np0005466030 systemd[1]: Started libvirt proxy daemon.
Oct  2 08:00:33 np0005466030 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct  2 08:00:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:00:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:33.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:00:33 np0005466030 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct  2 08:00:33 np0005466030 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct  2 08:00:34 np0005466030 python3.9[188206]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 08:00:34 np0005466030 systemd[1]: Reloading.
Oct  2 08:00:34 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:00:34 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:00:34 np0005466030 systemd[1]: Listening on libvirt locking daemon socket.
Oct  2 08:00:34 np0005466030 systemd[1]: Starting libvirt QEMU daemon socket...
Oct  2 08:00:34 np0005466030 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct  2 08:00:34 np0005466030 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct  2 08:00:34 np0005466030 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct  2 08:00:34 np0005466030 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct  2 08:00:34 np0005466030 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct  2 08:00:34 np0005466030 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct  2 08:00:34 np0005466030 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct  2 08:00:34 np0005466030 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct  2 08:00:34 np0005466030 systemd[1]: Starting libvirt QEMU daemon...
Oct  2 08:00:34 np0005466030 systemd[1]: Started libvirt QEMU daemon.
Oct  2 08:00:34 np0005466030 setroubleshoot[188026]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l b1f1a135-9903-412b-a227-3feac8724652
Oct  2 08:00:34 np0005466030 setroubleshoot[188026]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct  2 08:00:34 np0005466030 setroubleshoot[188026]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l b1f1a135-9903-412b-a227-3feac8724652
Oct  2 08:00:34 np0005466030 setroubleshoot[188026]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct  2 08:00:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:35.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:35 np0005466030 python3.9[188421]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 08:00:35 np0005466030 systemd[1]: Reloading.
Oct  2 08:00:35 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:00:35 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:00:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:35.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:35 np0005466030 systemd[1]: Starting libvirt secret daemon socket...
Oct  2 08:00:35 np0005466030 systemd[1]: Listening on libvirt secret daemon socket.
Oct  2 08:00:35 np0005466030 systemd[1]: Starting libvirt secret daemon admin socket...
Oct  2 08:00:35 np0005466030 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct  2 08:00:35 np0005466030 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct  2 08:00:35 np0005466030 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct  2 08:00:35 np0005466030 systemd[1]: Starting libvirt secret daemon...
Oct  2 08:00:35 np0005466030 systemd[1]: Started libvirt secret daemon.
Oct  2 08:00:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:37 np0005466030 python3.9[188630]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:00:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:37.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:00:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:37.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:37 np0005466030 python3.9[188782]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 08:00:38 np0005466030 python3.9[188934]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:00:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:39.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:39 np0005466030 python3.9[189088]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 08:00:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:39.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:40 np0005466030 python3.9[189238]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:40 np0005466030 python3.9[189359]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406439.973228-3365-113231454245175/.source.xml follow=False _original_basename=secret.xml.j2 checksum=9d9565ec21a9799171bafbb06d2141d5e5510d7d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:41.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:41 np0005466030 python3.9[189511]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 20fdc58c-b037-5094-a8ef-d490aa7c36f3#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:00:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:41.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:42 np0005466030 python3.9[189673]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:43.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:43.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:44 np0005466030 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct  2 08:00:44 np0005466030 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct  2 08:00:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:45.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:00:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:45.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:00:45 np0005466030 python3.9[190136]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:46 np0005466030 python3.9[190288]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:47 np0005466030 python3.9[190411]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406446.1559014-3530-28948947166296/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:47.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:47.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:48 np0005466030 python3.9[190563]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:49 np0005466030 python3.9[190715]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:49.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:49 np0005466030 python3.9[190793]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:49.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:50 np0005466030 python3.9[190945]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:50 np0005466030 python3.9[191023]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.emaho68x recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:51.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:51 np0005466030 python3.9[191175]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct  2 08:00:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:51.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct  2 08:00:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:51 np0005466030 python3.9[191253]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:52 np0005466030 python3.9[191405]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:00:53 np0005466030 podman[191456]: 2025-10-02 12:00:53.076854999 +0000 UTC m=+0.093210620 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:00:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:53.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:53 np0005466030 auditd[703]: Audit daemon rotating log files
Oct  2 08:00:53 np0005466030 python3[191700]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  2 08:00:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:53.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:54 np0005466030 python3.9[191868]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:54 np0005466030 python3.9[191946]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:54 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:00:54 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:00:54 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:00:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:55.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:00:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:55.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:00:55 np0005466030 python3.9[192098]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:56 np0005466030 python3.9[192176]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:00:57 np0005466030 python3.9[192328]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:57.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:57 np0005466030 python3.9[192406]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:00:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:57.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:00:57 np0005466030 podman[192407]: 2025-10-02 12:00:57.827596779 +0000 UTC m=+0.073550725 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 08:00:58 np0005466030 python3.9[192577]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:00:59 np0005466030 python3.9[192655]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:00:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:59.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:00:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:00:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:59.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:00:59 np0005466030 python3.9[192807]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:00 np0005466030 python3.9[192932]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406459.329464-3905-235630255068616/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:01 np0005466030 python3.9[193145]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:01.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:01:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:01:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:01.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:02 np0005466030 python3.9[193297]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:01:03 np0005466030 python3.9[193452]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:03.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:03.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:03 np0005466030 python3.9[193604]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:01:04 np0005466030 python3.9[193757]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:01:05 np0005466030 python3.9[193911]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:01:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:05.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:05.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:06 np0005466030 python3.9[194066]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:06 np0005466030 python3.9[194218]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:07 np0005466030 python3.9[194341]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406466.4714315-4121-3569866354244/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:07.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:07.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:08 np0005466030 python3.9[194493]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:08 np0005466030 python3.9[194616]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406467.8983197-4166-202496219478751/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:09.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:09 np0005466030 python3.9[194768]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:09.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:10 np0005466030 python3.9[194891]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406469.2594056-4211-25809470708168/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:11 np0005466030 python3.9[195043]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:01:11 np0005466030 systemd[1]: Reloading.
Oct  2 08:01:11 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:01:11 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:01:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:01:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:11.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:01:11 np0005466030 systemd[1]: Reached target edpm_libvirt.target.
Oct  2 08:01:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:11.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:12 np0005466030 python3.9[195233]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  2 08:01:12 np0005466030 systemd[1]: Reloading.
Oct  2 08:01:12 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:01:12 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:01:13 np0005466030 systemd[1]: Reloading.
Oct  2 08:01:13 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:01:13 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:01:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:13.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:13.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:13 np0005466030 systemd[1]: session-49.scope: Deactivated successfully.
Oct  2 08:01:13 np0005466030 systemd[1]: session-49.scope: Consumed 3min 24.005s CPU time.
Oct  2 08:01:13 np0005466030 systemd-logind[795]: Session 49 logged out. Waiting for processes to exit.
Oct  2 08:01:13 np0005466030 systemd-logind[795]: Removed session 49.
Oct  2 08:01:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:15.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:15.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:17.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:17.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:18 np0005466030 systemd-logind[795]: New session 50 of user zuul.
Oct  2 08:01:18 np0005466030 systemd[1]: Started Session 50 of User zuul.
Oct  2 08:01:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:19.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:19 np0005466030 python3.9[195482]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 08:01:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 08:01:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:19.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 08:01:21 np0005466030 python3.9[195638]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:21.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:21.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:21 np0005466030 python3.9[195790]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:22 np0005466030 python3.9[195942]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:23 np0005466030 python3.9[196094]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 08:01:23 np0005466030 podman[196095]: 2025-10-02 12:01:23.255561051 +0000 UTC m=+0.106647361 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:01:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:01:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:23.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:01:23 np0005466030 python3.9[196272]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:23.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:24 np0005466030 python3.9[196424]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:01:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:25.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:25.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:01:25.902 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:01:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:01:25.902 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:01:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:01:25.902 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:01:26 np0005466030 python3.9[196578]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:01:26 np0005466030 systemd[1]: Reloading.
Oct  2 08:01:26 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:01:26 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:01:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:27 np0005466030 python3.9[196766]: ansible-ansible.builtin.service_facts Invoked
Oct  2 08:01:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:27.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:27 np0005466030 network[196783]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 08:01:27 np0005466030 network[196784]: 'network-scripts' will be removed from distribution in near future.
Oct  2 08:01:27 np0005466030 network[196785]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 08:01:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:27.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:28 np0005466030 podman[196791]: 2025-10-02 12:01:28.457050677 +0000 UTC m=+0.053320791 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 08:01:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:01:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:29.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:01:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:29.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:31.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:31.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:31 np0005466030 python3.9[197077]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:01:32 np0005466030 systemd[1]: Reloading.
Oct  2 08:01:32 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:01:32 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:01:33 np0005466030 python3.9[197263]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:01:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:33.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:33.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:34 np0005466030 python3.9[197415]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  2 08:01:34 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:01:34 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:01:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:35.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:35 np0005466030 podman[197427]: 2025-10-02 12:01:35.515963812 +0000 UTC m=+1.296068030 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  2 08:01:35 np0005466030 podman[197486]: 2025-10-02 12:01:35.645124676 +0000 UTC m=+0.043483492 container create 26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:01:35 np0005466030 NetworkManager[44960]: <info>  [1759406495.6635] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/23)
Oct  2 08:01:35 np0005466030 kernel: podman0: port 1(veth0) entered blocking state
Oct  2 08:01:35 np0005466030 kernel: podman0: port 1(veth0) entered disabled state
Oct  2 08:01:35 np0005466030 kernel: veth0: entered allmulticast mode
Oct  2 08:01:35 np0005466030 kernel: veth0: entered promiscuous mode
Oct  2 08:01:35 np0005466030 kernel: podman0: port 1(veth0) entered blocking state
Oct  2 08:01:35 np0005466030 kernel: podman0: port 1(veth0) entered forwarding state
Oct  2 08:01:35 np0005466030 NetworkManager[44960]: <info>  [1759406495.6810] device (veth0): carrier: link connected
Oct  2 08:01:35 np0005466030 NetworkManager[44960]: <info>  [1759406495.6812] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct  2 08:01:35 np0005466030 NetworkManager[44960]: <info>  [1759406495.6818] device (podman0): carrier: link connected
Oct  2 08:01:35 np0005466030 systemd-udevd[197514]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:01:35 np0005466030 systemd-udevd[197518]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:01:35 np0005466030 NetworkManager[44960]: <info>  [1759406495.7109] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:01:35 np0005466030 NetworkManager[44960]: <info>  [1759406495.7116] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:01:35 np0005466030 NetworkManager[44960]: <info>  [1759406495.7122] device (podman0): Activation: starting connection 'podman0' (9089a157-fe13-4347-90b3-a7891003e76b)
Oct  2 08:01:35 np0005466030 NetworkManager[44960]: <info>  [1759406495.7142] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  2 08:01:35 np0005466030 NetworkManager[44960]: <info>  [1759406495.7144] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  2 08:01:35 np0005466030 NetworkManager[44960]: <info>  [1759406495.7145] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  2 08:01:35 np0005466030 NetworkManager[44960]: <info>  [1759406495.7146] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  2 08:01:35 np0005466030 podman[197486]: 2025-10-02 12:01:35.625757291 +0000 UTC m=+0.024116137 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  2 08:01:35 np0005466030 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  2 08:01:35 np0005466030 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  2 08:01:35 np0005466030 NetworkManager[44960]: <info>  [1759406495.7405] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  2 08:01:35 np0005466030 NetworkManager[44960]: <info>  [1759406495.7408] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  2 08:01:35 np0005466030 NetworkManager[44960]: <info>  [1759406495.7415] device (podman0): Activation: successful, device activated.
Oct  2 08:01:35 np0005466030 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct  2 08:01:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:35.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:35 np0005466030 systemd[1]: Started libpod-conmon-26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646.scope.
Oct  2 08:01:35 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:01:35 np0005466030 podman[197486]: 2025-10-02 12:01:35.980019045 +0000 UTC m=+0.378377881 container init 26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 08:01:35 np0005466030 podman[197486]: 2025-10-02 12:01:35.988216172 +0000 UTC m=+0.386574988 container start 26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:01:35 np0005466030 iscsid_config[197643]: iqn.1994-05.com.redhat:d783e47ecf#015
Oct  2 08:01:35 np0005466030 systemd[1]: libpod-26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646.scope: Deactivated successfully.
Oct  2 08:01:36 np0005466030 podman[197486]: 2025-10-02 12:01:36.006416161 +0000 UTC m=+0.404774997 container attach 26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:01:36 np0005466030 podman[197486]: 2025-10-02 12:01:36.008221318 +0000 UTC m=+0.406580134 container died 26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:01:36 np0005466030 kernel: podman0: port 1(veth0) entered disabled state
Oct  2 08:01:36 np0005466030 kernel: veth0 (unregistering): left allmulticast mode
Oct  2 08:01:36 np0005466030 kernel: veth0 (unregistering): left promiscuous mode
Oct  2 08:01:36 np0005466030 kernel: podman0: port 1(veth0) entered disabled state
Oct  2 08:01:36 np0005466030 NetworkManager[44960]: <info>  [1759406496.1104] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:01:36 np0005466030 systemd[1]: run-netns-netns\x2dba03a981\x2db8af\x2d541a\x2d970f\x2d5f3cb3a85b55.mount: Deactivated successfully.
Oct  2 08:01:36 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646-userdata-shm.mount: Deactivated successfully.
Oct  2 08:01:36 np0005466030 podman[197486]: 2025-10-02 12:01:36.515295167 +0000 UTC m=+0.913653983 container remove 26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:01:36 np0005466030 python3.9[197415]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid:current-podified /usr/sbin/iscsi-iname
Oct  2 08:01:36 np0005466030 systemd[1]: libpod-conmon-26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646.scope: Deactivated successfully.
Oct  2 08:01:36 np0005466030 python3.9[197415]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: #012DEPRECATED command:#012It is recommended to use Quadlets for running containers and pods under systemd.#012#012Please refer to podman-systemd.unit(5) for details.#012Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct  2 08:01:36 np0005466030 systemd[1]: var-lib-containers-storage-overlay-18a9268dc48256ea007b3d40fce05292ef3b4d3f1b2689cc7f82d07808bb7a33-merged.mount: Deactivated successfully.
Oct  2 08:01:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:37.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:01:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:37.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:01:38 np0005466030 python3.9[197884]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:39 np0005466030 python3.9[198007]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406498.184692-323-233070866336056/.source.iscsi _original_basename=.fk2hqacq follow=False checksum=d140a8b25ccedd64545d8857068a83f3cc83c4ae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:39.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:01:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:39.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:01:40 np0005466030 python3.9[198159]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:40 np0005466030 python3.9[198309]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:01:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:01:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:41.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:01:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:41.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:41 np0005466030 python3.9[198463]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:42 np0005466030 python3.9[198615]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:43.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:43 np0005466030 python3.9[198767]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:43.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:43 np0005466030 python3.9[198845]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:44 np0005466030 python3.9[198997]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.692256) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504692323, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1520, "num_deletes": 501, "total_data_size": 2980063, "memory_usage": 3024888, "flush_reason": "Manual Compaction"}
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504712636, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 1162013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14538, "largest_seqno": 16053, "table_properties": {"data_size": 1157196, "index_size": 1765, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14875, "raw_average_key_size": 19, "raw_value_size": 1144903, "raw_average_value_size": 1465, "num_data_blocks": 81, "num_entries": 781, "num_filter_entries": 781, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406390, "oldest_key_time": 1759406390, "file_creation_time": 1759406504, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 20495 microseconds, and 4147 cpu microseconds.
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.712778) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 1162013 bytes OK
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.712802) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.716329) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.716378) EVENT_LOG_v1 {"time_micros": 1759406504716367, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.716402) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2972066, prev total WAL file size 2972066, number of live WAL files 2.
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.717354) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1134KB)], [27(10MB)]
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504717411, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 12690614, "oldest_snapshot_seqno": -1}
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 4150 keys, 7897474 bytes, temperature: kUnknown
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504800109, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 7897474, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7868425, "index_size": 17547, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 102626, "raw_average_key_size": 24, "raw_value_size": 7792014, "raw_average_value_size": 1877, "num_data_blocks": 739, "num_entries": 4150, "num_filter_entries": 4150, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759406504, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.800327) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7897474 bytes
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.804314) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.3 rd, 95.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 11.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(17.7) write-amplify(6.8) OK, records in: 5114, records dropped: 964 output_compression: NoCompression
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.804335) EVENT_LOG_v1 {"time_micros": 1759406504804326, "job": 14, "event": "compaction_finished", "compaction_time_micros": 82757, "compaction_time_cpu_micros": 17412, "output_level": 6, "num_output_files": 1, "total_output_size": 7897474, "num_input_records": 5114, "num_output_records": 4150, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504804594, "job": 14, "event": "table_file_deletion", "file_number": 29}
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504806245, "job": 14, "event": "table_file_deletion", "file_number": 27}
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.717218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.806298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.806303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.806305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.806306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:01:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.806307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:01:44 np0005466030 python3.9[199075]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:45 np0005466030 ceph-mgr[81282]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct  2 08:01:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:45.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:45.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:46 np0005466030 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  2 08:01:46 np0005466030 python3.9[199227]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:47 np0005466030 python3.9[199379]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:01:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:47.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:01:47 np0005466030 python3.9[199457]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:01:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:47.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:01:48 np0005466030 python3.9[199609]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:48 np0005466030 python3.9[199687]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:49.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:49.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:50 np0005466030 python3.9[199839]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:01:50 np0005466030 systemd[1]: Reloading.
Oct  2 08:01:50 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:01:50 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:01:51 np0005466030 python3.9[200028]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:01:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:51.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:01:51 np0005466030 python3.9[200106]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:51.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:52 np0005466030 python3.9[200258]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:52 np0005466030 python3.9[200336]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:53.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:53 np0005466030 podman[200460]: 2025-10-02 12:01:53.633336675 +0000 UTC m=+0.093574621 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:01:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:53.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:53 np0005466030 python3.9[200505]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:01:53 np0005466030 systemd[1]: Reloading.
Oct  2 08:01:53 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:01:53 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:01:54 np0005466030 systemd[1]: Starting Create netns directory...
Oct  2 08:01:54 np0005466030 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 08:01:54 np0005466030 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 08:01:54 np0005466030 systemd[1]: Finished Create netns directory.
Oct  2 08:01:55 np0005466030 python3.9[200707]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:55.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:55.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:55 np0005466030 python3.9[200859]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:56 np0005466030 python3.9[200982]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406515.5073578-785-166770813296322/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:01:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:57.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:57 np0005466030 python3.9[201134]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:01:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:57.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:58 np0005466030 python3.9[201286]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:01:58 np0005466030 podman[201381]: 2025-10-02 12:01:58.723621026 +0000 UTC m=+0.058962702 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:01:58 np0005466030 python3.9[201425]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406517.9081368-860-211046296052695/.source.json _original_basename=.uxz08149 follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:59.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:01:59 np0005466030 python3.9[201578]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:01:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:01:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:01:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:59.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:01.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:02:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:01.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:02:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:02:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:02:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:02:02 np0005466030 python3.9[202136]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct  2 08:02:02 np0005466030 python3.9[202288]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 08:02:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:02:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:03.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:02:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:02:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:03.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:02:03 np0005466030 python3.9[202440]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  2 08:02:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:05.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:05.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:06 np0005466030 python3[202619]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 08:02:06 np0005466030 podman[202658]: 2025-10-02 12:02:06.561226744 +0000 UTC m=+0.046465330 container create 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=iscsid, config_id=iscsid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:02:06 np0005466030 podman[202658]: 2025-10-02 12:02:06.540266847 +0000 UTC m=+0.025505453 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  2 08:02:06 np0005466030 python3[202619]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct  2 08:02:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:07.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:07 np0005466030 python3.9[202848]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:02:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:07.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:08 np0005466030 python3.9[203002]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:08 np0005466030 python3.9[203128]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:02:09 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:02:09 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:02:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:09.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:09 np0005466030 python3.9[203279]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406528.996802-1124-41831447520668/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:09.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:10 np0005466030 python3.9[203355]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 08:02:10 np0005466030 systemd[1]: Reloading.
Oct  2 08:02:10 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:02:10 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:02:11 np0005466030 python3.9[203465]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:02:11 np0005466030 systemd[1]: Reloading.
Oct  2 08:02:11 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:02:11 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:02:11 np0005466030 systemd[1]: Starting iscsid container...
Oct  2 08:02:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:11.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:11 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:02:11 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e74b51f65e3b362198b6608a9dfaa74005262918f6cba2ec69c451aa36d0aec/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct  2 08:02:11 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e74b51f65e3b362198b6608a9dfaa74005262918f6cba2ec69c451aa36d0aec/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 08:02:11 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e74b51f65e3b362198b6608a9dfaa74005262918f6cba2ec69c451aa36d0aec/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 08:02:11 np0005466030 systemd[1]: Started /usr/bin/podman healthcheck run 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9.
Oct  2 08:02:11 np0005466030 podman[203505]: 2025-10-02 12:02:11.586333119 +0000 UTC m=+0.120364712 container init 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true, container_name=iscsid)
Oct  2 08:02:11 np0005466030 iscsid[203521]: + sudo -E kolla_set_configs
Oct  2 08:02:11 np0005466030 podman[203505]: 2025-10-02 12:02:11.620708539 +0000 UTC m=+0.154740132 container start 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  2 08:02:11 np0005466030 podman[203505]: iscsid
Oct  2 08:02:11 np0005466030 systemd[1]: Started iscsid container.
Oct  2 08:02:11 np0005466030 systemd[1]: Created slice User Slice of UID 0.
Oct  2 08:02:11 np0005466030 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  2 08:02:11 np0005466030 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  2 08:02:11 np0005466030 systemd[1]: Starting User Manager for UID 0...
Oct  2 08:02:11 np0005466030 podman[203528]: 2025-10-02 12:02:11.69682985 +0000 UTC m=+0.062962959 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:02:11 np0005466030 systemd[1]: 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9-efbaf9eaede2ddd.service: Main process exited, code=exited, status=1/FAILURE
Oct  2 08:02:11 np0005466030 systemd[1]: 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9-efbaf9eaede2ddd.service: Failed with result 'exit-code'.
Oct  2 08:02:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:11 np0005466030 systemd[203547]: Queued start job for default target Main User Target.
Oct  2 08:02:11 np0005466030 systemd[203547]: Created slice User Application Slice.
Oct  2 08:02:11 np0005466030 systemd[203547]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  2 08:02:11 np0005466030 systemd[203547]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 08:02:11 np0005466030 systemd[203547]: Reached target Paths.
Oct  2 08:02:11 np0005466030 systemd[203547]: Reached target Timers.
Oct  2 08:02:11 np0005466030 systemd[203547]: Starting D-Bus User Message Bus Socket...
Oct  2 08:02:11 np0005466030 systemd[203547]: Starting Create User's Volatile Files and Directories...
Oct  2 08:02:11 np0005466030 systemd[203547]: Listening on D-Bus User Message Bus Socket.
Oct  2 08:02:11 np0005466030 systemd[203547]: Reached target Sockets.
Oct  2 08:02:11 np0005466030 systemd[203547]: Finished Create User's Volatile Files and Directories.
Oct  2 08:02:11 np0005466030 systemd[203547]: Reached target Basic System.
Oct  2 08:02:11 np0005466030 systemd[203547]: Reached target Main User Target.
Oct  2 08:02:11 np0005466030 systemd[203547]: Startup finished in 120ms.
Oct  2 08:02:11 np0005466030 systemd[1]: Started User Manager for UID 0.
Oct  2 08:02:11 np0005466030 systemd[1]: Started Session c3 of User root.
Oct  2 08:02:11 np0005466030 iscsid[203521]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 08:02:11 np0005466030 iscsid[203521]: INFO:__main__:Validating config file
Oct  2 08:02:11 np0005466030 iscsid[203521]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 08:02:11 np0005466030 iscsid[203521]: INFO:__main__:Writing out command to execute
Oct  2 08:02:11 np0005466030 systemd[1]: session-c3.scope: Deactivated successfully.
Oct  2 08:02:11 np0005466030 iscsid[203521]: ++ cat /run_command
Oct  2 08:02:11 np0005466030 iscsid[203521]: + CMD='/usr/sbin/iscsid -f'
Oct  2 08:02:11 np0005466030 iscsid[203521]: + ARGS=
Oct  2 08:02:11 np0005466030 iscsid[203521]: + sudo kolla_copy_cacerts
Oct  2 08:02:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:02:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:11.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:02:11 np0005466030 systemd[1]: Started Session c4 of User root.
Oct  2 08:02:11 np0005466030 systemd[1]: session-c4.scope: Deactivated successfully.
Oct  2 08:02:11 np0005466030 iscsid[203521]: + [[ ! -n '' ]]
Oct  2 08:02:11 np0005466030 iscsid[203521]: + . kolla_extend_start
Oct  2 08:02:11 np0005466030 iscsid[203521]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct  2 08:02:11 np0005466030 iscsid[203521]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct  2 08:02:11 np0005466030 iscsid[203521]: Running command: '/usr/sbin/iscsid -f'
Oct  2 08:02:11 np0005466030 iscsid[203521]: + umask 0022
Oct  2 08:02:11 np0005466030 iscsid[203521]: + exec /usr/sbin/iscsid -f
Oct  2 08:02:11 np0005466030 kernel: Loading iSCSI transport class v2.0-870.
Oct  2 08:02:13 np0005466030 python3.9[203727]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:02:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:02:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:13.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:02:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:13.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:13 np0005466030 python3.9[203879]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:15 np0005466030 python3.9[204031]: ansible-ansible.builtin.service_facts Invoked
Oct  2 08:02:15 np0005466030 network[204048]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 08:02:15 np0005466030 network[204049]: 'network-scripts' will be removed from distribution in near future.
Oct  2 08:02:15 np0005466030 network[204050]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 08:02:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct  2 08:02:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:15.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct  2 08:02:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:15.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:17.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:17.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:19.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:02:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:19.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:02:20 np0005466030 python3.9[204325]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 08:02:21 np0005466030 python3.9[204477]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct  2 08:02:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:02:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:21.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:02:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:21.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:21 np0005466030 python3.9[204633]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:21 np0005466030 systemd[1]: Stopping User Manager for UID 0...
Oct  2 08:02:21 np0005466030 systemd[203547]: Activating special unit Exit the Session...
Oct  2 08:02:21 np0005466030 systemd[203547]: Stopped target Main User Target.
Oct  2 08:02:21 np0005466030 systemd[203547]: Stopped target Basic System.
Oct  2 08:02:21 np0005466030 systemd[203547]: Stopped target Paths.
Oct  2 08:02:21 np0005466030 systemd[203547]: Stopped target Sockets.
Oct  2 08:02:21 np0005466030 systemd[203547]: Stopped target Timers.
Oct  2 08:02:21 np0005466030 systemd[203547]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 08:02:21 np0005466030 systemd[203547]: Closed D-Bus User Message Bus Socket.
Oct  2 08:02:21 np0005466030 systemd[203547]: Stopped Create User's Volatile Files and Directories.
Oct  2 08:02:21 np0005466030 systemd[203547]: Removed slice User Application Slice.
Oct  2 08:02:21 np0005466030 systemd[203547]: Reached target Shutdown.
Oct  2 08:02:21 np0005466030 systemd[203547]: Finished Exit the Session.
Oct  2 08:02:21 np0005466030 systemd[203547]: Reached target Exit the Session.
Oct  2 08:02:21 np0005466030 systemd[1]: user@0.service: Deactivated successfully.
Oct  2 08:02:21 np0005466030 systemd[1]: Stopped User Manager for UID 0.
Oct  2 08:02:21 np0005466030 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  2 08:02:22 np0005466030 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  2 08:02:22 np0005466030 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  2 08:02:22 np0005466030 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  2 08:02:22 np0005466030 systemd[1]: Removed slice User Slice of UID 0.
Oct  2 08:02:22 np0005466030 python3.9[204758]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406541.505878-1346-28758419086455/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:23 np0005466030 python3.9[204910]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:02:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:23.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:02:23 np0005466030 podman[204935]: 2025-10-02 12:02:23.858240444 +0000 UTC m=+0.108576442 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:02:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:23.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:24 np0005466030 python3.9[205089]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 08:02:24 np0005466030 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  2 08:02:24 np0005466030 systemd[1]: Stopped Load Kernel Modules.
Oct  2 08:02:24 np0005466030 systemd[1]: Stopping Load Kernel Modules...
Oct  2 08:02:24 np0005466030 systemd[1]: Starting Load Kernel Modules...
Oct  2 08:02:24 np0005466030 systemd[1]: Finished Load Kernel Modules.
Oct  2 08:02:25 np0005466030 python3.9[205245]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:02:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:25.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:25.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:02:25.902 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:02:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:02:25.903 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:02:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:02:25.903 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:02:26 np0005466030 python3.9[205397]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:02:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:27 np0005466030 python3.9[205549]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:02:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:27.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:27 np0005466030 python3.9[205701]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:27.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:28 np0005466030 python3.9[205824]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406547.398631-1520-128691218632933/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:29 np0005466030 podman[205948]: 2025-10-02 12:02:29.227024303 +0000 UTC m=+0.055020328 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct  2 08:02:29 np0005466030 python3.9[205993]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:02:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:29.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:29.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:30 np0005466030 python3.9[206148]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:31 np0005466030 python3.9[206300]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:31.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:31 np0005466030 python3.9[206452]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:31.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:32 np0005466030 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct  2 08:02:32 np0005466030 python3.9[206605]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:33 np0005466030 python3.9[206757]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:33 np0005466030 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  2 08:02:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:33.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:33 np0005466030 python3.9[206910]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:33.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:34 np0005466030 python3.9[207062]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:35 np0005466030 python3.9[207214]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:02:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:02:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:35.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:02:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:35.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:36 np0005466030 python3.9[207368]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:37 np0005466030 python3.9[207520]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:02:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:37.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:02:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:37.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:02:38 np0005466030 python3.9[207672]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:38 np0005466030 python3.9[207750]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:02:39 np0005466030 python3.9[207902]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:39.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:39 np0005466030 python3.9[207980]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:02:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:39.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:40 np0005466030 python3.9[208132]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:41 np0005466030 python3.9[208284]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:41.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:41 np0005466030 python3.9[208362]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:41 np0005466030 podman[208363]: 2025-10-02 12:02:41.82878057 +0000 UTC m=+0.073339855 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 08:02:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:41.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:42 np0005466030 python3.9[208536]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:42 np0005466030 python3.9[208614]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:43.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:43 np0005466030 python3.9[208766]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:02:43 np0005466030 systemd[1]: Reloading.
Oct  2 08:02:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:43.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:43 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:02:43 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:02:44 np0005466030 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  2 08:02:44 np0005466030 systemd[1]: virtqemud.service: Deactivated successfully.
Oct  2 08:02:45 np0005466030 python3.9[208958]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:45 np0005466030 python3.9[209036]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:45.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:45.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:46 np0005466030 python3.9[209188]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:46 np0005466030 python3.9[209266]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:47.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:47 np0005466030 python3.9[209418]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:02:47 np0005466030 systemd[1]: Reloading.
Oct  2 08:02:47 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:02:47 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:02:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:02:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:47.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:02:48 np0005466030 systemd[1]: Starting Create netns directory...
Oct  2 08:02:48 np0005466030 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  2 08:02:48 np0005466030 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  2 08:02:48 np0005466030 systemd[1]: Finished Create netns directory.
Oct  2 08:02:49 np0005466030 python3.9[209613]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:02:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:02:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:49.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:02:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:49.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:50 np0005466030 python3.9[209765]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:50 np0005466030 python3.9[209888]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406569.5594912-2141-52551594554319/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:02:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:02:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:51.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:02:51 np0005466030 python3.9[210040]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:02:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:51.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:52 np0005466030 python3.9[210192]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:02:53 np0005466030 python3.9[210315]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406572.0418603-2216-109536775016369/.source.json _original_basename=.4srd60oy follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:53.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:53 np0005466030 python3.9[210467]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:02:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:53.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:54 np0005466030 podman[210591]: 2025-10-02 12:02:54.6735464 +0000 UTC m=+0.090087631 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 08:02:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:55.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:02:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:55.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:02:56 np0005466030 python3.9[210919]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct  2 08:02:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:02:57 np0005466030 python3.9[211071]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 08:02:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:57.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:57.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:58 np0005466030 python3.9[211223]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  2 08:02:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:59.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:02:59 np0005466030 podman[211275]: 2025-10-02 12:02:59.794142784 +0000 UTC m=+0.052375637 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:02:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:02:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:02:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:59.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:00 np0005466030 python3[211422]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 08:03:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:01.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:01 np0005466030 podman[211434]: 2025-10-02 12:03:01.641594685 +0000 UTC m=+1.225001220 image pull d8d739f82a6fecf9df690e49539b589e74665b54e36448657b874630717d5bd1 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct  2 08:03:01 np0005466030 podman[211491]: 2025-10-02 12:03:01.768005416 +0000 UTC m=+0.040007668 container create a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 08:03:01 np0005466030 podman[211491]: 2025-10-02 12:03:01.745156308 +0000 UTC m=+0.017158580 image pull d8d739f82a6fecf9df690e49539b589e74665b54e36448657b874630717d5bd1 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct  2 08:03:01 np0005466030 python3[211422]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct  2 08:03:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:03:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:01.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:03:03 np0005466030 python3.9[211681]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:03:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:03:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:03.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:03:03 np0005466030 python3.9[211835]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:03.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:04 np0005466030 python3.9[211911]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:03:05 np0005466030 python3.9[212062]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406584.4633915-2480-180046824582250/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:03:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:05.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:03:05 np0005466030 python3.9[212138]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 08:03:05 np0005466030 systemd[1]: Reloading.
Oct  2 08:03:05 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:03:05 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:03:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:05.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:06 np0005466030 python3.9[212249]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:03:06 np0005466030 systemd[1]: Reloading.
Oct  2 08:03:06 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:03:06 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:03:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:06 np0005466030 systemd[1]: Starting multipathd container...
Oct  2 08:03:07 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:03:07 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2451788c6336cd767cffd00f70fbe6e392396a233e9e556ccfff54ee79567c9c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  2 08:03:07 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2451788c6336cd767cffd00f70fbe6e392396a233e9e556ccfff54ee79567c9c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 08:03:07 np0005466030 systemd[1]: Started /usr/bin/podman healthcheck run a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a.
Oct  2 08:03:07 np0005466030 podman[212289]: 2025-10-02 12:03:07.056448823 +0000 UTC m=+0.099987313 container init a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct  2 08:03:07 np0005466030 multipathd[212304]: + sudo -E kolla_set_configs
Oct  2 08:03:07 np0005466030 podman[212289]: 2025-10-02 12:03:07.093681852 +0000 UTC m=+0.137220372 container start a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:03:07 np0005466030 multipathd[212304]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 08:03:07 np0005466030 multipathd[212304]: INFO:__main__:Validating config file
Oct  2 08:03:07 np0005466030 multipathd[212304]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 08:03:07 np0005466030 multipathd[212304]: INFO:__main__:Writing out command to execute
Oct  2 08:03:07 np0005466030 multipathd[212304]: ++ cat /run_command
Oct  2 08:03:07 np0005466030 multipathd[212304]: + CMD='/usr/sbin/multipathd -d'
Oct  2 08:03:07 np0005466030 multipathd[212304]: + ARGS=
Oct  2 08:03:07 np0005466030 multipathd[212304]: + sudo kolla_copy_cacerts
Oct  2 08:03:07 np0005466030 podman[212289]: multipathd
Oct  2 08:03:07 np0005466030 systemd[1]: Started multipathd container.
Oct  2 08:03:07 np0005466030 multipathd[212304]: + [[ ! -n '' ]]
Oct  2 08:03:07 np0005466030 multipathd[212304]: + . kolla_extend_start
Oct  2 08:03:07 np0005466030 multipathd[212304]: Running command: '/usr/sbin/multipathd -d'
Oct  2 08:03:07 np0005466030 multipathd[212304]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  2 08:03:07 np0005466030 multipathd[212304]: + umask 0022
Oct  2 08:03:07 np0005466030 multipathd[212304]: + exec /usr/sbin/multipathd -d
Oct  2 08:03:07 np0005466030 multipathd[212304]: 4450.783103 | --------start up--------
Oct  2 08:03:07 np0005466030 multipathd[212304]: 4450.783123 | read /etc/multipath.conf
Oct  2 08:03:07 np0005466030 multipathd[212304]: 4450.788595 | path checkers start up
Oct  2 08:03:07 np0005466030 podman[212311]: 2025-10-02 12:03:07.198137023 +0000 UTC m=+0.094614913 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:03:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:07.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:07.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:08 np0005466030 python3.9[212492]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:03:09 np0005466030 python3.9[212760]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:03:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:09.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:09.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:10 np0005466030 python3.9[212942]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 08:03:10 np0005466030 systemd[1]: Stopping multipathd container...
Oct  2 08:03:10 np0005466030 multipathd[212304]: 4453.983908 | exit (signal)
Oct  2 08:03:10 np0005466030 multipathd[212304]: 4453.984515 | --------shut down-------
Oct  2 08:03:10 np0005466030 systemd[1]: libpod-a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a.scope: Deactivated successfully.
Oct  2 08:03:10 np0005466030 podman[212946]: 2025-10-02 12:03:10.39787092 +0000 UTC m=+0.069624808 container died a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:03:10 np0005466030 systemd[1]: a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a-665ada711ac93a87.timer: Deactivated successfully.
Oct  2 08:03:10 np0005466030 systemd[1]: Stopped /usr/bin/podman healthcheck run a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a.
Oct  2 08:03:10 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a-userdata-shm.mount: Deactivated successfully.
Oct  2 08:03:10 np0005466030 systemd[1]: var-lib-containers-storage-overlay-2451788c6336cd767cffd00f70fbe6e392396a233e9e556ccfff54ee79567c9c-merged.mount: Deactivated successfully.
Oct  2 08:03:10 np0005466030 podman[212946]: 2025-10-02 12:03:10.545762946 +0000 UTC m=+0.217516824 container cleanup a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:03:10 np0005466030 podman[212946]: multipathd
Oct  2 08:03:10 np0005466030 podman[212973]: multipathd
Oct  2 08:03:10 np0005466030 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct  2 08:03:10 np0005466030 systemd[1]: Stopped multipathd container.
Oct  2 08:03:10 np0005466030 systemd[1]: Starting multipathd container...
Oct  2 08:03:10 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:03:10 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2451788c6336cd767cffd00f70fbe6e392396a233e9e556ccfff54ee79567c9c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  2 08:03:10 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2451788c6336cd767cffd00f70fbe6e392396a233e9e556ccfff54ee79567c9c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 08:03:10 np0005466030 systemd[1]: Started /usr/bin/podman healthcheck run a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a.
Oct  2 08:03:10 np0005466030 podman[212986]: 2025-10-02 12:03:10.740441381 +0000 UTC m=+0.103402149 container init a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd)
Oct  2 08:03:10 np0005466030 multipathd[213002]: + sudo -E kolla_set_configs
Oct  2 08:03:10 np0005466030 podman[212986]: 2025-10-02 12:03:10.768582185 +0000 UTC m=+0.131542943 container start a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:03:10 np0005466030 podman[212986]: multipathd
Oct  2 08:03:10 np0005466030 systemd[1]: Started multipathd container.
Oct  2 08:03:10 np0005466030 multipathd[213002]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 08:03:10 np0005466030 multipathd[213002]: INFO:__main__:Validating config file
Oct  2 08:03:10 np0005466030 multipathd[213002]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 08:03:10 np0005466030 multipathd[213002]: INFO:__main__:Writing out command to execute
Oct  2 08:03:10 np0005466030 multipathd[213002]: ++ cat /run_command
Oct  2 08:03:10 np0005466030 multipathd[213002]: + CMD='/usr/sbin/multipathd -d'
Oct  2 08:03:10 np0005466030 multipathd[213002]: + ARGS=
Oct  2 08:03:10 np0005466030 multipathd[213002]: + sudo kolla_copy_cacerts
Oct  2 08:03:10 np0005466030 multipathd[213002]: + [[ ! -n '' ]]
Oct  2 08:03:10 np0005466030 multipathd[213002]: + . kolla_extend_start
Oct  2 08:03:10 np0005466030 multipathd[213002]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  2 08:03:10 np0005466030 multipathd[213002]: Running command: '/usr/sbin/multipathd -d'
Oct  2 08:03:10 np0005466030 multipathd[213002]: + umask 0022
Oct  2 08:03:10 np0005466030 multipathd[213002]: + exec /usr/sbin/multipathd -d
Oct  2 08:03:10 np0005466030 podman[213009]: 2025-10-02 12:03:10.843340373 +0000 UTC m=+0.063870098 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct  2 08:03:10 np0005466030 systemd[1]: a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a-aef7d2fc3870334.service: Main process exited, code=exited, status=1/FAILURE
Oct  2 08:03:10 np0005466030 systemd[1]: a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a-aef7d2fc3870334.service: Failed with result 'exit-code'.
Oct  2 08:03:10 np0005466030 multipathd[213002]: 4454.469036 | --------start up--------
Oct  2 08:03:10 np0005466030 multipathd[213002]: 4454.469050 | read /etc/multipath.conf
Oct  2 08:03:10 np0005466030 multipathd[213002]: 4454.474146 | path checkers start up
Oct  2 08:03:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:11.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:11 np0005466030 python3.9[213193]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:11 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:03:11 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:03:11 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:03:11 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:03:11 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:03:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:11.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:12 np0005466030 podman[213317]: 2025-10-02 12:03:12.530018394 +0000 UTC m=+0.052771179 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  2 08:03:12 np0005466030 python3.9[213361]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  2 08:03:13 np0005466030 python3.9[213514]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct  2 08:03:13 np0005466030 kernel: Key type psk registered
Oct  2 08:03:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:13.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:13.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:14 np0005466030 python3.9[213675]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:03:14 np0005466030 python3.9[213798]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406593.7787657-2720-212600043158228/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:15.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:15 np0005466030 python3.9[213950]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:15.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:16 np0005466030 python3.9[214102]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 08:03:16 np0005466030 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  2 08:03:16 np0005466030 systemd[1]: Stopped Load Kernel Modules.
Oct  2 08:03:16 np0005466030 systemd[1]: Stopping Load Kernel Modules...
Oct  2 08:03:16 np0005466030 systemd[1]: Starting Load Kernel Modules...
Oct  2 08:03:16 np0005466030 systemd[1]: Finished Load Kernel Modules.
Oct  2 08:03:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:03:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:17.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:03:17 np0005466030 python3.9[214258]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  2 08:03:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:03:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:17.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:03:18 np0005466030 python3.9[214392]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  2 08:03:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:03:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:03:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:19.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:19.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:21.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:21.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:23.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:23.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:24 np0005466030 podman[214397]: 2025-10-02 12:03:24.846063875 +0000 UTC m=+0.103965887 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:03:25 np0005466030 systemd[1]: Reloading.
Oct  2 08:03:25 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:03:25 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:03:25 np0005466030 systemd[1]: Reloading.
Oct  2 08:03:25 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:03:25 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:03:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:03:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:25.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:03:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:03:25.903 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:03:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:03:25.904 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:03:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:03:25.904 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:03:25 np0005466030 systemd-logind[795]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  2 08:03:25 np0005466030 systemd-logind[795]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  2 08:03:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:25.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:26 np0005466030 lvm[214534]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 08:03:26 np0005466030 lvm[214534]: VG ceph_vg0 finished
Oct  2 08:03:26 np0005466030 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  2 08:03:26 np0005466030 systemd[1]: Starting man-db-cache-update.service...
Oct  2 08:03:26 np0005466030 systemd[1]: Reloading.
Oct  2 08:03:26 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:03:26 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:03:26 np0005466030 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  2 08:03:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:27 np0005466030 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  2 08:03:27 np0005466030 systemd[1]: Finished man-db-cache-update.service.
Oct  2 08:03:27 np0005466030 systemd[1]: man-db-cache-update.service: Consumed 1.579s CPU time.
Oct  2 08:03:27 np0005466030 systemd[1]: run-r05d40a3efd9e43b8964a6196b6ca804e.service: Deactivated successfully.
Oct  2 08:03:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:27.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:27 np0005466030 python3.9[215871]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:27.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:28 np0005466030 python3.9[216021]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  2 08:03:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:29.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:29 np0005466030 python3.9[216177]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:03:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:29.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:03:30 np0005466030 podman[216254]: 2025-10-02 12:03:30.815396129 +0000 UTC m=+0.063393542 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:03:31 np0005466030 python3.9[216346]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 08:03:31 np0005466030 systemd[1]: Reloading.
Oct  2 08:03:31 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:03:31 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:03:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:31.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:31.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:32 np0005466030 python3.9[216531]: ansible-ansible.builtin.service_facts Invoked
Oct  2 08:03:32 np0005466030 network[216548]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  2 08:03:32 np0005466030 network[216549]: 'network-scripts' will be removed from distribution in near future.
Oct  2 08:03:32 np0005466030 network[216550]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  2 08:03:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:03:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:33.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:03:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:33.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:35.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:36.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:37.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:37 np0005466030 python3.9[216828]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:03:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:38.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:38 np0005466030 python3.9[216981]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:03:39 np0005466030 python3.9[217134]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:03:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:03:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:39.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:03:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:40.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:40 np0005466030 python3.9[217287]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:03:40 np0005466030 python3.9[217440]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:03:41 np0005466030 podman[217442]: 2025-10-02 12:03:41.086006671 +0000 UTC m=+0.067690737 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3)
Oct  2 08:03:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:41.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:41 np0005466030 python3.9[217613]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:03:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:42.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:42 np0005466030 python3.9[217766]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:03:42 np0005466030 podman[217768]: 2025-10-02 12:03:42.687101624 +0000 UTC m=+0.061509634 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3)
Oct  2 08:03:43 np0005466030 python3.9[217939]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:03:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:43.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:44.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:44 np0005466030 python3.9[218092]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.761934) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624762036, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1323, "num_deletes": 255, "total_data_size": 3127314, "memory_usage": 3172824, "flush_reason": "Manual Compaction"}
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624772174, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 2055989, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16058, "largest_seqno": 17376, "table_properties": {"data_size": 2050299, "index_size": 3085, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11336, "raw_average_key_size": 18, "raw_value_size": 2038910, "raw_average_value_size": 3353, "num_data_blocks": 140, "num_entries": 608, "num_filter_entries": 608, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406505, "oldest_key_time": 1759406505, "file_creation_time": 1759406624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 10282 microseconds, and 5763 cpu microseconds.
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.772226) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 2055989 bytes OK
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.772247) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.774115) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.774131) EVENT_LOG_v1 {"time_micros": 1759406624774126, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.774148) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 3121062, prev total WAL file size 3121062, number of live WAL files 2.
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.775060) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(2007KB)], [30(7712KB)]
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624775129, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 9953463, "oldest_snapshot_seqno": -1}
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4233 keys, 9583961 bytes, temperature: kUnknown
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624839407, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 9583961, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9552969, "index_size": 19298, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 105447, "raw_average_key_size": 24, "raw_value_size": 9473577, "raw_average_value_size": 2238, "num_data_blocks": 805, "num_entries": 4233, "num_filter_entries": 4233, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759406624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.839696) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 9583961 bytes
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.841108) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.7 rd, 148.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.5 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(9.5) write-amplify(4.7) OK, records in: 4758, records dropped: 525 output_compression: NoCompression
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.841123) EVENT_LOG_v1 {"time_micros": 1759406624841116, "job": 16, "event": "compaction_finished", "compaction_time_micros": 64361, "compaction_time_cpu_micros": 34846, "output_level": 6, "num_output_files": 1, "total_output_size": 9583961, "num_input_records": 4758, "num_output_records": 4233, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624841570, "job": 16, "event": "table_file_deletion", "file_number": 32}
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624842608, "job": 16, "event": "table_file_deletion", "file_number": 30}
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.774945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.842632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.842636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.842638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.842644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.842645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:45 np0005466030 python3.9[218244]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:45.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:45 np0005466030 python3.9[218396]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:03:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:46.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:03:46 np0005466030 python3.9[218548]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:47 np0005466030 python3.9[218700]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:47.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:47 np0005466030 python3.9[218852]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:48.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:48 np0005466030 python3.9[219004]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:48 np0005466030 python3.9[219156]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:03:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:49.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:03:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:50.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:50 np0005466030 python3.9[219308]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:51 np0005466030 python3.9[219460]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:51.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:51 np0005466030 python3.9[219612]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:03:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:52.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:53.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:03:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:54.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.458097) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635458528, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 354, "num_deletes": 251, "total_data_size": 310838, "memory_usage": 318808, "flush_reason": "Manual Compaction"}
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635462105, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 204944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17381, "largest_seqno": 17730, "table_properties": {"data_size": 202801, "index_size": 307, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5354, "raw_average_key_size": 18, "raw_value_size": 198624, "raw_average_value_size": 684, "num_data_blocks": 14, "num_entries": 290, "num_filter_entries": 290, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406625, "oldest_key_time": 1759406625, "file_creation_time": 1759406635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 4012 microseconds, and 1511 cpu microseconds.
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.462142) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 204944 bytes OK
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.462159) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.464068) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.464083) EVENT_LOG_v1 {"time_micros": 1759406635464078, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.464105) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 308433, prev total WAL file size 308433, number of live WAL files 2.
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.465031) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(200KB)], [33(9359KB)]
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635465088, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 9788905, "oldest_snapshot_seqno": -1}
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 4013 keys, 7761775 bytes, temperature: kUnknown
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635509518, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 7761775, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7733892, "index_size": 16765, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 101547, "raw_average_key_size": 25, "raw_value_size": 7659904, "raw_average_value_size": 1908, "num_data_blocks": 691, "num_entries": 4013, "num_filter_entries": 4013, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759406635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.509763) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7761775 bytes
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.511036) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 220.0 rd, 174.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(85.6) write-amplify(37.9) OK, records in: 4523, records dropped: 510 output_compression: NoCompression
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.511057) EVENT_LOG_v1 {"time_micros": 1759406635511047, "job": 18, "event": "compaction_finished", "compaction_time_micros": 44496, "compaction_time_cpu_micros": 16029, "output_level": 6, "num_output_files": 1, "total_output_size": 7761775, "num_input_records": 4523, "num_output_records": 4013, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635511226, "job": 18, "event": "table_file_deletion", "file_number": 35}
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635513047, "job": 18, "event": "table_file_deletion", "file_number": 33}
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.464975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.513115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.513120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.513122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.513125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.513128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:03:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:03:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:55.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:03:55 np0005466030 podman[219637]: 2025-10-02 12:03:55.852032483 +0000 UTC m=+0.097991260 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:03:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:56.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:03:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:03:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:57.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:03:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:58.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:03:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:03:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:03:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:59.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:00.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:01 np0005466030 podman[219665]: 2025-10-02 12:04:01.55502999 +0000 UTC m=+0.063628381 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:04:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:01.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:04:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:02.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:04:03 np0005466030 python3.9[219811]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:04:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:03.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:03 np0005466030 python3.9[219963]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:04:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:04:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:04.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:04:04 np0005466030 python3.9[220115]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:04:05 np0005466030 python3.9[220267]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:04:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:05.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:05 np0005466030 python3.9[220419]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:04:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:04:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:06.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:04:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:07 np0005466030 python3.9[220571]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:04:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:07.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:04:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:08.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:04:08 np0005466030 python3.9[220723]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  2 08:04:09 np0005466030 python3.9[220875]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 08:04:09 np0005466030 systemd[1]: Reloading.
Oct  2 08:04:09 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:04:09 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:04:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:09.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:10.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:10 np0005466030 python3.9[221062]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:04:11 np0005466030 podman[221187]: 2025-10-02 12:04:11.219687657 +0000 UTC m=+0.066333675 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:04:11 np0005466030 python3.9[221232]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:04:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:11.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:12 np0005466030 python3.9[221388]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:04:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:12.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:12 np0005466030 python3.9[221541]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:04:12 np0005466030 podman[221567]: 2025-10-02 12:04:12.799361084 +0000 UTC m=+0.054465782 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:04:13 np0005466030 python3.9[221713]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:04:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:13.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:13 np0005466030 python3.9[221866]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:04:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:14.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:14 np0005466030 python3.9[222019]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:04:15 np0005466030 python3.9[222172]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  2 08:04:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:15.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:16.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:17.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:18.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:19 np0005466030 python3.9[222437]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:19 np0005466030 python3.9[222606]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:04:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:04:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:04:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:19.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct  2 08:04:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:20.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct  2 08:04:20 np0005466030 python3.9[222758]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:21 np0005466030 python3.9[222910]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:04:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:21.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:04:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:22.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:22 np0005466030 python3.9[223062]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:22 np0005466030 python3.9[223214]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:23 np0005466030 python3.9[223366]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct  2 08:04:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:23.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct  2 08:04:23 np0005466030 python3.9[223518]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:24.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:24 np0005466030 python3.9[223670]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:25 np0005466030 python3.9[223822]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:25 np0005466030 python3.9[223974]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:25.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:04:25.904 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:04:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:04:25.905 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:04:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:04:25.905 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:04:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:26.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:26 np0005466030 podman[224121]: 2025-10-02 12:04:26.201247261 +0000 UTC m=+0.097917018 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:04:26 np0005466030 python3.9[224193]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:04:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:04:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:27.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:04:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:28.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:04:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:04:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:29.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:04:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:30.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:31 np0005466030 podman[224226]: 2025-10-02 12:04:31.799510457 +0000 UTC m=+0.057814697 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:04:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:31.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:32.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:33 np0005466030 python3.9[224372]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct  2 08:04:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:33.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:34 np0005466030 python3.9[224525]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  2 08:04:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:34.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:35 np0005466030 python3.9[224683]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  2 08:04:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:35.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:36.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:36 np0005466030 systemd-logind[795]: New session 52 of user zuul.
Oct  2 08:04:36 np0005466030 systemd[1]: Started Session 52 of User zuul.
Oct  2 08:04:36 np0005466030 systemd[1]: session-52.scope: Deactivated successfully.
Oct  2 08:04:36 np0005466030 systemd-logind[795]: Session 52 logged out. Waiting for processes to exit.
Oct  2 08:04:36 np0005466030 systemd-logind[795]: Removed session 52.
Oct  2 08:04:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:36 np0005466030 python3.9[224869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:04:37 np0005466030 python3.9[224990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406676.5179965-4357-183356514533463/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:37.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:38.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:38 np0005466030 python3.9[225140]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:04:38 np0005466030 python3.9[225216]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:39 np0005466030 python3.9[225366]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:04:39 np0005466030 python3.9[225487]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406678.7337186-4357-233347205498610/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:39.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:40.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:40 np0005466030 python3.9[225637]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:04:40 np0005466030 python3.9[225758]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406679.926514-4357-243549155125228/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:41 np0005466030 podman[225882]: 2025-10-02 12:04:41.455225594 +0000 UTC m=+0.078337272 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:04:41 np0005466030 python3.9[225919]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:04:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:41.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:42.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:42 np0005466030 python3.9[226048]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406681.1348033-4357-192622436252957/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:43 np0005466030 podman[226172]: 2025-10-02 12:04:43.351236769 +0000 UTC m=+0.053562234 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:04:43 np0005466030 python3.9[226218]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:04:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:43.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:44.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:44 np0005466030 python3.9[226370]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:04:45 np0005466030 python3.9[226522]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:04:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:04:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:45.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:04:46 np0005466030 python3.9[226674]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:04:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:46.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:46 np0005466030 python3.9[226797]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759406685.4838111-4636-31294943551518/.source _original_basename=.84wwjf3m follow=False checksum=62bef941a60b6885f43fe43facda790502381dd3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct  2 08:04:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:47 np0005466030 python3.9[226949]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:04:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:04:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:47.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:04:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:04:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:48.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:04:48 np0005466030 python3.9[227101]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:04:48 np0005466030 python3.9[227222]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406687.763834-4714-245288903738854/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:49 np0005466030 python3.9[227372]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  2 08:04:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:04:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:49.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:04:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:04:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:50.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:04:50 np0005466030 python3.9[227493]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406689.2430182-4759-239984502605595/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  2 08:04:51 np0005466030 python3.9[227645]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct  2 08:04:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:51.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:52 np0005466030 python3.9[227797]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 08:04:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:52.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:52 np0005466030 python3[227949]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 08:04:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:53.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:54.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:55.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:04:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:56.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:04:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:04:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:57.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:04:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:58.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:04:58 np0005466030 podman[228002]: 2025-10-02 12:04:58.615286729 +0000 UTC m=+1.863103263 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:04:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:04:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:04:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:59.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:00.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:01.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:02.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:02 np0005466030 podman[228050]: 2025-10-02 12:05:02.948065747 +0000 UTC m=+0.205637670 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 08:05:03 np0005466030 podman[227963]: 2025-10-02 12:05:03.135753964 +0000 UTC m=+10.076548458 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct  2 08:05:03 np0005466030 podman[228092]: 2025-10-02 12:05:03.265196391 +0000 UTC m=+0.022581140 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct  2 08:05:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:03.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:03 np0005466030 podman[228092]: 2025-10-02 12:05:03.941732665 +0000 UTC m=+0.699117414 container create 6ce21e5703e43134d3d0ff907881606807c49fc0032a12b5ba846274498709dc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:05:03 np0005466030 python3[227949]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct  2 08:05:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:04.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:04 np0005466030 python3.9[228282]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:05:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:05.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:05 np0005466030 python3.9[228436]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct  2 08:05:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:06.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:06 np0005466030 python3.9[228588]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  2 08:05:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:07.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:07 np0005466030 python3[228740]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct  2 08:05:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:08 np0005466030 podman[228778]: 2025-10-02 12:05:08.092197227 +0000 UTC m=+0.065200169 container create 57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 08:05:08 np0005466030 podman[228778]: 2025-10-02 12:05:08.056727253 +0000 UTC m=+0.029730195 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct  2 08:05:08 np0005466030 python3[228740]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Oct  2 08:05:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:08.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:09.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:10 np0005466030 python3.9[228968]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:05:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:10.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:11 np0005466030 python3.9[229122]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:05:11 np0005466030 podman[229245]: 2025-10-02 12:05:11.735180446 +0000 UTC m=+0.061625547 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:05:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:11.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:11 np0005466030 python3.9[229292]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406711.2994633-5035-126091902027174/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  2 08:05:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:12.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:12 np0005466030 python3.9[229370]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  2 08:05:12 np0005466030 systemd[1]: Reloading.
Oct  2 08:05:12 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:05:12 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:05:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:13 np0005466030 python3.9[229481]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  2 08:05:13 np0005466030 systemd[1]: Reloading.
Oct  2 08:05:13 np0005466030 podman[229483]: 2025-10-02 12:05:13.54678091 +0000 UTC m=+0.066707467 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:05:13 np0005466030 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  2 08:05:13 np0005466030 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  2 08:05:13 np0005466030 systemd[1]: Starting nova_compute container...
Oct  2 08:05:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:13.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:13 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:05:13 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:13 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:13 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:13 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:13 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:13 np0005466030 podman[229539]: 2025-10-02 12:05:13.959577809 +0000 UTC m=+0.105803485 container init 57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:05:13 np0005466030 podman[229539]: 2025-10-02 12:05:13.964997888 +0000 UTC m=+0.111223544 container start 57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:05:13 np0005466030 podman[229539]: nova_compute
Oct  2 08:05:13 np0005466030 nova_compute[229555]: + sudo -E kolla_set_configs
Oct  2 08:05:13 np0005466030 systemd[1]: Started nova_compute container.
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Validating config file
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Copying service configuration files
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Deleting /etc/ceph
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Creating directory /etc/ceph
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Setting permission for /etc/ceph
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Writing out command to execute
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  2 08:05:14 np0005466030 nova_compute[229555]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  2 08:05:14 np0005466030 nova_compute[229555]: ++ cat /run_command
Oct  2 08:05:14 np0005466030 nova_compute[229555]: + CMD=nova-compute
Oct  2 08:05:14 np0005466030 nova_compute[229555]: + ARGS=
Oct  2 08:05:14 np0005466030 nova_compute[229555]: + sudo kolla_copy_cacerts
Oct  2 08:05:14 np0005466030 nova_compute[229555]: + [[ ! -n '' ]]
Oct  2 08:05:14 np0005466030 nova_compute[229555]: + . kolla_extend_start
Oct  2 08:05:14 np0005466030 nova_compute[229555]: Running command: 'nova-compute'
Oct  2 08:05:14 np0005466030 nova_compute[229555]: + echo 'Running command: '\''nova-compute'\'''
Oct  2 08:05:14 np0005466030 nova_compute[229555]: + umask 0022
Oct  2 08:05:14 np0005466030 nova_compute[229555]: + exec nova-compute
Oct  2 08:05:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:14.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:15 np0005466030 python3.9[229717]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:05:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:15.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:16.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:16 np0005466030 nova_compute[229555]: 2025-10-02 12:05:16.480 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 08:05:16 np0005466030 nova_compute[229555]: 2025-10-02 12:05:16.480 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 08:05:16 np0005466030 nova_compute[229555]: 2025-10-02 12:05:16.481 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 08:05:16 np0005466030 nova_compute[229555]: 2025-10-02 12:05:16.481 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  2 08:05:16 np0005466030 python3.9[229869]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:05:16 np0005466030 nova_compute[229555]: 2025-10-02 12:05:16.740 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:05:16 np0005466030 nova_compute[229555]: 2025-10-02 12:05:16.759 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.414 2 INFO nova.virt.driver [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.547 2 INFO nova.compute.provider_config [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.563 2 DEBUG oslo_concurrency.lockutils [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.564 2 DEBUG oslo_concurrency.lockutils [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.564 2 DEBUG oslo_concurrency.lockutils [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.564 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.565 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.565 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.565 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.565 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.565 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.566 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.566 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.566 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.566 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.566 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.567 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.567 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.567 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.567 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.568 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.568 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.568 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.568 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.569 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.569 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.569 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.569 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.569 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.570 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.570 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.570 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.570 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.570 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.570 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.571 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.571 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.571 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.571 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.571 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.572 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.572 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.572 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.572 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.572 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.573 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.573 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.573 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.573 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.574 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.574 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.574 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.574 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.574 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.574 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.575 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.575 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.575 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.575 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.575 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.576 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.577 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.577 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.578 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.578 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.578 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.578 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.579 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.579 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.579 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.579 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.580 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.580 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.580 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.580 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.580 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.581 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.581 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.581 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.581 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.581 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.582 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.582 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.582 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.582 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.582 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.583 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.583 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.583 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.583 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.583 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.583 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.583 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.584 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.584 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.584 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.584 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.584 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.584 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.585 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.585 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.585 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.585 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.585 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.585 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.586 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.586 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.586 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.586 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.586 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.586 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.586 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.587 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.587 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.587 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.587 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.587 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.587 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.587 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.588 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.588 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.588 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.588 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.588 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.588 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.589 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.589 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.589 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.589 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.589 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.589 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.590 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.590 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.590 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.590 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.590 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.590 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.590 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.590 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.591 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.591 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.591 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.591 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.591 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.591 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 python3.9[230021]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.592 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.592 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.592 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.592 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.592 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.593 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.593 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.593 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.593 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.593 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.593 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.594 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.594 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.594 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.594 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.594 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.594 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.595 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.595 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.595 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.595 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.595 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.595 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.595 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.596 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.596 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.596 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.596 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.596 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.596 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.596 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.597 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.597 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.597 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.597 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.597 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.597 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.598 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.598 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.598 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.598 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.598 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.599 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.599 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.599 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.599 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.599 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.599 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.600 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.600 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.600 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.600 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.600 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.601 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.601 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.601 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.601 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.601 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.602 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.602 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.602 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.602 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.602 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.602 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.603 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.603 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.603 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.603 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.603 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.603 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.603 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.604 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.604 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.604 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.604 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.604 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.604 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.605 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.605 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.605 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.605 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.605 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.606 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.606 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.606 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.606 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.606 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.607 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.607 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.607 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.607 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.607 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.607 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.607 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.608 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.608 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.608 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.608 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.608 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.608 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.608 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.609 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.609 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.609 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.609 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.609 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.609 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.610 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.610 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.610 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.610 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.610 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.610 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.610 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.611 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.611 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.611 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.611 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.611 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.611 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.612 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.612 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.612 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.612 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.612 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.612 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.613 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.613 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.613 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.613 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.613 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.613 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.614 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.614 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.614 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.614 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.614 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.614 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.614 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.615 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.615 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.615 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.615 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.615 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.615 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.615 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.616 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.616 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.616 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.616 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.616 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.616 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.616 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.617 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.617 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.617 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.617 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.617 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.617 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.618 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.618 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.618 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.618 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.618 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.618 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.618 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.619 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.619 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.619 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.619 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.619 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.619 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.619 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.619 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.620 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.620 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.620 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.620 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.620 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.620 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.620 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.621 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.621 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.621 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.621 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.621 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.621 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.621 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.621 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.622 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.622 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.622 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.622 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.622 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.622 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.622 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.623 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.623 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.623 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.623 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.623 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.623 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.623 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.624 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.624 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.624 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.624 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.624 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.624 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.624 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.625 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.625 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.625 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.625 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.625 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.625 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.626 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.626 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.626 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.626 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.626 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.626 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.627 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.627 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.627 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.627 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.627 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.627 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.627 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.628 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.628 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.628 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.628 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.628 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.628 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.628 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.628 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.629 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.629 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.629 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.629 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.629 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.629 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.629 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.630 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.630 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.630 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.630 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.630 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.630 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.630 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.630 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.631 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.631 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.631 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.631 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.631 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.631 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.632 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.632 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.632 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.632 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.632 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.632 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.632 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.633 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.633 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.633 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.633 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.633 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.633 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.633 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.633 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.634 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.634 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.634 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.634 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.634 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.634 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.634 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.635 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.635 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.635 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.635 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.635 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.635 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.635 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.636 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.636 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.636 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.636 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.636 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.636 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.636 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.637 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.637 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.637 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.637 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.637 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.637 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.637 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.638 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.638 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.638 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.638 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.638 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.638 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.638 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.639 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.639 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.639 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.639 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.640 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.640 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.640 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.641 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.641 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.641 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.641 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.642 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.642 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.642 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.642 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.642 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.642 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.643 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.643 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.643 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.644 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.644 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.644 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.645 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.645 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.645 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.645 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.645 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.645 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.646 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.646 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.646 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.646 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.646 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.646 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.647 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.647 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.647 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.647 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.647 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.647 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.647 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.648 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.648 2 WARNING oslo_config.cfg [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  2 08:05:17 np0005466030 nova_compute[229555]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  2 08:05:17 np0005466030 nova_compute[229555]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  2 08:05:17 np0005466030 nova_compute[229555]: and ``live_migration_inbound_addr`` respectively.
Oct  2 08:05:17 np0005466030 nova_compute[229555]: ).  Its value may be silently ignored in the future.#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.648 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.648 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.648 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.649 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.649 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.649 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.649 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.649 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.649 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.649 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.650 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.650 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.650 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.650 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.650 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.650 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.650 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.651 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.651 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rbd_secret_uuid        = 20fdc58c-b037-5094-a8ef-d490aa7c36f3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.651 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.651 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.651 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.652 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.652 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.652 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.652 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.652 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.652 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.652 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.653 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.653 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.653 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.653 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.653 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.653 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.653 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.654 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.654 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.654 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.654 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.654 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.654 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.654 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.655 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.655 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.655 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.655 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.655 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.655 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.655 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.656 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.656 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.656 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.656 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.656 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.657 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.657 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.657 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.657 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.657 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.657 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.657 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.658 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.658 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.658 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.658 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.658 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.658 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.658 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.659 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.659 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.659 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.659 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.659 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.659 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.659 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.660 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.660 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.660 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.660 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.660 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.660 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.660 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.661 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.661 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.661 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.661 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.661 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.661 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.661 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.662 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.662 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.662 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.662 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.662 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.662 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.662 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.662 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.663 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.663 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.663 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.663 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.663 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.663 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.663 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.664 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.664 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.664 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.664 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.664 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.664 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.664 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.665 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.665 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.665 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.665 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.665 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.665 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.665 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.665 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.666 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.666 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.666 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.666 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.666 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.666 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.666 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.667 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.667 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.667 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.667 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.667 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.667 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.667 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.668 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.668 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.668 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.668 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.668 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.669 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.669 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.669 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.669 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.670 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.670 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.670 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.670 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.670 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.670 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.671 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.671 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.671 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.671 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.672 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.672 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.672 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.672 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.672 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.673 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.673 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.673 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.673 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.673 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.674 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.674 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.674 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.674 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.674 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.675 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.675 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.675 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.675 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.675 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.675 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.676 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.676 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.676 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.676 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.676 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.677 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.677 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.677 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.677 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.677 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.677 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.678 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.678 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.678 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.678 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.678 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.678 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.679 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.679 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.679 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.679 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.679 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.679 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.680 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.680 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.680 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.680 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.680 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.680 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.681 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.681 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.681 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.681 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.681 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.681 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.681 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.682 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.682 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.682 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.682 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.682 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.682 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.682 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.682 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.683 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.683 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.683 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.683 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.683 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.684 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.684 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.684 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.684 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.684 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.684 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.684 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.685 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.685 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.685 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.685 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.685 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.685 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.685 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.686 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.686 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.686 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.686 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.686 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.686 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.687 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.687 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.687 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.687 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.687 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.687 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.688 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.688 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.688 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.688 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.688 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.689 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.689 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.689 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.689 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.689 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.689 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.689 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.690 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.690 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.690 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.690 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.690 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.690 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.690 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.691 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.691 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.691 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.691 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.691 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.691 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.691 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.692 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.692 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.692 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.692 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.692 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.692 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.692 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.693 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.693 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.693 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.693 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.693 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.693 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.693 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.694 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.694 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.694 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.694 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.694 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.694 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.695 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.695 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.695 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.695 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.695 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.695 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.695 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.696 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.696 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.696 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.696 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.696 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.696 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.697 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.697 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.697 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.697 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.697 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.698 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.698 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.698 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.698 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.698 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.698 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.699 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.699 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.699 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.699 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.699 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.699 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.700 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.700 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.700 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.700 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.700 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.700 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.701 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.701 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.701 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.701 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.701 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.701 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.701 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.702 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.702 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.702 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.702 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.702 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.702 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.702 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.703 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.703 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.703 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.703 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.703 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.703 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.703 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.704 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.704 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.704 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.704 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.704 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.704 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.704 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.705 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.705 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.705 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.705 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.705 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.706 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.706 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.706 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.706 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.706 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.706 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.706 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.706 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.707 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.707 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.707 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.707 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.707 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.707 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.708 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.708 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.708 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.708 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.708 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.708 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.708 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.709 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.709 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.709 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.709 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.709 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.709 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.709 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.710 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.710 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.710 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.710 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.710 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.710 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.710 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.711 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.711 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.711 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.711 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.711 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.711 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.711 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.712 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.712 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.712 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.712 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.712 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.712 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.713 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.713 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.713 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.713 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.713 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.713 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.714 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.714 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.714 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.714 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.714 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.714 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.715 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.732 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.733 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.733 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.733 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  2 08:05:17 np0005466030 systemd[1]: Starting libvirt QEMU daemon...
Oct  2 08:05:17 np0005466030 systemd[1]: Started libvirt QEMU daemon.
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.809 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f03552825b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.812 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f03552825b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.812 2 INFO nova.virt.libvirt.driver [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.836 2 WARNING nova.virt.libvirt.driver [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Oct  2 08:05:17 np0005466030 nova_compute[229555]: 2025-10-02 12:05:17.836 2 DEBUG nova.virt.libvirt.volume.mount [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct  2 08:05:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:17.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:18.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:18 np0005466030 python3.9[230233]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.592 2 INFO nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Libvirt host capabilities <capabilities>
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 
Oct  2 08:05:18 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <host>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <uuid>5d5cabb1-2c53-462b-89f3-16d4280c3e4c</uuid>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <cpu>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <arch>x86_64</arch>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model>EPYC-Rome-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <vendor>AMD</vendor>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <microcode version='16777317'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <signature family='23' model='49' stepping='0'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <maxphysaddr mode='emulate' bits='40'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='x2apic'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='tsc-deadline'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='osxsave'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='hypervisor'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='tsc_adjust'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='spec-ctrl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='stibp'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='arch-capabilities'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='ssbd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='cmp_legacy'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='topoext'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='virt-ssbd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='lbrv'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='tsc-scale'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='vmcb-clean'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='pause-filter'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='pfthreshold'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='svme-addr-chk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='rdctl-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='skip-l1dfl-vmentry'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='mds-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature name='pschange-mc-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <pages unit='KiB' size='4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <pages unit='KiB' size='2048'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <pages unit='KiB' size='1048576'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </cpu>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <power_management>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <suspend_mem/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </power_management>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <iommu support='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <migration_features>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <live/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <uri_transports>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <uri_transport>tcp</uri_transport>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <uri_transport>rdma</uri_transport>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </uri_transports>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </migration_features>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <topology>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <cells num='1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <cell id='0'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:          <memory unit='KiB'>7864104</memory>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:          <pages unit='KiB' size='4'>1966026</pages>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:          <pages unit='KiB' size='2048'>0</pages>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:          <distances>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:            <sibling id='0' value='10'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:          </distances>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:          <cpus num='8'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:          </cpus>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        </cell>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </cells>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </topology>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <cache>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </cache>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <secmodel>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model>selinux</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <doi>0</doi>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </secmodel>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <secmodel>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model>dac</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <doi>0</doi>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </secmodel>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </host>
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <guest>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <os_type>hvm</os_type>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <arch name='i686'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <wordsize>32</wordsize>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <domain type='qemu'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <domain type='kvm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </arch>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <features>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <pae/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <nonpae/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <acpi default='on' toggle='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <apic default='on' toggle='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <cpuselection/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <deviceboot/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <disksnapshot default='on' toggle='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <externalSnapshot/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </features>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </guest>
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <guest>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <os_type>hvm</os_type>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <arch name='x86_64'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <wordsize>64</wordsize>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <domain type='qemu'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <domain type='kvm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </arch>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <features>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <acpi default='on' toggle='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <apic default='on' toggle='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <cpuselection/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <deviceboot/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <disksnapshot default='on' toggle='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <externalSnapshot/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </features>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </guest>
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 
Oct  2 08:05:18 np0005466030 nova_compute[229555]: </capabilities>
Oct  2 08:05:18 np0005466030 nova_compute[229555]: #033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.600 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.628 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  2 08:05:18 np0005466030 nova_compute[229555]: <domainCapabilities>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <domain>kvm</domain>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <arch>i686</arch>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <vcpu max='240'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <iothreads supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <os supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <enum name='firmware'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <loader supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='type'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>rom</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>pflash</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='readonly'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>yes</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>no</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='secure'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>no</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </loader>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </os>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <cpu>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <mode name='host-passthrough' supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='hostPassthroughMigratable'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>on</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>off</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </mode>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <mode name='maximum' supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='maximumMigratable'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>on</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>off</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </mode>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <mode name='host-model' supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <vendor>AMD</vendor>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='x2apic'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='hypervisor'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='stibp'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='ssbd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='overflow-recov'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='succor'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='ibrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='lbrv'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='tsc-scale'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='flushbyasid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='pause-filter'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='pfthreshold'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='rdctl-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='mds-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='gds-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='rfds-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='disable' name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </mode>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <mode name='custom' supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-noTSX'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cooperlake'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cooperlake-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cooperlake-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Denverton'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mpx'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Denverton-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mpx'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Denverton-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Denverton-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Dhyana-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Genoa'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amd-psfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='auto-ibrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='stibp-always-on'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amd-psfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='auto-ibrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='stibp-always-on'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Milan'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Milan-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Milan-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amd-psfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='stibp-always-on'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Rome'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Rome-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Rome-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Rome-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='GraniteRapids'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='prefetchiti'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='GraniteRapids-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='prefetchiti'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='GraniteRapids-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx10'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx10-128'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx10-256'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx10-512'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='prefetchiti'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-noTSX'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v5'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v6'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v7'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='IvyBridge'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='IvyBridge-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='IvyBridge-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='IvyBridge-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='KnightsMill'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512er'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512pf'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='KnightsMill-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512er'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512pf'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Opteron_G4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fma4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xop'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Opteron_G4-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fma4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xop'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Opteron_G5'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fma4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tbm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xop'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Opteron_G5-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fma4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tbm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xop'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SapphireRapids'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SapphireRapids-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SapphireRapids-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SapphireRapids-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SierraForest'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cmpccxadd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SierraForest-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cmpccxadd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v5'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='core-capability'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mpx'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='split-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='core-capability'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mpx'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='split-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='core-capability'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='split-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='core-capability'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='split-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='athlon'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnow'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnowext'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='athlon-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnow'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnowext'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='core2duo'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='core2duo-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='coreduo'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='coreduo-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='n270'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='n270-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='phenom'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnow'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnowext'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='phenom-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnow'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnowext'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </mode>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </cpu>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <memoryBacking supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <enum name='sourceType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>file</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>anonymous</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>memfd</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </memoryBacking>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <devices>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <disk supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='diskDevice'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>disk</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>cdrom</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>floppy</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>lun</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='bus'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>ide</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>fdc</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>scsi</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>usb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>sata</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio-transitional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio-non-transitional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </disk>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <graphics supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='type'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vnc</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>egl-headless</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>dbus</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </graphics>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <video supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='modelType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vga</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>cirrus</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>none</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>bochs</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>ramfb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </video>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <hostdev supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='mode'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>subsystem</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='startupPolicy'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>default</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>mandatory</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>requisite</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>optional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='subsysType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>usb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>pci</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>scsi</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='capsType'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='pciBackend'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </hostdev>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <rng supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio-transitional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio-non-transitional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendModel'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>random</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>egd</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>builtin</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </rng>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <filesystem supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='driverType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>path</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>handle</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtiofs</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </filesystem>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <tpm supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>tpm-tis</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>tpm-crb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendModel'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>emulator</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>external</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendVersion'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>2.0</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </tpm>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <redirdev supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='bus'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>usb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </redirdev>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <channel supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='type'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>pty</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>unix</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </channel>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <crypto supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='type'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>qemu</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendModel'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>builtin</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </crypto>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <interface supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>default</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>passt</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </interface>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <panic supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>isa</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>hyperv</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </panic>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </devices>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <features>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <gic supported='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <vmcoreinfo supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <genid supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <backingStoreInput supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <backup supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <async-teardown supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <ps2 supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <sev supported='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <sgx supported='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <hyperv supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='features'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>relaxed</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vapic</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>spinlocks</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vpindex</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>runtime</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>synic</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>stimer</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>reset</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vendor_id</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>frequencies</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>reenlightenment</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>tlbflush</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>ipi</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>avic</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>emsr_bitmap</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>xmm_input</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </hyperv>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <launchSecurity supported='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </features>
Oct  2 08:05:18 np0005466030 nova_compute[229555]: </domainCapabilities>
Oct  2 08:05:18 np0005466030 nova_compute[229555]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.635 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  2 08:05:18 np0005466030 nova_compute[229555]: <domainCapabilities>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <domain>kvm</domain>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <arch>i686</arch>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <vcpu max='4096'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <iothreads supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <os supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <enum name='firmware'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <loader supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='type'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>rom</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>pflash</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='readonly'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>yes</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>no</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='secure'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>no</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </loader>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </os>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <cpu>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <mode name='host-passthrough' supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='hostPassthroughMigratable'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>on</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>off</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </mode>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <mode name='maximum' supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='maximumMigratable'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>on</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>off</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </mode>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <mode name='host-model' supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <vendor>AMD</vendor>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='x2apic'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='hypervisor'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='stibp'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='ssbd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='overflow-recov'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='succor'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='ibrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='lbrv'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='tsc-scale'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='flushbyasid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='pause-filter'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='pfthreshold'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='rdctl-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='mds-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='gds-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='rfds-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='disable' name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </mode>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <mode name='custom' supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-noTSX'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cooperlake'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cooperlake-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cooperlake-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Denverton'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mpx'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Denverton-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mpx'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Denverton-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Denverton-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Dhyana-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Genoa'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amd-psfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='auto-ibrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='stibp-always-on'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amd-psfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='auto-ibrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='stibp-always-on'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Milan'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Milan-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Milan-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amd-psfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='stibp-always-on'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Rome'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Rome-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Rome-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Rome-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='GraniteRapids'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='prefetchiti'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='GraniteRapids-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='prefetchiti'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='GraniteRapids-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx10'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx10-128'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx10-256'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx10-512'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='prefetchiti'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-noTSX'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v5'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v6'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v7'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='IvyBridge'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='IvyBridge-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='IvyBridge-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='IvyBridge-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='KnightsMill'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512er'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512pf'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='KnightsMill-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512er'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512pf'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Opteron_G4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fma4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xop'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Opteron_G4-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fma4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xop'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Opteron_G5'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fma4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tbm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xop'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Opteron_G5-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fma4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tbm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xop'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SapphireRapids'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SapphireRapids-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SapphireRapids-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SapphireRapids-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SierraForest'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cmpccxadd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SierraForest-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cmpccxadd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v5'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='core-capability'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mpx'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='split-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='core-capability'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mpx'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='split-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='core-capability'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='split-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='core-capability'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='split-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='athlon'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnow'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnowext'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='athlon-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnow'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnowext'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='core2duo'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='core2duo-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='coreduo'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='coreduo-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='n270'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='n270-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='phenom'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnow'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnowext'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='phenom-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnow'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnowext'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </mode>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </cpu>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <memoryBacking supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <enum name='sourceType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>file</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>anonymous</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>memfd</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </memoryBacking>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <devices>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <disk supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='diskDevice'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>disk</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>cdrom</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>floppy</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>lun</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='bus'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>fdc</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>scsi</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>usb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>sata</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio-transitional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio-non-transitional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </disk>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <graphics supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='type'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vnc</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>egl-headless</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>dbus</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </graphics>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <video supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='modelType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vga</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>cirrus</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>none</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>bochs</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>ramfb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </video>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <hostdev supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='mode'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>subsystem</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='startupPolicy'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>default</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>mandatory</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>requisite</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>optional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='subsysType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>usb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>pci</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>scsi</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='capsType'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='pciBackend'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </hostdev>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <rng supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio-transitional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio-non-transitional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendModel'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>random</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>egd</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>builtin</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </rng>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <filesystem supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='driverType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>path</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>handle</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtiofs</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </filesystem>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <tpm supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>tpm-tis</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>tpm-crb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendModel'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>emulator</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>external</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendVersion'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>2.0</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </tpm>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <redirdev supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='bus'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>usb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </redirdev>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <channel supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='type'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>pty</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>unix</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </channel>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <crypto supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='type'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>qemu</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendModel'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>builtin</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </crypto>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <interface supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>default</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>passt</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </interface>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <panic supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>isa</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>hyperv</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </panic>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </devices>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <features>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <gic supported='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <vmcoreinfo supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <genid supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <backingStoreInput supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <backup supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <async-teardown supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <ps2 supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <sev supported='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <sgx supported='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <hyperv supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='features'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>relaxed</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vapic</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>spinlocks</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vpindex</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>runtime</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>synic</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>stimer</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>reset</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vendor_id</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>frequencies</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>reenlightenment</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>tlbflush</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>ipi</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>avic</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>emsr_bitmap</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>xmm_input</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </hyperv>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <launchSecurity supported='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </features>
Oct  2 08:05:18 np0005466030 nova_compute[229555]: </domainCapabilities>
Oct  2 08:05:18 np0005466030 nova_compute[229555]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.662 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.666 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  2 08:05:18 np0005466030 nova_compute[229555]: <domainCapabilities>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <domain>kvm</domain>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <arch>x86_64</arch>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <vcpu max='240'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <iothreads supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <os supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <enum name='firmware'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <loader supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='type'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>rom</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>pflash</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='readonly'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>yes</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>no</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='secure'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>no</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </loader>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </os>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <cpu>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <mode name='host-passthrough' supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='hostPassthroughMigratable'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>on</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>off</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </mode>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <mode name='maximum' supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='maximumMigratable'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>on</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>off</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </mode>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <mode name='host-model' supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <vendor>AMD</vendor>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='x2apic'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='hypervisor'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='stibp'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='ssbd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='overflow-recov'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='succor'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='ibrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='lbrv'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='tsc-scale'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='flushbyasid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='pause-filter'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='pfthreshold'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='rdctl-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='mds-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='gds-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='rfds-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='disable' name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </mode>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <mode name='custom' supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-noTSX'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cooperlake'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cooperlake-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cooperlake-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Denverton'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mpx'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Denverton-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mpx'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Denverton-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Denverton-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Dhyana-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Genoa'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amd-psfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='auto-ibrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='stibp-always-on'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amd-psfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='auto-ibrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='stibp-always-on'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Milan'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Milan-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Milan-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amd-psfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='stibp-always-on'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Rome'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Rome-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Rome-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Rome-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='GraniteRapids'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='prefetchiti'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='GraniteRapids-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='prefetchiti'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='GraniteRapids-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx10'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx10-128'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx10-256'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx10-512'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='prefetchiti'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-noTSX'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v5'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v6'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v7'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='IvyBridge'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='IvyBridge-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='IvyBridge-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='IvyBridge-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='KnightsMill'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512er'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512pf'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='KnightsMill-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512er'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512pf'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Opteron_G4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fma4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xop'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Opteron_G4-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fma4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xop'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Opteron_G5'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fma4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tbm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xop'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Opteron_G5-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fma4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tbm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xop'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SapphireRapids'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SapphireRapids-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SapphireRapids-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SapphireRapids-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SierraForest'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cmpccxadd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SierraForest-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cmpccxadd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v5'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='core-capability'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mpx'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='split-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='core-capability'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mpx'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='split-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='core-capability'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='split-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='core-capability'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='split-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='athlon'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnow'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnowext'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='athlon-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnow'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnowext'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='core2duo'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='core2duo-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='coreduo'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='coreduo-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='n270'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='n270-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='phenom'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnow'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnowext'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='phenom-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnow'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnowext'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </mode>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </cpu>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <memoryBacking supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <enum name='sourceType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>file</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>anonymous</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>memfd</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </memoryBacking>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <devices>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <disk supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='diskDevice'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>disk</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>cdrom</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>floppy</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>lun</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='bus'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>ide</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>fdc</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>scsi</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>usb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>sata</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio-transitional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio-non-transitional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </disk>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <graphics supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='type'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vnc</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>egl-headless</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>dbus</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </graphics>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <video supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='modelType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vga</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>cirrus</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>none</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>bochs</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>ramfb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </video>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <hostdev supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='mode'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>subsystem</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='startupPolicy'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>default</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>mandatory</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>requisite</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>optional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='subsysType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>usb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>pci</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>scsi</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='capsType'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='pciBackend'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </hostdev>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <rng supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio-transitional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio-non-transitional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendModel'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>random</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>egd</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>builtin</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </rng>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <filesystem supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='driverType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>path</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>handle</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtiofs</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </filesystem>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <tpm supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>tpm-tis</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>tpm-crb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendModel'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>emulator</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>external</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendVersion'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>2.0</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </tpm>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <redirdev supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='bus'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>usb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </redirdev>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <channel supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='type'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>pty</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>unix</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </channel>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <crypto supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='type'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>qemu</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendModel'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>builtin</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </crypto>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <interface supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>default</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>passt</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </interface>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <panic supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>isa</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>hyperv</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </panic>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </devices>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <features>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <gic supported='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <vmcoreinfo supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <genid supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <backingStoreInput supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <backup supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <async-teardown supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <ps2 supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <sev supported='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <sgx supported='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <hyperv supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='features'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>relaxed</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vapic</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>spinlocks</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vpindex</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>runtime</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>synic</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>stimer</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>reset</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vendor_id</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>frequencies</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>reenlightenment</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>tlbflush</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>ipi</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>avic</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>emsr_bitmap</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>xmm_input</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </hyperv>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <launchSecurity supported='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </features>
Oct  2 08:05:18 np0005466030 nova_compute[229555]: </domainCapabilities>
Oct  2 08:05:18 np0005466030 nova_compute[229555]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.722 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  2 08:05:18 np0005466030 nova_compute[229555]: <domainCapabilities>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <domain>kvm</domain>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <arch>x86_64</arch>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <vcpu max='4096'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <iothreads supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <os supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <enum name='firmware'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>efi</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <loader supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='type'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>rom</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>pflash</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='readonly'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>yes</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>no</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='secure'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>yes</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>no</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </loader>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </os>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <cpu>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <mode name='host-passthrough' supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='hostPassthroughMigratable'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>on</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>off</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </mode>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <mode name='maximum' supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='maximumMigratable'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>on</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>off</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </mode>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <mode name='host-model' supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <vendor>AMD</vendor>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='x2apic'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='hypervisor'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='stibp'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='ssbd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='overflow-recov'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='succor'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='ibrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='lbrv'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='tsc-scale'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='flushbyasid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='pause-filter'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='pfthreshold'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='rdctl-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='mds-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='gds-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='require' name='rfds-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <feature policy='disable' name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </mode>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <mode name='custom' supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-noTSX'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Broadwell-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cooperlake'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cooperlake-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Cooperlake-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Denverton'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mpx'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Denverton-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mpx'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Denverton-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Denverton-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Dhyana-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Genoa'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amd-psfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='auto-ibrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='stibp-always-on'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amd-psfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='auto-ibrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='stibp-always-on'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Milan'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Milan-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Milan-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amd-psfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='stibp-always-on'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Rome'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Rome-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Rome-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-Rome-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='EPYC-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='GraniteRapids'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='prefetchiti'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='GraniteRapids-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='prefetchiti'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='GraniteRapids-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx10'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx10-128'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx10-256'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx10-512'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='prefetchiti'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-noTSX'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Haswell-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v5'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v6'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Icelake-Server-v7'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='IvyBridge'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='IvyBridge-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='IvyBridge-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='IvyBridge-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='KnightsMill'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512er'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512pf'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='KnightsMill-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512er'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512pf'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Opteron_G4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fma4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xop'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Opteron_G4-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fma4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xop'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Opteron_G5'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fma4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tbm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xop'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Opteron_G5-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fma4'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tbm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xop'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SapphireRapids'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SapphireRapids-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SapphireRapids-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SapphireRapids-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='amx-tile'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-bf16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-fp16'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bitalg'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrc'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fzrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='la57'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='taa-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xfd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SierraForest'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cmpccxadd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='SierraForest-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-ifma'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cmpccxadd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fbsdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='fsrs'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ibrs-all'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mcdt-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pbrsb-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='psdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='serialize'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vaes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Client-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='hle'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='rtm'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Skylake-Server-v5'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512bw'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512cd'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512dq'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512f'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='avx512vl'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='invpcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pcid'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='pku'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='core-capability'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mpx'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='split-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='core-capability'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='mpx'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='split-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge-v2'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='core-capability'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='split-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge-v3'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='core-capability'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='split-lock-detect'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='Snowridge-v4'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='cldemote'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='erms'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='gfni'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdir64b'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='movdiri'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='xsaves'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='athlon'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnow'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnowext'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='athlon-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnow'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnowext'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='core2duo'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='core2duo-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='coreduo'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='coreduo-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='n270'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='n270-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='ss'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='phenom'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnow'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnowext'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <blockers model='phenom-v1'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnow'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <feature name='3dnowext'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </blockers>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </mode>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </cpu>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <memoryBacking supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <enum name='sourceType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>file</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>anonymous</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <value>memfd</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </memoryBacking>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <devices>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <disk supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='diskDevice'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>disk</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>cdrom</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>floppy</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>lun</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='bus'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>fdc</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>scsi</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>usb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>sata</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio-transitional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio-non-transitional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </disk>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <graphics supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='type'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vnc</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>egl-headless</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>dbus</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </graphics>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <video supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='modelType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vga</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>cirrus</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>none</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>bochs</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>ramfb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </video>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <hostdev supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='mode'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>subsystem</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='startupPolicy'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>default</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>mandatory</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>requisite</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>optional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='subsysType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>usb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>pci</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>scsi</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='capsType'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='pciBackend'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </hostdev>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <rng supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio-transitional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtio-non-transitional</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendModel'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>random</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>egd</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>builtin</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </rng>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <filesystem supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='driverType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>path</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>handle</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>virtiofs</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </filesystem>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <tpm supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>tpm-tis</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>tpm-crb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendModel'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>emulator</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>external</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendVersion'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>2.0</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </tpm>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <redirdev supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='bus'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>usb</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </redirdev>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <channel supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='type'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>pty</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>unix</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </channel>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <crypto supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='type'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>qemu</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendModel'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>builtin</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </crypto>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <interface supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='backendType'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>default</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>passt</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </interface>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <panic supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='model'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>isa</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>hyperv</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </panic>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </devices>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <features>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <gic supported='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <vmcoreinfo supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <genid supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <backingStoreInput supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <backup supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <async-teardown supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <ps2 supported='yes'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <sev supported='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <sgx supported='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <hyperv supported='yes'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      <enum name='features'>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>relaxed</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vapic</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>spinlocks</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vpindex</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>runtime</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>synic</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>stimer</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>reset</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>vendor_id</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>frequencies</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>reenlightenment</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>tlbflush</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>ipi</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>avic</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>emsr_bitmap</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:        <value>xmm_input</value>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:      </enum>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    </hyperv>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:    <launchSecurity supported='no'/>
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  </features>
Oct  2 08:05:18 np0005466030 nova_compute[229555]: </domainCapabilities>
Oct  2 08:05:18 np0005466030 nova_compute[229555]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.779 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.779 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.779 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.779 2 INFO nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Secure Boot support detected#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.781 2 INFO nova.virt.libvirt.driver [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.781 2 INFO nova.virt.libvirt.driver [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.792 2 DEBUG nova.virt.libvirt.driver [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] cpu compare xml: <cpu match="exact">
Oct  2 08:05:18 np0005466030 nova_compute[229555]:  <model>Nehalem</model>
Oct  2 08:05:18 np0005466030 nova_compute[229555]: </cpu>
Oct  2 08:05:18 np0005466030 nova_compute[229555]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.794 2 DEBUG nova.virt.libvirt.driver [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.826 2 INFO nova.virt.node [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Determined node identity 730da6ce-9754-46f0-88e3-0019d056443f from /var/lib/nova/compute_id#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.851 2 WARNING nova.compute.manager [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Compute nodes ['730da6ce-9754-46f0-88e3-0019d056443f'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.885 2 INFO nova.compute.manager [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.920 2 WARNING nova.compute.manager [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.921 2 DEBUG oslo_concurrency.lockutils [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.921 2 DEBUG oslo_concurrency.lockutils [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.921 2 DEBUG oslo_concurrency.lockutils [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.922 2 DEBUG nova.compute.resource_tracker [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:05:18 np0005466030 nova_compute[229555]: 2025-10-02 12:05:18.922 2 DEBUG oslo_concurrency.processutils [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:05:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:05:19 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3611304547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:05:19 np0005466030 nova_compute[229555]: 2025-10-02 12:05:19.371 2 DEBUG oslo_concurrency.processutils [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:05:19 np0005466030 systemd[1]: Starting libvirt nodedev daemon...
Oct  2 08:05:19 np0005466030 systemd[1]: Started libvirt nodedev daemon.
Oct  2 08:05:19 np0005466030 nova_compute[229555]: 2025-10-02 12:05:19.715 2 WARNING nova.virt.libvirt.driver [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:05:19 np0005466030 nova_compute[229555]: 2025-10-02 12:05:19.716 2 DEBUG nova.compute.resource_tracker [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5264MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:05:19 np0005466030 nova_compute[229555]: 2025-10-02 12:05:19.716 2 DEBUG oslo_concurrency.lockutils [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:05:19 np0005466030 nova_compute[229555]: 2025-10-02 12:05:19.717 2 DEBUG oslo_concurrency.lockutils [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:05:19 np0005466030 python3.9[230433]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  2 08:05:19 np0005466030 nova_compute[229555]: 2025-10-02 12:05:19.735 2 WARNING nova.compute.resource_tracker [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] No compute node record for compute-1.ctlplane.example.com:730da6ce-9754-46f0-88e3-0019d056443f: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 730da6ce-9754-46f0-88e3-0019d056443f could not be found.#033[00m
Oct  2 08:05:19 np0005466030 nova_compute[229555]: 2025-10-02 12:05:19.759 2 INFO nova.compute.resource_tracker [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: 730da6ce-9754-46f0-88e3-0019d056443f#033[00m
Oct  2 08:05:19 np0005466030 systemd[1]: Stopping nova_compute container...
Oct  2 08:05:19 np0005466030 nova_compute[229555]: 2025-10-02 12:05:19.847 2 DEBUG nova.compute.resource_tracker [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:05:19 np0005466030 nova_compute[229555]: 2025-10-02 12:05:19.848 2 DEBUG nova.compute.resource_tracker [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:05:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:19.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:19 np0005466030 nova_compute[229555]: 2025-10-02 12:05:19.981 2 DEBUG oslo_concurrency.lockutils [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.265s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:05:19 np0005466030 nova_compute[229555]: 2025-10-02 12:05:19.982 2 DEBUG oslo_concurrency.lockutils [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:05:19 np0005466030 nova_compute[229555]: 2025-10-02 12:05:19.982 2 DEBUG oslo_concurrency.lockutils [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:05:19 np0005466030 nova_compute[229555]: 2025-10-02 12:05:19.982 2 DEBUG oslo_concurrency.lockutils [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:05:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:20.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:20 np0005466030 virtqemud[230067]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct  2 08:05:20 np0005466030 virtqemud[230067]: hostname: compute-1
Oct  2 08:05:20 np0005466030 virtqemud[230067]: End of file while reading data: Input/output error
Oct  2 08:05:20 np0005466030 systemd[1]: libpod-57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10.scope: Deactivated successfully.
Oct  2 08:05:20 np0005466030 systemd[1]: libpod-57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10.scope: Consumed 3.995s CPU time.
Oct  2 08:05:20 np0005466030 podman[230461]: 2025-10-02 12:05:20.402228422 +0000 UTC m=+0.603316495 container died 57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3)
Oct  2 08:05:20 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10-userdata-shm.mount: Deactivated successfully.
Oct  2 08:05:20 np0005466030 systemd[1]: var-lib-containers-storage-overlay-682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc-merged.mount: Deactivated successfully.
Oct  2 08:05:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:21.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:22.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:23 np0005466030 podman[230461]: 2025-10-02 12:05:23.518834255 +0000 UTC m=+3.719922338 container cleanup 57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Oct  2 08:05:23 np0005466030 podman[230461]: nova_compute
Oct  2 08:05:23 np0005466030 podman[230490]: nova_compute
Oct  2 08:05:23 np0005466030 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct  2 08:05:23 np0005466030 systemd[1]: Stopped nova_compute container.
Oct  2 08:05:23 np0005466030 systemd[1]: Starting nova_compute container...
Oct  2 08:05:23 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:05:23 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:23 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:23 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:23 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:23 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:23 np0005466030 podman[230502]: 2025-10-02 12:05:23.716987309 +0000 UTC m=+0.096079929 container init 57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:05:23 np0005466030 podman[230502]: 2025-10-02 12:05:23.722782801 +0000 UTC m=+0.101875381 container start 57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute)
Oct  2 08:05:23 np0005466030 nova_compute[230518]: + sudo -E kolla_set_configs
Oct  2 08:05:23 np0005466030 podman[230502]: nova_compute
Oct  2 08:05:23 np0005466030 systemd[1]: Started nova_compute container.
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Validating config file
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Copying service configuration files
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Deleting /etc/ceph
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Creating directory /etc/ceph
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Setting permission for /etc/ceph
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Writing out command to execute
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  2 08:05:23 np0005466030 nova_compute[230518]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  2 08:05:23 np0005466030 nova_compute[230518]: ++ cat /run_command
Oct  2 08:05:23 np0005466030 nova_compute[230518]: + CMD=nova-compute
Oct  2 08:05:23 np0005466030 nova_compute[230518]: + ARGS=
Oct  2 08:05:23 np0005466030 nova_compute[230518]: + sudo kolla_copy_cacerts
Oct  2 08:05:23 np0005466030 nova_compute[230518]: + [[ ! -n '' ]]
Oct  2 08:05:23 np0005466030 nova_compute[230518]: + . kolla_extend_start
Oct  2 08:05:23 np0005466030 nova_compute[230518]: Running command: 'nova-compute'
Oct  2 08:05:23 np0005466030 nova_compute[230518]: + echo 'Running command: '\''nova-compute'\'''
Oct  2 08:05:23 np0005466030 nova_compute[230518]: + umask 0022
Oct  2 08:05:23 np0005466030 nova_compute[230518]: + exec nova-compute
Oct  2 08:05:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:23.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:24.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:25.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:25 np0005466030 nova_compute[230518]: 2025-10-02 12:05:25.886 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 08:05:25 np0005466030 nova_compute[230518]: 2025-10-02 12:05:25.886 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 08:05:25 np0005466030 nova_compute[230518]: 2025-10-02 12:05:25.886 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  2 08:05:25 np0005466030 nova_compute[230518]: 2025-10-02 12:05:25.887 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  2 08:05:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:05:25.905 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:05:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:05:25.905 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:05:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:05:25.905 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.020 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.042 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:05:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:26.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.466 2 INFO nova.virt.driver [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.562 2 INFO nova.compute.provider_config [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.569 2 DEBUG oslo_concurrency.lockutils [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.569 2 DEBUG oslo_concurrency.lockutils [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.570 2 DEBUG oslo_concurrency.lockutils [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.570 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.570 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.570 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.571 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.571 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.571 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.571 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.572 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.572 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.572 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.572 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.572 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.573 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.573 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.573 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.573 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.573 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.574 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.574 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.574 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.574 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.574 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.575 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.575 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.575 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.575 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.575 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.576 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.576 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.576 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.576 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.577 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.577 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.577 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.577 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.577 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.578 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.578 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.578 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.578 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.578 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.579 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.579 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.579 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.579 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.580 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.580 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.580 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.580 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.580 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.581 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.581 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.581 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.581 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.581 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.582 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.582 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.582 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.582 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.582 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.583 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.583 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.583 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.583 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.583 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.584 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.584 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.584 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.584 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.584 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.585 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.585 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.585 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.585 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.585 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.586 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.586 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.586 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.586 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.587 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.587 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.587 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.587 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.587 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.588 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.588 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.588 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.588 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.588 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.589 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.589 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.589 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.589 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.589 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.590 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.590 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.590 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.590 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.590 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.591 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.591 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.591 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.591 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.591 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.592 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.592 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.592 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.592 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.592 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.593 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.593 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.593 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.593 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.593 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.594 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.594 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.594 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.594 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.594 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.595 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.595 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.595 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.595 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.595 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.596 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.596 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.596 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.596 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.596 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.596 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.597 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.597 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.597 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.597 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.597 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.598 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.598 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.598 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.598 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.598 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.599 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.599 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.599 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.599 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.599 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.600 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.600 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.600 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.600 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.600 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.601 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.601 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.601 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.601 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.601 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.602 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.602 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.602 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.602 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.602 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.603 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.603 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.603 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.603 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.604 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.604 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.604 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.604 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.605 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.606 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.606 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.606 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.606 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.606 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.607 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.607 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.607 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.607 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.607 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.607 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.608 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.608 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.608 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.608 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.608 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.608 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.609 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.609 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.609 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.609 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.609 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.610 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.610 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.610 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.610 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.610 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.610 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.611 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.611 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.611 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.611 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.611 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.611 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.612 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.612 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.612 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.612 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.612 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.612 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.612 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.613 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.613 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.613 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.613 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.613 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.613 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.614 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.614 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.614 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.614 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.614 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.615 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.615 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.615 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.615 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.615 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.615 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.615 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.616 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.616 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.616 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.616 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.616 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.616 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.616 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.617 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.617 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.617 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.617 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.617 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.618 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.618 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.618 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.618 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.618 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.618 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.618 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.619 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.619 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.619 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.619 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.619 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.619 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.619 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.620 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.620 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.620 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.620 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.620 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.620 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.620 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.621 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.621 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.621 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.621 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.621 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.622 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.622 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.622 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.622 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.622 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.622 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.623 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.623 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.623 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.623 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.623 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.623 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.624 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.624 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.624 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.624 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.624 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.624 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.625 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.625 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.625 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.625 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.625 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.625 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.626 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.626 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.626 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.626 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.626 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.626 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.627 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.627 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.627 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.627 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.628 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.628 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.628 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.628 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.628 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.628 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.629 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.629 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.629 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.629 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.629 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.629 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.630 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.630 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.630 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.630 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.630 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.630 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.630 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.631 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.631 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.631 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.631 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.631 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.631 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.631 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.632 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.632 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.632 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.632 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.632 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.632 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.632 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.633 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.633 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.633 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.633 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.633 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.633 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.633 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.634 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.634 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.634 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.634 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.634 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.634 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.634 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.635 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.635 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.635 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.635 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.635 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.635 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.635 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.636 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.636 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.636 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.636 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.636 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.637 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.637 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.637 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.637 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.637 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.637 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.638 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.638 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.638 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.638 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.638 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.638 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.639 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.639 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.639 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.639 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.639 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.640 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.640 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.640 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.640 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.640 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.640 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.640 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.641 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.641 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.641 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.641 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.641 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.641 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.642 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.642 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.642 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.642 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.642 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.642 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.642 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.643 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.643 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.643 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.643 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.643 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.643 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.643 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.644 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.644 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.644 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.644 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.644 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.644 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.644 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.645 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.645 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.645 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.645 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.645 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.645 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.645 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.646 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.646 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.646 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.646 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.646 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.646 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.646 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.647 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.647 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.647 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.647 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.647 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.647 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.648 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.648 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.648 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.648 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.648 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.649 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.649 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.649 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.649 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.649 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.650 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.650 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.650 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.650 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.650 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.650 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.650 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.651 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.651 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.651 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.651 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.651 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.652 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.652 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.652 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.652 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.652 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.653 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.653 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.653 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.653 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.653 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.653 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.653 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.654 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.654 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.654 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.654 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.654 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.654 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.654 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.655 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.655 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.655 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.655 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.655 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.655 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.656 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.656 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.656 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.656 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.656 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.656 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.657 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.657 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.657 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.657 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.657 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.657 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.657 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.658 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.658 2 WARNING oslo_config.cfg [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  2 08:05:26 np0005466030 nova_compute[230518]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  2 08:05:26 np0005466030 nova_compute[230518]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  2 08:05:26 np0005466030 nova_compute[230518]: and ``live_migration_inbound_addr`` respectively.
Oct  2 08:05:26 np0005466030 nova_compute[230518]: ).  Its value may be silently ignored in the future.#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.658 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.658 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.658 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.659 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.659 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.659 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.659 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.659 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.659 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.660 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.660 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.660 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.660 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.660 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.660 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.660 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.661 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.661 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.661 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rbd_secret_uuid        = 20fdc58c-b037-5094-a8ef-d490aa7c36f3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.661 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.661 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.661 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.662 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.662 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.662 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.662 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.662 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.662 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.663 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.663 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.663 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.663 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.663 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.664 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.664 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.664 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.664 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.664 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.664 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.665 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.665 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.665 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.665 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.665 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.665 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.665 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.666 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.666 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.666 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.666 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.666 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.666 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.667 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.667 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.667 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.667 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.667 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.667 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.668 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.668 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.668 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.668 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.668 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.668 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.668 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.669 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.669 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.669 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.669 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.669 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.670 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.670 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.670 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.670 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.670 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.670 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.670 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.671 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.671 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.671 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.671 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.671 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.671 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.672 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.672 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.672 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.672 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.672 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.672 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.672 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.673 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.673 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.673 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.673 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.673 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.673 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.673 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.674 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.674 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.674 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.674 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.674 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.674 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.674 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.675 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.675 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.675 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.675 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.675 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.675 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.675 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.676 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.676 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.676 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.676 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.676 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.676 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.676 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.677 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.677 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.677 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.677 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.677 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.677 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.677 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.678 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.678 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.678 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.678 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.678 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.678 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.679 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.679 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.679 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.679 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.679 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.679 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.680 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.680 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.680 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.680 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.680 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.680 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.681 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.681 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.681 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.681 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.681 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.681 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.681 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.682 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.682 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.682 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.682 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.682 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.682 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.682 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.683 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.683 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.683 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.683 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.683 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.683 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.683 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.684 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.684 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.684 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.684 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.684 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.684 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.684 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.685 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.685 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.685 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.685 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.685 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.685 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.685 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.686 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.686 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.686 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.686 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.686 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.686 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.687 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.687 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.687 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.687 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.687 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.687 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.688 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.688 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.688 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.688 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.688 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.688 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.688 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.689 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.689 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.689 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.689 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.689 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.689 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.690 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.690 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.690 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.690 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.690 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.690 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.690 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.691 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.691 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.691 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.691 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.691 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.691 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.692 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.692 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.692 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.692 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.692 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.692 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.693 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.693 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.693 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.693 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.693 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.693 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.693 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.694 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.694 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.694 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.694 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.694 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.694 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.694 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.695 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.695 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.695 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.695 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.695 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.695 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.695 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.696 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.696 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.696 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.696 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.696 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.696 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.697 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.697 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.697 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.697 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.697 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.697 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.698 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.698 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.698 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.698 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.698 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.698 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.699 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.699 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.699 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.699 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.699 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.699 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.699 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.700 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.700 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.700 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.700 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.700 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.700 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.700 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.701 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.701 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.701 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.701 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.701 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.702 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.702 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.702 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.702 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.702 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.702 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.703 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.703 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.703 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.703 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.703 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.703 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.704 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.704 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.704 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.704 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.704 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.704 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.704 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.705 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.705 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.705 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.705 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.705 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.705 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.706 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.706 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.706 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.706 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.706 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.706 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.706 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.707 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.707 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.707 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.707 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.707 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.707 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.707 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.708 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.708 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.708 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.708 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.708 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.708 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.709 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.709 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.709 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.709 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.709 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.709 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.709 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.710 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.710 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.710 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.710 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.710 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.710 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.710 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.711 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.711 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.711 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.711 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.711 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.711 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.711 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.712 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.712 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.712 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.712 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.712 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.712 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.712 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.713 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.713 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.713 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.713 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.713 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.713 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.713 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.714 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.714 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.714 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.714 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.714 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.714 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.714 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.715 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.715 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.715 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.715 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.715 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.715 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.716 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.716 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.716 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.716 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.716 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.716 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.716 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.717 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.717 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.717 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.717 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.717 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.717 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.717 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.718 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.718 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.718 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.718 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.718 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.718 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.718 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.719 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.719 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.719 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.719 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.719 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.719 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.720 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.720 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.720 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.720 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.720 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.720 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.720 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.721 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.721 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.721 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.721 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.721 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.721 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.722 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.722 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.722 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.722 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.722 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.722 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.723 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.723 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.723 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.723 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.723 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.724 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.739 2 INFO nova.virt.node [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Determined node identity 730da6ce-9754-46f0-88e3-0019d056443f from /var/lib/nova/compute_id#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.740 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.741 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.741 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.741 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.751 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f0adf050b20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.753 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f0adf050b20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.754 2 INFO nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.766 2 DEBUG nova.virt.libvirt.volume.mount [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.768 2 INFO nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Libvirt host capabilities <capabilities>
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <host>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <uuid>5d5cabb1-2c53-462b-89f3-16d4280c3e4c</uuid>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <cpu>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <arch>x86_64</arch>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model>EPYC-Rome-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <vendor>AMD</vendor>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <microcode version='16777317'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <signature family='23' model='49' stepping='0'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <maxphysaddr mode='emulate' bits='40'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='x2apic'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='tsc-deadline'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='osxsave'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='hypervisor'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='tsc_adjust'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='spec-ctrl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='stibp'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='arch-capabilities'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='ssbd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='cmp_legacy'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='topoext'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='virt-ssbd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='lbrv'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='tsc-scale'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='vmcb-clean'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='pause-filter'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='pfthreshold'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='svme-addr-chk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='rdctl-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='skip-l1dfl-vmentry'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='mds-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature name='pschange-mc-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <pages unit='KiB' size='4'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <pages unit='KiB' size='2048'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <pages unit='KiB' size='1048576'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </cpu>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <power_management>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <suspend_mem/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </power_management>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <iommu support='no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <migration_features>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <live/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <uri_transports>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <uri_transport>tcp</uri_transport>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <uri_transport>rdma</uri_transport>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </uri_transports>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </migration_features>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <topology>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <cells num='1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <cell id='0'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:          <memory unit='KiB'>7864104</memory>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:          <pages unit='KiB' size='4'>1966026</pages>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:          <pages unit='KiB' size='2048'>0</pages>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:          <distances>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:            <sibling id='0' value='10'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:          </distances>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:          <cpus num='8'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:          </cpus>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        </cell>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </cells>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </topology>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <cache>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </cache>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <secmodel>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model>selinux</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <doi>0</doi>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </secmodel>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <secmodel>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model>dac</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <doi>0</doi>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </secmodel>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  </host>
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <guest>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <os_type>hvm</os_type>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <arch name='i686'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <wordsize>32</wordsize>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <domain type='qemu'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <domain type='kvm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </arch>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <features>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <pae/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <nonpae/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <acpi default='on' toggle='yes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <apic default='on' toggle='no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <cpuselection/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <deviceboot/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <disksnapshot default='on' toggle='no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <externalSnapshot/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </features>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  </guest>
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <guest>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <os_type>hvm</os_type>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <arch name='x86_64'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <wordsize>64</wordsize>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <domain type='qemu'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <domain type='kvm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </arch>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <features>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <acpi default='on' toggle='yes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <apic default='on' toggle='no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <cpuselection/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <deviceboot/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <disksnapshot default='on' toggle='no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <externalSnapshot/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </features>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  </guest>
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 
Oct  2 08:05:26 np0005466030 nova_compute[230518]: </capabilities>
Oct  2 08:05:26 np0005466030 nova_compute[230518]: #033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.778 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.781 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  2 08:05:26 np0005466030 nova_compute[230518]: <domainCapabilities>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <domain>kvm</domain>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <arch>i686</arch>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <vcpu max='4096'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <iothreads supported='yes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <os supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <enum name='firmware'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <loader supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='type'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>rom</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>pflash</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='readonly'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>yes</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>no</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='secure'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>no</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </loader>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <cpu>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <mode name='host-passthrough' supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='hostPassthroughMigratable'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>on</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>off</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </mode>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <mode name='maximum' supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='maximumMigratable'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>on</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>off</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </mode>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <mode name='host-model' supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <vendor>AMD</vendor>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='x2apic'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='hypervisor'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='stibp'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='ssbd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='overflow-recov'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='succor'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='ibrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='lbrv'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='tsc-scale'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='flushbyasid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='pause-filter'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='pfthreshold'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='rdctl-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='mds-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='gds-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='rfds-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='disable' name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </mode>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <mode name='custom' supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-noTSX'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cooperlake'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cooperlake-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cooperlake-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Denverton'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mpx'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Denverton-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mpx'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Denverton-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Denverton-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Dhyana-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Genoa'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amd-psfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='auto-ibrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='stibp-always-on'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amd-psfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='auto-ibrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='stibp-always-on'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Milan'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Milan-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Milan-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amd-psfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='stibp-always-on'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Rome'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Rome-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Rome-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Rome-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='GraniteRapids'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='prefetchiti'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='GraniteRapids-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='prefetchiti'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='GraniteRapids-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx10'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx10-128'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx10-256'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx10-512'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='prefetchiti'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-noTSX'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v5'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v6'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v7'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='IvyBridge'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='IvyBridge-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='IvyBridge-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='IvyBridge-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='KnightsMill'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512er'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512pf'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='KnightsMill-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512er'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512pf'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Opteron_G4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fma4'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xop'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Opteron_G4-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fma4'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xop'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Opteron_G5'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fma4'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tbm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xop'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Opteron_G5-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fma4'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tbm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xop'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='SapphireRapids'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='SapphireRapids-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='SapphireRapids-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='SapphireRapids-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='SierraForest'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cmpccxadd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='SierraForest-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cmpccxadd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v5'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Snowridge'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='core-capability'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mpx'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='split-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Snowridge-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='core-capability'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mpx'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='split-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Snowridge-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='core-capability'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='split-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Snowridge-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='core-capability'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='split-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Snowridge-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='athlon'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnow'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnowext'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='athlon-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnow'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnowext'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='core2duo'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='core2duo-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='coreduo'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='coreduo-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='n270'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='n270-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='phenom'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnow'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnowext'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='phenom-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnow'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnowext'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </mode>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <memoryBacking supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <enum name='sourceType'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <value>file</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <value>anonymous</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <value>memfd</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  </memoryBacking>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <disk supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='diskDevice'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>disk</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>cdrom</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>floppy</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>lun</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='bus'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>fdc</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>scsi</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>usb</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>sata</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='model'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio-transitional</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio-non-transitional</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <graphics supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='type'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>vnc</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>egl-headless</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>dbus</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </graphics>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <video supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='modelType'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>vga</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>cirrus</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>none</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>bochs</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>ramfb</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <hostdev supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='mode'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>subsystem</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='startupPolicy'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>default</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>mandatory</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>requisite</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>optional</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='subsysType'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>usb</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>pci</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>scsi</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='capsType'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='pciBackend'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </hostdev>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <rng supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='model'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio-transitional</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio-non-transitional</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='backendModel'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>random</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>egd</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>builtin</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <filesystem supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='driverType'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>path</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>handle</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtiofs</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </filesystem>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <tpm supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='model'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>tpm-tis</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>tpm-crb</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='backendModel'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>emulator</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>external</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='backendVersion'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>2.0</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </tpm>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <redirdev supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='bus'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>usb</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </redirdev>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <channel supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='type'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>pty</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>unix</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </channel>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <crypto supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='model'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='type'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>qemu</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='backendModel'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>builtin</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </crypto>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <interface supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='backendType'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>default</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>passt</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <panic supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='model'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>isa</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>hyperv</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </panic>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <gic supported='no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <vmcoreinfo supported='yes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <genid supported='yes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <backingStoreInput supported='yes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <backup supported='yes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <async-teardown supported='yes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <ps2 supported='yes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <sev supported='no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <sgx supported='no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <hyperv supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='features'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>relaxed</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>vapic</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>spinlocks</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>vpindex</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>runtime</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>synic</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>stimer</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>reset</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>vendor_id</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>frequencies</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>reenlightenment</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>tlbflush</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>ipi</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>avic</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>emsr_bitmap</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>xmm_input</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </hyperv>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <launchSecurity supported='no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:05:26 np0005466030 nova_compute[230518]: </domainCapabilities>
Oct  2 08:05:26 np0005466030 nova_compute[230518]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.786 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  2 08:05:26 np0005466030 nova_compute[230518]: <domainCapabilities>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <domain>kvm</domain>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <arch>i686</arch>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <vcpu max='240'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <iothreads supported='yes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <os supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <enum name='firmware'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <loader supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='type'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>rom</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>pflash</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='readonly'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>yes</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>no</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='secure'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>no</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </loader>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <cpu>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <mode name='host-passthrough' supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='hostPassthroughMigratable'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>on</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>off</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </mode>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <mode name='maximum' supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='maximumMigratable'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>on</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>off</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </mode>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <mode name='host-model' supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <vendor>AMD</vendor>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='x2apic'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='hypervisor'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='stibp'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='ssbd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='overflow-recov'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='succor'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='ibrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='lbrv'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='tsc-scale'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='flushbyasid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='pause-filter'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='pfthreshold'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='rdctl-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='mds-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='gds-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='rfds-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='disable' name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </mode>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <mode name='custom' supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-noTSX'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cooperlake'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cooperlake-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cooperlake-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Denverton'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mpx'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Denverton-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mpx'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Denverton-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Denverton-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Dhyana-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Genoa'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amd-psfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='auto-ibrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='stibp-always-on'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amd-psfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='auto-ibrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='stibp-always-on'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Milan'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Milan-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Milan-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amd-psfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='stibp-always-on'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Rome'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Rome-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Rome-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Rome-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='GraniteRapids'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='prefetchiti'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='GraniteRapids-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='prefetchiti'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='GraniteRapids-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx10'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx10-128'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx10-256'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx10-512'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='prefetchiti'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-noTSX'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v5'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v6'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v7'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='IvyBridge'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='IvyBridge-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='IvyBridge-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='IvyBridge-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='KnightsMill'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512er'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512pf'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='KnightsMill-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512er'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512pf'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Opteron_G4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fma4'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xop'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Opteron_G4-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fma4'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xop'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Opteron_G5'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fma4'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tbm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xop'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Opteron_G5-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fma4'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tbm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xop'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='SapphireRapids'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='SapphireRapids-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='SapphireRapids-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='SapphireRapids-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='SierraForest'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cmpccxadd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='SierraForest-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cmpccxadd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v5'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Snowridge'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='core-capability'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mpx'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='split-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Snowridge-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='core-capability'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mpx'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='split-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Snowridge-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='core-capability'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='split-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Snowridge-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='core-capability'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='split-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Snowridge-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='athlon'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnow'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnowext'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='athlon-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnow'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnowext'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='core2duo'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='core2duo-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='coreduo'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='coreduo-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='n270'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='n270-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='phenom'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnow'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnowext'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='phenom-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnow'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnowext'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </mode>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <memoryBacking supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <enum name='sourceType'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <value>file</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <value>anonymous</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <value>memfd</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  </memoryBacking>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <disk supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='diskDevice'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>disk</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>cdrom</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>floppy</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>lun</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='bus'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>ide</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>fdc</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>scsi</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>usb</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>sata</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='model'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio-transitional</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio-non-transitional</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <graphics supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='type'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>vnc</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>egl-headless</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>dbus</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </graphics>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <video supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='modelType'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>vga</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>cirrus</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>none</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>bochs</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>ramfb</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <hostdev supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='mode'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>subsystem</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='startupPolicy'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>default</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>mandatory</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>requisite</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>optional</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='subsysType'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>usb</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>pci</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>scsi</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='capsType'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='pciBackend'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </hostdev>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <rng supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='model'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio-transitional</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio-non-transitional</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='backendModel'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>random</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>egd</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>builtin</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <filesystem supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='driverType'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>path</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>handle</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtiofs</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </filesystem>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <tpm supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='model'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>tpm-tis</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>tpm-crb</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='backendModel'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>emulator</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>external</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='backendVersion'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>2.0</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </tpm>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <redirdev supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='bus'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>usb</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </redirdev>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <channel supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='type'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>pty</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>unix</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </channel>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <crypto supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='model'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='type'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>qemu</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='backendModel'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>builtin</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </crypto>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <interface supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='backendType'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>default</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>passt</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <panic supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='model'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>isa</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>hyperv</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </panic>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <gic supported='no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <vmcoreinfo supported='yes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <genid supported='yes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <backingStoreInput supported='yes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <backup supported='yes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <async-teardown supported='yes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <ps2 supported='yes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <sev supported='no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <sgx supported='no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <hyperv supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='features'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>relaxed</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>vapic</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>spinlocks</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>vpindex</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>runtime</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>synic</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>stimer</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>reset</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>vendor_id</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>frequencies</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>reenlightenment</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>tlbflush</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>ipi</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>avic</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>emsr_bitmap</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>xmm_input</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </hyperv>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <launchSecurity supported='no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:05:26 np0005466030 nova_compute[230518]: </domainCapabilities>
Oct  2 08:05:26 np0005466030 nova_compute[230518]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.837 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  2 08:05:26 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.841 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  2 08:05:26 np0005466030 nova_compute[230518]: <domainCapabilities>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <domain>kvm</domain>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <arch>x86_64</arch>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <vcpu max='4096'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <iothreads supported='yes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <os supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <enum name='firmware'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <value>efi</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <loader supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='type'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>rom</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>pflash</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='readonly'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>yes</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>no</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='secure'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>yes</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>no</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </loader>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <cpu>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <mode name='host-passthrough' supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='hostPassthroughMigratable'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>on</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>off</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </mode>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <mode name='maximum' supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='maximumMigratable'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>on</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>off</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </mode>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <mode name='host-model' supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <vendor>AMD</vendor>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='x2apic'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='hypervisor'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='stibp'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='ssbd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='overflow-recov'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='succor'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='ibrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='lbrv'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='tsc-scale'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='flushbyasid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='pause-filter'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='pfthreshold'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='rdctl-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='mds-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='gds-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='require' name='rfds-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <feature policy='disable' name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </mode>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <mode name='custom' supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-noTSX'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cooperlake'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cooperlake-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Cooperlake-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Denverton'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mpx'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Denverton-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mpx'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Denverton-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Denverton-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Dhyana-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Genoa'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amd-psfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='auto-ibrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='stibp-always-on'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amd-psfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='auto-ibrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='stibp-always-on'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Milan'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Milan-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Milan-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amd-psfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='stibp-always-on'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Rome'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Rome-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Rome-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Rome-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='EPYC-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='GraniteRapids'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='prefetchiti'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='GraniteRapids-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='prefetchiti'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='GraniteRapids-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx10'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx10-128'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx10-256'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx10-512'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='prefetchiti'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-noTSX'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Haswell-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v5'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v6'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v7'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='IvyBridge'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='IvyBridge-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='IvyBridge-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='IvyBridge-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='KnightsMill'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512er'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512pf'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='KnightsMill-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512er'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512pf'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Opteron_G4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fma4'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xop'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Opteron_G4-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fma4'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xop'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Opteron_G5'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fma4'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tbm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xop'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Opteron_G5-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fma4'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tbm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xop'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='SapphireRapids'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='SapphireRapids-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='SapphireRapids-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='SapphireRapids-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='SierraForest'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cmpccxadd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='SierraForest-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-ifma'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cmpccxadd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v5'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Snowridge'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='core-capability'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mpx'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='split-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Snowridge-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='core-capability'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='mpx'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='split-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Snowridge-v2'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='core-capability'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='split-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Snowridge-v3'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='core-capability'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='split-lock-detect'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='Snowridge-v4'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='athlon'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnow'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnowext'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='athlon-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnow'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnowext'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='core2duo'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='core2duo-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='coreduo'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='coreduo-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='n270'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='n270-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='phenom'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnow'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnowext'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <blockers model='phenom-v1'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnow'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <feature name='3dnowext'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </mode>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <memoryBacking supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <enum name='sourceType'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <value>file</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <value>anonymous</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <value>memfd</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  </memoryBacking>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <disk supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='diskDevice'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>disk</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>cdrom</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>floppy</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>lun</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='bus'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>fdc</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>scsi</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>usb</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>sata</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='model'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio-transitional</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio-non-transitional</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <graphics supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='type'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>vnc</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>egl-headless</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>dbus</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </graphics>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <video supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='modelType'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>vga</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>cirrus</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>none</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>bochs</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>ramfb</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <hostdev supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='mode'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>subsystem</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='startupPolicy'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>default</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>mandatory</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>requisite</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>optional</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='subsysType'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>usb</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>pci</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>scsi</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='capsType'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='pciBackend'/>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    </hostdev>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:    <rng supported='yes'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:      <enum name='model'>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio-transitional</value>
Oct  2 08:05:26 np0005466030 nova_compute[230518]:        <value>virtio-non-transitional</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='backendModel'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>random</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>egd</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>builtin</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <filesystem supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='driverType'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>path</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>handle</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>virtiofs</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </filesystem>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <tpm supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='model'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>tpm-tis</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>tpm-crb</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='backendModel'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>emulator</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>external</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='backendVersion'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>2.0</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </tpm>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <redirdev supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='bus'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>usb</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </redirdev>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <channel supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='type'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>pty</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>unix</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </channel>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <crypto supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='model'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='type'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>qemu</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='backendModel'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>builtin</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </crypto>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <interface supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='backendType'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>default</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>passt</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <panic supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='model'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>isa</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>hyperv</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </panic>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <gic supported='no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <vmcoreinfo supported='yes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <genid supported='yes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <backingStoreInput supported='yes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <backup supported='yes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <async-teardown supported='yes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <ps2 supported='yes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <sev supported='no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <sgx supported='no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <hyperv supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='features'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>relaxed</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>vapic</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>spinlocks</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>vpindex</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>runtime</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>synic</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>stimer</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>reset</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>vendor_id</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>frequencies</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>reenlightenment</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>tlbflush</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>ipi</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>avic</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>emsr_bitmap</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>xmm_input</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </hyperv>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <launchSecurity supported='no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:05:27 np0005466030 nova_compute[230518]: </domainCapabilities>
Oct  2 08:05:27 np0005466030 nova_compute[230518]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.913 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  2 08:05:27 np0005466030 nova_compute[230518]: <domainCapabilities>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  <path>/usr/libexec/qemu-kvm</path>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  <domain>kvm</domain>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  <arch>x86_64</arch>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  <vcpu max='240'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  <iothreads supported='yes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  <os supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <enum name='firmware'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <loader supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='type'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>rom</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>pflash</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='readonly'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>yes</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>no</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='secure'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>no</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </loader>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  <cpu>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <mode name='host-passthrough' supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='hostPassthroughMigratable'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>on</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>off</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </mode>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <mode name='maximum' supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='maximumMigratable'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>on</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>off</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </mode>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <mode name='host-model' supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <vendor>AMD</vendor>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='x2apic'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='tsc-deadline'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='hypervisor'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='tsc_adjust'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='spec-ctrl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='stibp'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='arch-capabilities'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='ssbd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='cmp_legacy'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='overflow-recov'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='succor'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='ibrs'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='amd-ssbd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='virt-ssbd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='lbrv'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='tsc-scale'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='vmcb-clean'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='flushbyasid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='pause-filter'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='pfthreshold'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='svme-addr-chk'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='rdctl-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='mds-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='pschange-mc-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='gds-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='require' name='rfds-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <feature policy='disable' name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </mode>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <mode name='custom' supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Broadwell'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-IBRS'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-noTSX'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-v2'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-v3'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Broadwell-v4'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v2'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v3'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v4'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Cascadelake-Server-v5'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Cooperlake'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Cooperlake-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Cooperlake-v2'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Denverton'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='mpx'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Denverton-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='mpx'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Denverton-v2'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Denverton-v3'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Dhyana-v2'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Genoa'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amd-psfd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='auto-ibrs'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='stibp-always-on'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Genoa-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amd-psfd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='auto-ibrs'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='stibp-always-on'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Milan'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Milan-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Milan-v2'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amd-psfd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='no-nested-data-bp'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='null-sel-clr-base'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='stibp-always-on'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Rome'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Rome-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Rome-v2'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='EPYC-Rome-v3'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='EPYC-v3'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='EPYC-v4'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='GraniteRapids'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-fp16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='prefetchiti'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='GraniteRapids-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-fp16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='prefetchiti'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='GraniteRapids-v2'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-fp16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx10'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx10-128'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx10-256'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx10-512'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='prefetchiti'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Haswell'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Haswell-IBRS'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Haswell-noTSX'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Haswell-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Haswell-v2'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Haswell-v3'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Haswell-v4'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-noTSX'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v2'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v3'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v4'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v5'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v6'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Icelake-Server-v7'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='IvyBridge'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='IvyBridge-IBRS'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='IvyBridge-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='IvyBridge-v2'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='KnightsMill'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512er'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512pf'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='KnightsMill-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-4fmaps'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-4vnniw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512er'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512pf'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Opteron_G4'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fma4'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xop'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Opteron_G4-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fma4'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xop'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Opteron_G5'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fma4'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='tbm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xop'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Opteron_G5-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fma4'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='tbm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xop'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='SapphireRapids'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='SapphireRapids-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='SapphireRapids-v2'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='SapphireRapids-v3'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-bf16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-int8'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='amx-tile'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-bf16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-fp16'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512-vpopcntdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bitalg'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512ifma'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vbmi2'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrc'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fzrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='la57'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='taa-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='tsx-ldtrk'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xfd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='SierraForest'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx-ifma'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='cmpccxadd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='SierraForest-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx-ifma'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx-ne-convert'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx-vnni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx-vnni-int8'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='bus-lock-detect'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='cmpccxadd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fbsdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='fsrs'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ibrs-all'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='mcdt-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pbrsb-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='psdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='sbdr-ssdp-no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='serialize'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vaes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='vpclmulqdq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-IBRS'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-v2'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-v3'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Client-v4'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-IBRS'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v2'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='hle'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='rtm'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v3'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v4'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Skylake-Server-v5'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512bw'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512cd'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512dq'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512f'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='avx512vl'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='invpcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pcid'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='pku'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Snowridge'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='core-capability'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='mpx'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='split-lock-detect'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Snowridge-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='core-capability'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='mpx'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='split-lock-detect'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Snowridge-v2'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='core-capability'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='split-lock-detect'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Snowridge-v3'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='core-capability'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='split-lock-detect'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='Snowridge-v4'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='cldemote'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='erms'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='gfni'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='movdir64b'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='movdiri'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='xsaves'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='athlon'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='3dnow'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='3dnowext'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='athlon-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='3dnow'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='3dnowext'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='core2duo'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='core2duo-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='coreduo'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='coreduo-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='n270'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='n270-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='ss'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='phenom'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='3dnow'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='3dnowext'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <blockers model='phenom-v1'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='3dnow'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <feature name='3dnowext'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </blockers>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </mode>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  <memoryBacking supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <enum name='sourceType'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <value>file</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <value>anonymous</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <value>memfd</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  </memoryBacking>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <disk supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='diskDevice'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>disk</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>cdrom</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>floppy</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>lun</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='bus'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>ide</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>fdc</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>scsi</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>virtio</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>usb</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>sata</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='model'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>virtio</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>virtio-transitional</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>virtio-non-transitional</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <graphics supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='type'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>vnc</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>egl-headless</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>dbus</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </graphics>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <video supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='modelType'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>vga</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>cirrus</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>virtio</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>none</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>bochs</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>ramfb</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <hostdev supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='mode'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>subsystem</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='startupPolicy'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>default</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>mandatory</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>requisite</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>optional</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='subsysType'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>usb</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>pci</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>scsi</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='capsType'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='pciBackend'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </hostdev>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <rng supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='model'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>virtio</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>virtio-transitional</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>virtio-non-transitional</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='backendModel'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>random</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>egd</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>builtin</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <filesystem supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='driverType'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>path</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>handle</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>virtiofs</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </filesystem>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <tpm supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='model'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>tpm-tis</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>tpm-crb</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='backendModel'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>emulator</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>external</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='backendVersion'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>2.0</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </tpm>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <redirdev supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='bus'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>usb</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </redirdev>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <channel supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='type'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>pty</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>unix</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </channel>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <crypto supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='model'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='type'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>qemu</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='backendModel'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>builtin</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </crypto>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <interface supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='backendType'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>default</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>passt</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <panic supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='model'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>isa</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>hyperv</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </panic>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <gic supported='no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <vmcoreinfo supported='yes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <genid supported='yes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <backingStoreInput supported='yes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <backup supported='yes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <async-teardown supported='yes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <ps2 supported='yes'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <sev supported='no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <sgx supported='no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <hyperv supported='yes'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      <enum name='features'>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>relaxed</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>vapic</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>spinlocks</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>vpindex</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>runtime</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>synic</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>stimer</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>reset</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>vendor_id</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>frequencies</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>reenlightenment</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>tlbflush</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>ipi</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>avic</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>emsr_bitmap</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:        <value>xmm_input</value>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:      </enum>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    </hyperv>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:    <launchSecurity supported='no'/>
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:05:27 np0005466030 nova_compute[230518]: </domainCapabilities>
Oct  2 08:05:27 np0005466030 nova_compute[230518]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.969 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.970 2 INFO nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Secure Boot support detected#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.971 2 INFO nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.971 2 INFO nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.979 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] cpu compare xml: <cpu match="exact">
Oct  2 08:05:27 np0005466030 nova_compute[230518]:  <model>Nehalem</model>
Oct  2 08:05:27 np0005466030 nova_compute[230518]: </cpu>
Oct  2 08:05:27 np0005466030 nova_compute[230518]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:26.981 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:27.008 2 INFO nova.virt.node [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Determined node identity 730da6ce-9754-46f0-88e3-0019d056443f from /var/lib/nova/compute_id#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:27.025 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Verified node 730da6ce-9754-46f0-88e3-0019d056443f matches my host compute-1.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:27.045 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:27.089 2 ERROR nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Could not retrieve compute node resource provider 730da6ce-9754-46f0-88e3-0019d056443f and therefore unable to error out any instances stuck in BUILDING state. Error: Failed to retrieve allocations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '730da6ce-9754-46f0-88e3-0019d056443f' not found: No resource provider with uuid 730da6ce-9754-46f0-88e3-0019d056443f found  ", "request_id": "req-3b183d19-5264-4020-857c-220b8bdf190b"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '730da6ce-9754-46f0-88e3-0019d056443f' not found: No resource provider with uuid 730da6ce-9754-46f0-88e3-0019d056443f found  ", "request_id": "req-3b183d19-5264-4020-857c-220b8bdf190b"}]}#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:27.106 2 DEBUG oslo_concurrency.lockutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:27.107 2 DEBUG oslo_concurrency.lockutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:27.107 2 DEBUG oslo_concurrency.lockutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:27.107 2 DEBUG nova.compute.resource_tracker [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:27.108 2 DEBUG oslo_concurrency.processutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:05:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:05:27 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1319389839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:27.625 2 DEBUG oslo_concurrency.processutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:27.771 2 WARNING nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:27.772 2 DEBUG nova.compute.resource_tracker [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5204MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:27.773 2 DEBUG oslo_concurrency.lockutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:27.773 2 DEBUG oslo_concurrency.lockutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:05:27 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:05:27 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:05:27 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:05:27 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:27.885 2 ERROR nova.compute.resource_tracker [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '730da6ce-9754-46f0-88e3-0019d056443f' not found: No resource provider with uuid 730da6ce-9754-46f0-88e3-0019d056443f found  ", "request_id": "req-1ea64c30-273d-46c5-8122-88de7e5e885a"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '730da6ce-9754-46f0-88e3-0019d056443f' not found: No resource provider with uuid 730da6ce-9754-46f0-88e3-0019d056443f found  ", "request_id": "req-1ea64c30-273d-46c5-8122-88de7e5e885a"}]}#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:27.885 2 DEBUG nova.compute.resource_tracker [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:27.886 2 DEBUG nova.compute.resource_tracker [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:05:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:27.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:27 np0005466030 python3.9[230960]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:27.970 2 INFO nova.scheduler.client.report [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [req-91e90fcf-724f-4024-b4e9-bedbd1a3f459] Created resource provider record via placement API for resource provider with UUID 730da6ce-9754-46f0-88e3-0019d056443f and name compute-1.ctlplane.example.com.#033[00m
Oct  2 08:05:27 np0005466030 nova_compute[230518]: 2025-10-02 12:05:27.988 2 DEBUG oslo_concurrency.processutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:05:28 np0005466030 systemd[1]: Started libpod-conmon-6ce21e5703e43134d3d0ff907881606807c49fc0032a12b5ba846274498709dc.scope.
Oct  2 08:05:28 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:05:28 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de5b51067d290be3866bf7d1b9cc8969bb19850fc5ee7f4f47e20f0f809d37f/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:28 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de5b51067d290be3866bf7d1b9cc8969bb19850fc5ee7f4f47e20f0f809d37f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:28 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de5b51067d290be3866bf7d1b9cc8969bb19850fc5ee7f4f47e20f0f809d37f/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct  2 08:05:28 np0005466030 podman[231001]: 2025-10-02 12:05:28.145216848 +0000 UTC m=+0.118392830 container init 6ce21e5703e43134d3d0ff907881606807c49fc0032a12b5ba846274498709dc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute_init)
Oct  2 08:05:28 np0005466030 podman[231001]: 2025-10-02 12:05:28.151836206 +0000 UTC m=+0.125012178 container start 6ce21e5703e43134d3d0ff907881606807c49fc0032a12b5ba846274498709dc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init)
Oct  2 08:05:28 np0005466030 python3.9[230960]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct  2 08:05:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:28.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:28 np0005466030 nova_compute_init[231041]: INFO:nova_statedir:Applying nova statedir ownership
Oct  2 08:05:28 np0005466030 nova_compute_init[231041]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct  2 08:05:28 np0005466030 nova_compute_init[231041]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct  2 08:05:28 np0005466030 nova_compute_init[231041]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct  2 08:05:28 np0005466030 nova_compute_init[231041]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct  2 08:05:28 np0005466030 nova_compute_init[231041]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct  2 08:05:28 np0005466030 nova_compute_init[231041]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct  2 08:05:28 np0005466030 nova_compute_init[231041]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct  2 08:05:28 np0005466030 nova_compute_init[231041]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct  2 08:05:28 np0005466030 nova_compute_init[231041]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct  2 08:05:28 np0005466030 nova_compute_init[231041]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct  2 08:05:28 np0005466030 nova_compute_init[231041]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct  2 08:05:28 np0005466030 nova_compute_init[231041]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct  2 08:05:28 np0005466030 nova_compute_init[231041]: INFO:nova_statedir:Nova statedir ownership complete
Oct  2 08:05:28 np0005466030 systemd[1]: libpod-6ce21e5703e43134d3d0ff907881606807c49fc0032a12b5ba846274498709dc.scope: Deactivated successfully.
Oct  2 08:05:28 np0005466030 podman[231054]: 2025-10-02 12:05:28.238154457 +0000 UTC m=+0.022433325 container died 6ce21e5703e43134d3d0ff907881606807c49fc0032a12b5ba846274498709dc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251001)
Oct  2 08:05:28 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6ce21e5703e43134d3d0ff907881606807c49fc0032a12b5ba846274498709dc-userdata-shm.mount: Deactivated successfully.
Oct  2 08:05:28 np0005466030 systemd[1]: var-lib-containers-storage-overlay-1de5b51067d290be3866bf7d1b9cc8969bb19850fc5ee7f4f47e20f0f809d37f-merged.mount: Deactivated successfully.
Oct  2 08:05:28 np0005466030 podman[231054]: 2025-10-02 12:05:28.307586209 +0000 UTC m=+0.091865067 container cleanup 6ce21e5703e43134d3d0ff907881606807c49fc0032a12b5ba846274498709dc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251001)
Oct  2 08:05:28 np0005466030 systemd[1]: libpod-conmon-6ce21e5703e43134d3d0ff907881606807c49fc0032a12b5ba846274498709dc.scope: Deactivated successfully.
Oct  2 08:05:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:05:28 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2433992043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:05:28 np0005466030 nova_compute[230518]: 2025-10-02 12:05:28.430 2 DEBUG oslo_concurrency.processutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:05:28 np0005466030 nova_compute[230518]: 2025-10-02 12:05:28.435 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct  2 08:05:28 np0005466030 nova_compute[230518]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Oct  2 08:05:28 np0005466030 nova_compute[230518]: 2025-10-02 12:05:28.436 2 INFO nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] kernel doesn't support AMD SEV#033[00m
Oct  2 08:05:28 np0005466030 nova_compute[230518]: 2025-10-02 12:05:28.437 2 DEBUG nova.compute.provider_tree [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:05:28 np0005466030 nova_compute[230518]: 2025-10-02 12:05:28.437 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:05:28 np0005466030 nova_compute[230518]: 2025-10-02 12:05:28.440 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Libvirt baseline CPU <cpu>
Oct  2 08:05:28 np0005466030 nova_compute[230518]:  <arch>x86_64</arch>
Oct  2 08:05:28 np0005466030 nova_compute[230518]:  <model>Nehalem</model>
Oct  2 08:05:28 np0005466030 nova_compute[230518]:  <vendor>AMD</vendor>
Oct  2 08:05:28 np0005466030 nova_compute[230518]:  <topology sockets="8" cores="1" threads="1"/>
Oct  2 08:05:28 np0005466030 nova_compute[230518]: </cpu>
Oct  2 08:05:28 np0005466030 nova_compute[230518]: _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537#033[00m
Oct  2 08:05:28 np0005466030 nova_compute[230518]: 2025-10-02 12:05:28.480 2 DEBUG nova.scheduler.client.report [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Updated inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct  2 08:05:28 np0005466030 nova_compute[230518]: 2025-10-02 12:05:28.480 2 DEBUG nova.compute.provider_tree [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Updating resource provider 730da6ce-9754-46f0-88e3-0019d056443f generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  2 08:05:28 np0005466030 nova_compute[230518]: 2025-10-02 12:05:28.481 2 DEBUG nova.compute.provider_tree [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:05:28 np0005466030 nova_compute[230518]: 2025-10-02 12:05:28.592 2 DEBUG nova.compute.provider_tree [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Updating resource provider 730da6ce-9754-46f0-88e3-0019d056443f generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  2 08:05:28 np0005466030 nova_compute[230518]: 2025-10-02 12:05:28.622 2 DEBUG nova.compute.resource_tracker [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:05:28 np0005466030 nova_compute[230518]: 2025-10-02 12:05:28.623 2 DEBUG oslo_concurrency.lockutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:05:28 np0005466030 nova_compute[230518]: 2025-10-02 12:05:28.623 2 DEBUG nova.service [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Oct  2 08:05:28 np0005466030 nova_compute[230518]: 2025-10-02 12:05:28.696 2 DEBUG nova.service [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Oct  2 08:05:28 np0005466030 nova_compute[230518]: 2025-10-02 12:05:28.697 2 DEBUG nova.servicegroup.drivers.db [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] DB_Driver: join new ServiceGroup member compute-1.ctlplane.example.com to the compute group, service = <Service: host=compute-1.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Oct  2 08:05:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:05:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:05:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:05:29 np0005466030 systemd[1]: session-50.scope: Deactivated successfully.
Oct  2 08:05:29 np0005466030 systemd[1]: session-50.scope: Consumed 2min 38.290s CPU time.
Oct  2 08:05:29 np0005466030 systemd-logind[795]: Session 50 logged out. Waiting for processes to exit.
Oct  2 08:05:29 np0005466030 systemd-logind[795]: Removed session 50.
Oct  2 08:05:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:29.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:30.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:31.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:32.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:33 np0005466030 podman[231108]: 2025-10-02 12:05:33.807300909 +0000 UTC m=+0.054615688 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, 
org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 08:05:33 np0005466030 podman[231107]: 2025-10-02 12:05:33.847720379 +0000 UTC m=+0.094143950 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Oct  2 08:05:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:33.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:34.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:05:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:05:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:35.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:36.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:37.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:38.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:05:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 6377 writes, 26K keys, 6377 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6377 writes, 1129 syncs, 5.65 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 429 writes, 666 keys, 429 commit groups, 1.0 writes per commit group, ingest: 0.21 MB, 0.00 MB/s#012Interval WAL: 429 writes, 198 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Oct  2 08:05:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:39.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:40.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:41.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:42.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:42 np0005466030 podman[231200]: 2025-10-02 12:05:42.808688959 +0000 UTC m=+0.061830804 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0)
Oct  2 08:05:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:43.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:44.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:44 np0005466030 nova_compute[230518]: 2025-10-02 12:05:44.698 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:05:44 np0005466030 nova_compute[230518]: 2025-10-02 12:05:44.719 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:05:44 np0005466030 podman[231221]: 2025-10-02 12:05:44.797979395 +0000 UTC m=+0.052947714 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:05:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:45.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:46.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:47.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:48.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:49.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:50.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:51.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:52.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:53.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:54.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:55.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:56.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:05:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:57.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:05:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:58.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:05:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:05:59 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/346215333' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:05:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:05:59 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/346215333' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:05:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:05:59 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2459773295' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:05:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:05:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:05:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:59.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:05:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:05:59 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2459773295' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:06:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:00.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:06:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:01.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:06:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:02.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:06:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:03.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:06:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:04.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:04 np0005466030 podman[231243]: 2025-10-02 12:06:04.812188823 +0000 UTC m=+0.056102181 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:06:04 np0005466030 podman[231242]: 2025-10-02 12:06:04.842605101 +0000 UTC m=+0.088663597 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Oct  2 08:06:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:05.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:06.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:07.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:08.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:09.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:06:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:10.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:06:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:06:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3502 writes, 18K keys, 3502 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s#012Cumulative WAL: 3502 writes, 3502 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1367 writes, 6899 keys, 1367 commit groups, 1.0 writes per commit group, ingest: 14.87 MB, 0.02 MB/s#012Interval WAL: 1368 writes, 1368 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    109.0      0.19              0.06         9    0.021       0      0       0.0       0.0#012  L6      1/0    7.40 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    123.2    102.4      0.66              0.19         8    0.082     35K   4308       0.0       0.0#012 Sum      1/0    7.40 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     95.2    103.9      0.85              0.25        17    0.050     35K   4308       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.5     93.5     93.6      0.56              0.17        10    0.056     23K   3040       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    123.2    102.4      0.66              0.19         8    0.082     35K   4308       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    110.2      0.19              0.06         8    0.024       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.021, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.09 GB write, 0.07 MB/s write, 0.08 GB read, 0.07 MB/s read, 0.9 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 308.00 MB usage: 4.66 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(256,4.35 MB,1.4116%) FilterBlock(17,106.61 KB,0.0338022%) IndexBlock(17,212.30 KB,0.0673121%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 08:06:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:11.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:12.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:13 np0005466030 podman[231285]: 2025-10-02 12:06:13.807440811 +0000 UTC m=+0.058475735 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:06:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:13.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:14.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:15 np0005466030 podman[231305]: 2025-10-02 12:06:15.802108942 +0000 UTC m=+0.057027439 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:06:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:15.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:16.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:17.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:06:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:18.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:06:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:19.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:20.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:21.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:22.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:23.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:06:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:24.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:06:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:06:25.906 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:06:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:06:25.907 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:06:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:06:25.907 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:06:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:06:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:25.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.055 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.083 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.083 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.084 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.084 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.084 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.085 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.085 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.085 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.085 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.152 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.152 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.153 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.153 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.153 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:06:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:26.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:06:26 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2515429145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.591 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.725 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.726 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5301MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.726 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.727 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.809 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.810 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:06:26 np0005466030 nova_compute[230518]: 2025-10-02 12:06:26.833 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:06:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:06:27 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1963014453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:06:27 np0005466030 nova_compute[230518]: 2025-10-02 12:06:27.282 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:06:27 np0005466030 nova_compute[230518]: 2025-10-02 12:06:27.288 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:06:27 np0005466030 nova_compute[230518]: 2025-10-02 12:06:27.303 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:06:27 np0005466030 nova_compute[230518]: 2025-10-02 12:06:27.305 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:06:27 np0005466030 nova_compute[230518]: 2025-10-02 12:06:27.305 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:06:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:27.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:28.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:06:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:29.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:06:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:30.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:31.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:32.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:33.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:34.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:35 np0005466030 podman[231396]: 2025-10-02 12:06:35.384714968 +0000 UTC m=+0.087209842 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 08:06:35 np0005466030 podman[231395]: 2025-10-02 12:06:35.393211226 +0000 UTC m=+0.098370594 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:06:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:06:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:06:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:06:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:35.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:06:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:36.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 08:06:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 08:06:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:06:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:06:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:06:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:06:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:37.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:06:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:38.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:06:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:39.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:06:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:40.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:41.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:42.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:43.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:06:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:06:44 np0005466030 podman[231571]: 2025-10-02 12:06:44.235098508 +0000 UTC m=+0.056420430 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct  2 08:06:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:44.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:46.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:46.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:46 np0005466030 podman[231617]: 2025-10-02 12:06:46.803438867 +0000 UTC m=+0.058209316 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:06:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:06:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:48.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:06:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:48.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:06:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:50.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:06:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:06:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:50.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:06:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:52.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:52.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:54.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:54.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:56.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:56.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:06:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:58.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:06:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:06:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:06:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:58.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:00.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:00.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:02.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:07:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:02.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:07:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:04.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:07:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:04.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:07:05 np0005466030 podman[231638]: 2025-10-02 12:07:05.801346325 +0000 UTC m=+0.054217280 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:07:05 np0005466030 podman[231637]: 2025-10-02 12:07:05.832146376 +0000 UTC m=+0.089941806 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:07:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:06.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:06.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:08.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:07:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:08.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:07:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:10.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:10.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:07:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:12.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:07:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:12.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:13 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct  2 08:07:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:14.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:07:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:14.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:07:14 np0005466030 podman[231680]: 2025-10-02 12:07:14.810427099 +0000 UTC m=+0.065174566 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd)
Oct  2 08:07:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:16.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:16.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:17 np0005466030 podman[231700]: 2025-10-02 12:07:17.799509314 +0000 UTC m=+0.049064208 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:07:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:07:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:18.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:07:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:07:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:18.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:07:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:20.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:20.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:07:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:22.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:07:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:22.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:24.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:24.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:07:25.907 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:07:25.907 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:07:25.907 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:07:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:26.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:07:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:26.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.298 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.316 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.316 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.316 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.331 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.331 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.331 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.331 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.331 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.332 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.332 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.373 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.373 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.373 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.374 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.374 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:07:27 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1801310458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.823 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.959 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.960 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5336MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.960 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:07:27 np0005466030 nova_compute[230518]: 2025-10-02 12:07:27.960 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:07:28 np0005466030 nova_compute[230518]: 2025-10-02 12:07:28.025 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:07:28 np0005466030 nova_compute[230518]: 2025-10-02 12:07:28.025 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:07:28 np0005466030 nova_compute[230518]: 2025-10-02 12:07:28.039 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:07:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:07:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:28.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:07:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:07:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:28.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:07:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:07:28 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/135022925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:07:28 np0005466030 nova_compute[230518]: 2025-10-02 12:07:28.491 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:07:28 np0005466030 nova_compute[230518]: 2025-10-02 12:07:28.496 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:07:28 np0005466030 nova_compute[230518]: 2025-10-02 12:07:28.516 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:07:28 np0005466030 nova_compute[230518]: 2025-10-02 12:07:28.518 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:07:28 np0005466030 nova_compute[230518]: 2025-10-02 12:07:28.518 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:07:29 np0005466030 nova_compute[230518]: 2025-10-02 12:07:29.240 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:29 np0005466030 nova_compute[230518]: 2025-10-02 12:07:29.241 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:29 np0005466030 nova_compute[230518]: 2025-10-02 12:07:29.241 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:07:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:07:29.268 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:07:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:07:29.269 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:07:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:07:29.270 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:07:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:07:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:30.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:07:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:30.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:32.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:32.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:07:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:34.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:07:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:34.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:07:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:36.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:07:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:36.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:36 np0005466030 podman[231765]: 2025-10-02 12:07:36.79001463 +0000 UTC m=+0.046368533 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:07:36 np0005466030 podman[231764]: 2025-10-02 12:07:36.887235555 +0000 UTC m=+0.146621694 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct  2 08:07:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:07:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:38.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:07:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:38.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:40.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:40.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:42.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:42.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:44.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:44.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.558507) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864558588, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2338, "num_deletes": 251, "total_data_size": 5946111, "memory_usage": 6010416, "flush_reason": "Manual Compaction"}
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864577263, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 3896006, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17735, "largest_seqno": 20068, "table_properties": {"data_size": 3886472, "index_size": 6092, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18976, "raw_average_key_size": 20, "raw_value_size": 3867534, "raw_average_value_size": 4088, "num_data_blocks": 272, "num_entries": 946, "num_filter_entries": 946, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406636, "oldest_key_time": 1759406636, "file_creation_time": 1759406864, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 18823 microseconds, and 7941 cpu microseconds.
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.577339) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 3896006 bytes OK
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.577359) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.579120) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.579136) EVENT_LOG_v1 {"time_micros": 1759406864579131, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.579155) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 5935794, prev total WAL file size 5935794, number of live WAL files 2.
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.580579) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(3804KB)], [36(7579KB)]
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864580656, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 11657781, "oldest_snapshot_seqno": -1}
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4440 keys, 9637537 bytes, temperature: kUnknown
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864626906, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 9637537, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9605291, "index_size": 20040, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 110845, "raw_average_key_size": 24, "raw_value_size": 9522243, "raw_average_value_size": 2144, "num_data_blocks": 832, "num_entries": 4440, "num_filter_entries": 4440, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759406864, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.627116) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9637537 bytes
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.628216) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 251.7 rd, 208.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 7.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 4959, records dropped: 519 output_compression: NoCompression
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.628234) EVENT_LOG_v1 {"time_micros": 1759406864628225, "job": 20, "event": "compaction_finished", "compaction_time_micros": 46320, "compaction_time_cpu_micros": 20910, "output_level": 6, "num_output_files": 1, "total_output_size": 9637537, "num_input_records": 4959, "num_output_records": 4440, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864628964, "job": 20, "event": "table_file_deletion", "file_number": 38}
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864630309, "job": 20, "event": "table_file_deletion", "file_number": 36}
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.580452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.630386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.630394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.630396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.630398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:07:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.630400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:07:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 08:07:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:07:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:07:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:07:45 np0005466030 podman[231938]: 2025-10-02 12:07:45.79548427 +0000 UTC m=+0.050106761 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  2 08:07:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:07:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:46.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:07:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:46.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:48.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:48.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:48 np0005466030 podman[231958]: 2025-10-02 12:07:48.800037813 +0000 UTC m=+0.054984245 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:07:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:50.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:50.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:52.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:52.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:07:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:07:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:54.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:54.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:56.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:07:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:56.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:07:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:07:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:58.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:07:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:07:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:07:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:58.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:00.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:00.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:08:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:02.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:08:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:02.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:08:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:04.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:08:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:04.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:08:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:06.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:08:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:06.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:07 np0005466030 podman[232031]: 2025-10-02 12:08:07.796150502 +0000 UTC m=+0.049886459 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  2 08:08:07 np0005466030 podman[232030]: 2025-10-02 12:08:07.822649882 +0000 UTC m=+0.078935490 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 08:08:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:08.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:08.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:10.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:10.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:12.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:12.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:08:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:14.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:08:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:14.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:16.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:08:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:16.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:08:16 np0005466030 podman[232075]: 2025-10-02 12:08:16.809188889 +0000 UTC m=+0.060508827 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:08:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:08:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:18.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:08:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:18.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:19 np0005466030 podman[232096]: 2025-10-02 12:08:19.811429937 +0000 UTC m=+0.063546533 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:08:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:08:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:20.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:08:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:20.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:22.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:22.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:24.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:24.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:08:25.907 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:08:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:08:25.908 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:08:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:08:25.908 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:08:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:26.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:08:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:26.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:08:27 np0005466030 nova_compute[230518]: 2025-10-02 12:08:27.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:27 np0005466030 nova_compute[230518]: 2025-10-02 12:08:27.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:08:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:28 np0005466030 nova_compute[230518]: 2025-10-02 12:08:28.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:28 np0005466030 nova_compute[230518]: 2025-10-02 12:08:28.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:08:28 np0005466030 nova_compute[230518]: 2025-10-02 12:08:28.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:08:28 np0005466030 nova_compute[230518]: 2025-10-02 12:08:28.187 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:08:28 np0005466030 nova_compute[230518]: 2025-10-02 12:08:28.187 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:28 np0005466030 nova_compute[230518]: 2025-10-02 12:08:28.188 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:28 np0005466030 nova_compute[230518]: 2025-10-02 12:08:28.188 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:28 np0005466030 nova_compute[230518]: 2025-10-02 12:08:28.188 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:28 np0005466030 nova_compute[230518]: 2025-10-02 12:08:28.188 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:08:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:08:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:28.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:08:28 np0005466030 nova_compute[230518]: 2025-10-02 12:08:28.224 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:08:28 np0005466030 nova_compute[230518]: 2025-10-02 12:08:28.225 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:08:28 np0005466030 nova_compute[230518]: 2025-10-02 12:08:28.225 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:08:28 np0005466030 nova_compute[230518]: 2025-10-02 12:08:28.225 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:08:28 np0005466030 nova_compute[230518]: 2025-10-02 12:08:28.226 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:08:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:28.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:08:28 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3330560456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:08:28 np0005466030 nova_compute[230518]: 2025-10-02 12:08:28.704 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:08:28 np0005466030 nova_compute[230518]: 2025-10-02 12:08:28.890 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:08:28 np0005466030 nova_compute[230518]: 2025-10-02 12:08:28.892 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5333MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:08:28 np0005466030 nova_compute[230518]: 2025-10-02 12:08:28.893 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:08:28 np0005466030 nova_compute[230518]: 2025-10-02 12:08:28.893 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:08:29 np0005466030 nova_compute[230518]: 2025-10-02 12:08:29.072 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:08:29 np0005466030 nova_compute[230518]: 2025-10-02 12:08:29.073 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:08:29 np0005466030 nova_compute[230518]: 2025-10-02 12:08:29.097 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:08:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:08:29 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2031141129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:08:29 np0005466030 nova_compute[230518]: 2025-10-02 12:08:29.572 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:08:29 np0005466030 nova_compute[230518]: 2025-10-02 12:08:29.580 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:08:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:30.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:08:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:30.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:08:30 np0005466030 nova_compute[230518]: 2025-10-02 12:08:30.585 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:08:30 np0005466030 nova_compute[230518]: 2025-10-02 12:08:30.586 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 08:08:30 np0005466030 nova_compute[230518]: 2025-10-02 12:08:30.586 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:08:31 np0005466030 nova_compute[230518]: 2025-10-02 12:08:31.451 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:08:31 np0005466030 nova_compute[230518]: 2025-10-02 12:08:31.452 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:08:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:32.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:32 np0005466030 systemd[1]: packagekit.service: Deactivated successfully.
Oct  2 08:08:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:32.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:08:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:34.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:08:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:34.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:08:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:36.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:08:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:36.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:38.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:38.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:38 np0005466030 podman[232162]: 2025-10-02 12:08:38.84222933 +0000 UTC m=+0.083140304 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:08:38 np0005466030 podman[232161]: 2025-10-02 12:08:38.84260015 +0000 UTC m=+0.085936021 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:08:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:40.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:40.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:08:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:42.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:08:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:42.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - - [02/Oct/2025:12:08:42.649 +0000] "GET /swift/info HTTP/1.1" 200 509 - "python-urllib3/1.26.5" - latency=0.000000000s
Oct  2 08:08:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:08:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:44.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:08:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:44.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:08:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:46.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:08:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:46.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:47 np0005466030 podman[232205]: 2025-10-02 12:08:47.809883389 +0000 UTC m=+0.066420023 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:08:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Oct  2 08:08:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:08:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:48.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:08:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:48.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Oct  2 08:08:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:08:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:50.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:08:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:50.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:50 np0005466030 podman[232225]: 2025-10-02 12:08:50.807279375 +0000 UTC m=+0.060739884 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 08:08:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:08:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:52.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:08:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:52.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  2 08:08:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Oct  2 08:08:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:54.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:54.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:08:54 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:08:54 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:08:54 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:08:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:08:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:56.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:08:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Oct  2 08:08:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:08:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:56.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:08:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Oct  2 08:08:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:08:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:08:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:58.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:08:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:08:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:08:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:58.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:00.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:00.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:09:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:02.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:09:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:02.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:04.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:04.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:06.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:06.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:09:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:08.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:09:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:08.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:09 np0005466030 podman[232403]: 2025-10-02 12:09:09.703327472 +0000 UTC m=+0.073847249 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:09:09 np0005466030 podman[232402]: 2025-10-02 12:09:09.703356944 +0000 UTC m=+0.078710533 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller)
Oct  2 08:09:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:10.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:09:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:09:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:10.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:12.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:12.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:09:13.701 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:09:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:09:13.702 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:09:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:14.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:09:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:14.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:09:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:16.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:09:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:16.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:09:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:18.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:18.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:18 np0005466030 podman[232470]: 2025-10-02 12:09:18.805044727 +0000 UTC m=+0.058381490 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Oct  2 08:09:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:20.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:20.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:21 np0005466030 podman[232490]: 2025-10-02 12:09:21.810948732 +0000 UTC m=+0.062167029 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:09:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:09:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:22.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:09:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:22.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:09:23.704 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:09:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Oct  2 08:09:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:24.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:24.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Oct  2 08:09:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:09:25.908 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:09:25.908 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:09:25.909 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:26.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:26.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:27 np0005466030 nova_compute[230518]: 2025-10-02 12:09:27.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:27 np0005466030 nova_compute[230518]: 2025-10-02 12:09:27.065 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:27 np0005466030 nova_compute[230518]: 2025-10-02 12:09:27.065 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:09:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:28 np0005466030 nova_compute[230518]: 2025-10-02 12:09:28.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:28.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:28.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:29 np0005466030 nova_compute[230518]: 2025-10-02 12:09:29.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:29 np0005466030 nova_compute[230518]: 2025-10-02 12:09:29.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:29 np0005466030 nova_compute[230518]: 2025-10-02 12:09:29.093 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:29 np0005466030 nova_compute[230518]: 2025-10-02 12:09:29.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:29 np0005466030 nova_compute[230518]: 2025-10-02 12:09:29.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:29 np0005466030 nova_compute[230518]: 2025-10-02 12:09:29.094 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:09:29 np0005466030 nova_compute[230518]: 2025-10-02 12:09:29.094 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:09:29 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1708688043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:09:29 np0005466030 nova_compute[230518]: 2025-10-02 12:09:29.510 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:29 np0005466030 nova_compute[230518]: 2025-10-02 12:09:29.657 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:09:29 np0005466030 nova_compute[230518]: 2025-10-02 12:09:29.658 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5332MB free_disk=20.986618041992188GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:09:29 np0005466030 nova_compute[230518]: 2025-10-02 12:09:29.658 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:29 np0005466030 nova_compute[230518]: 2025-10-02 12:09:29.659 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:29 np0005466030 nova_compute[230518]: 2025-10-02 12:09:29.735 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:09:29 np0005466030 nova_compute[230518]: 2025-10-02 12:09:29.735 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:09:29 np0005466030 nova_compute[230518]: 2025-10-02 12:09:29.755 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:09:30 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1664810939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:09:30 np0005466030 nova_compute[230518]: 2025-10-02 12:09:30.175 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:30 np0005466030 nova_compute[230518]: 2025-10-02 12:09:30.180 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:09:30 np0005466030 nova_compute[230518]: 2025-10-02 12:09:30.204 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:09:30 np0005466030 nova_compute[230518]: 2025-10-02 12:09:30.206 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:09:30 np0005466030 nova_compute[230518]: 2025-10-02 12:09:30.206 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:30.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:30.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:31 np0005466030 nova_compute[230518]: 2025-10-02 12:09:31.202 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:31 np0005466030 nova_compute[230518]: 2025-10-02 12:09:31.202 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:31 np0005466030 nova_compute[230518]: 2025-10-02 12:09:31.203 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:09:31 np0005466030 nova_compute[230518]: 2025-10-02 12:09:31.203 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:09:31 np0005466030 nova_compute[230518]: 2025-10-02 12:09:31.221 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:09:31 np0005466030 nova_compute[230518]: 2025-10-02 12:09:31.222 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:31 np0005466030 nova_compute[230518]: 2025-10-02 12:09:31.222 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:31 np0005466030 nova_compute[230518]: 2025-10-02 12:09:31.222 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:09:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:32.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:09:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:32.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:09:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:09:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:34.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:09:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:34.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:35 np0005466030 nova_compute[230518]: 2025-10-02 12:09:35.579 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "3affd040-669b-4cde-a697-00b991236a6c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:35 np0005466030 nova_compute[230518]: 2025-10-02 12:09:35.580 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:35 np0005466030 nova_compute[230518]: 2025-10-02 12:09:35.610 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:09:35 np0005466030 nova_compute[230518]: 2025-10-02 12:09:35.685 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:35 np0005466030 nova_compute[230518]: 2025-10-02 12:09:35.685 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:35 np0005466030 nova_compute[230518]: 2025-10-02 12:09:35.693 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:09:35 np0005466030 nova_compute[230518]: 2025-10-02 12:09:35.694 2 INFO nova.compute.claims [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:09:35 np0005466030 nova_compute[230518]: 2025-10-02 12:09:35.881 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:09:36 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1063715503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:09:36 np0005466030 nova_compute[230518]: 2025-10-02 12:09:36.304 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:36 np0005466030 nova_compute[230518]: 2025-10-02 12:09:36.309 2 DEBUG nova.compute.provider_tree [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:09:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:36.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:36 np0005466030 nova_compute[230518]: 2025-10-02 12:09:36.342 2 DEBUG nova.scheduler.client.report [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:09:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Oct  2 08:09:36 np0005466030 nova_compute[230518]: 2025-10-02 12:09:36.385 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:36 np0005466030 nova_compute[230518]: 2025-10-02 12:09:36.386 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:09:36 np0005466030 nova_compute[230518]: 2025-10-02 12:09:36.459 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:09:36 np0005466030 nova_compute[230518]: 2025-10-02 12:09:36.460 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:09:36 np0005466030 nova_compute[230518]: 2025-10-02 12:09:36.499 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:09:36 np0005466030 nova_compute[230518]: 2025-10-02 12:09:36.541 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:09:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:36.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:36 np0005466030 nova_compute[230518]: 2025-10-02 12:09:36.695 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:09:36 np0005466030 nova_compute[230518]: 2025-10-02 12:09:36.696 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:09:36 np0005466030 nova_compute[230518]: 2025-10-02 12:09:36.697 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Creating image(s)#033[00m
Oct  2 08:09:36 np0005466030 nova_compute[230518]: 2025-10-02 12:09:36.721 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 3affd040-669b-4cde-a697-00b991236a6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:36 np0005466030 nova_compute[230518]: 2025-10-02 12:09:36.743 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 3affd040-669b-4cde-a697-00b991236a6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:36 np0005466030 nova_compute[230518]: 2025-10-02 12:09:36.766 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 3affd040-669b-4cde-a697-00b991236a6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:36 np0005466030 nova_compute[230518]: 2025-10-02 12:09:36.769 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:36 np0005466030 nova_compute[230518]: 2025-10-02 12:09:36.770 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:37 np0005466030 nova_compute[230518]: 2025-10-02 12:09:37.550 2 DEBUG nova.virt.libvirt.imagebackend [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/423b8b5f-aab8-418b-8fad-d82c90818bdd/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/423b8b5f-aab8-418b-8fad-d82c90818bdd/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 08:09:37 np0005466030 nova_compute[230518]: 2025-10-02 12:09:37.909 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Automatically allocating a network for project fa15236c63df4c43bf19989029fcda0f. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460#033[00m
Oct  2 08:09:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:38.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:38.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:39 np0005466030 podman[232631]: 2025-10-02 12:09:39.797259628 +0000 UTC m=+0.045658567 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 08:09:39 np0005466030 podman[232630]: 2025-10-02 12:09:39.82513337 +0000 UTC m=+0.075906834 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct  2 08:09:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:40.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:40.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:41 np0005466030 nova_compute[230518]: 2025-10-02 12:09:41.410 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:41 np0005466030 nova_compute[230518]: 2025-10-02 12:09:41.463 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.part --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:41 np0005466030 nova_compute[230518]: 2025-10-02 12:09:41.464 2 DEBUG nova.virt.images [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] 423b8b5f-aab8-418b-8fad-d82c90818bdd was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Oct  2 08:09:41 np0005466030 nova_compute[230518]: 2025-10-02 12:09:41.466 2 DEBUG nova.privsep.utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct  2 08:09:41 np0005466030 nova_compute[230518]: 2025-10-02 12:09:41.466 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.part /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:41 np0005466030 nova_compute[230518]: 2025-10-02 12:09:41.620 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.part /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.converted" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:41 np0005466030 nova_compute[230518]: 2025-10-02 12:09:41.625 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:41 np0005466030 nova_compute[230518]: 2025-10-02 12:09:41.694 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.converted --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:41 np0005466030 nova_compute[230518]: 2025-10-02 12:09:41.695 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:41 np0005466030 nova_compute[230518]: 2025-10-02 12:09:41.718 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 3affd040-669b-4cde-a697-00b991236a6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:41 np0005466030 nova_compute[230518]: 2025-10-02 12:09:41.722 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3affd040-669b-4cde-a697-00b991236a6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:42 np0005466030 nova_compute[230518]: 2025-10-02 12:09:42.116 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3affd040-669b-4cde-a697-00b991236a6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:42 np0005466030 nova_compute[230518]: 2025-10-02 12:09:42.209 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] resizing rbd image 3affd040-669b-4cde-a697-00b991236a6c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:09:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:42.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:42 np0005466030 nova_compute[230518]: 2025-10-02 12:09:42.443 2 DEBUG nova.objects.instance [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lazy-loading 'migration_context' on Instance uuid 3affd040-669b-4cde-a697-00b991236a6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:09:42 np0005466030 nova_compute[230518]: 2025-10-02 12:09:42.470 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:09:42 np0005466030 nova_compute[230518]: 2025-10-02 12:09:42.471 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Ensure instance console log exists: /var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:09:42 np0005466030 nova_compute[230518]: 2025-10-02 12:09:42.471 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:42 np0005466030 nova_compute[230518]: 2025-10-02 12:09:42.471 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:42 np0005466030 nova_compute[230518]: 2025-10-02 12:09:42.472 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:42.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:44.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:44.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:46.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:46.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:48.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:48 np0005466030 nova_compute[230518]: 2025-10-02 12:09:48.472 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:48 np0005466030 nova_compute[230518]: 2025-10-02 12:09:48.475 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:48 np0005466030 nova_compute[230518]: 2025-10-02 12:09:48.535 2 DEBUG nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:09:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:09:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:48.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:09:48 np0005466030 nova_compute[230518]: 2025-10-02 12:09:48.656 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:48 np0005466030 nova_compute[230518]: 2025-10-02 12:09:48.657 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:48 np0005466030 nova_compute[230518]: 2025-10-02 12:09:48.665 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:09:48 np0005466030 nova_compute[230518]: 2025-10-02 12:09:48.666 2 INFO nova.compute.claims [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:09:48 np0005466030 nova_compute[230518]: 2025-10-02 12:09:48.901 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:09:49 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3418279144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:09:49 np0005466030 nova_compute[230518]: 2025-10-02 12:09:49.402 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:49 np0005466030 nova_compute[230518]: 2025-10-02 12:09:49.408 2 DEBUG nova.compute.provider_tree [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:09:49 np0005466030 nova_compute[230518]: 2025-10-02 12:09:49.492 2 ERROR nova.scheduler.client.report [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [req-4a88711d-a07c-4f50-b567-89b26fb3e9fc] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 730da6ce-9754-46f0-88e3-0019d056443f.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-4a88711d-a07c-4f50-b567-89b26fb3e9fc"}]}#033[00m
Oct  2 08:09:49 np0005466030 nova_compute[230518]: 2025-10-02 12:09:49.528 2 DEBUG nova.scheduler.client.report [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:09:49 np0005466030 nova_compute[230518]: 2025-10-02 12:09:49.595 2 DEBUG nova.scheduler.client.report [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:09:49 np0005466030 nova_compute[230518]: 2025-10-02 12:09:49.595 2 DEBUG nova.compute.provider_tree [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:09:49 np0005466030 nova_compute[230518]: 2025-10-02 12:09:49.651 2 DEBUG nova.scheduler.client.report [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:09:49 np0005466030 nova_compute[230518]: 2025-10-02 12:09:49.724 2 DEBUG nova.scheduler.client.report [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:09:49 np0005466030 podman[232821]: 2025-10-02 12:09:49.806200105 +0000 UTC m=+0.058193574 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:09:49 np0005466030 nova_compute[230518]: 2025-10-02 12:09:49.896 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:09:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:50.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:09:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:09:50 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3378946537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:09:50 np0005466030 nova_compute[230518]: 2025-10-02 12:09:50.367 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:50 np0005466030 nova_compute[230518]: 2025-10-02 12:09:50.373 2 DEBUG nova.compute.provider_tree [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:09:50 np0005466030 nova_compute[230518]: 2025-10-02 12:09:50.490 2 DEBUG nova.scheduler.client.report [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Updated inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f with generation 8 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct  2 08:09:50 np0005466030 nova_compute[230518]: 2025-10-02 12:09:50.490 2 DEBUG nova.compute.provider_tree [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Updating resource provider 730da6ce-9754-46f0-88e3-0019d056443f generation from 8 to 9 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  2 08:09:50 np0005466030 nova_compute[230518]: 2025-10-02 12:09:50.491 2 DEBUG nova.compute.provider_tree [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:09:50 np0005466030 nova_compute[230518]: 2025-10-02 12:09:50.538 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:50 np0005466030 nova_compute[230518]: 2025-10-02 12:09:50.539 2 DEBUG nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:09:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:50.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:50 np0005466030 nova_compute[230518]: 2025-10-02 12:09:50.625 2 DEBUG nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:09:50 np0005466030 nova_compute[230518]: 2025-10-02 12:09:50.626 2 DEBUG nova.network.neutron [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:09:50 np0005466030 nova_compute[230518]: 2025-10-02 12:09:50.674 2 INFO nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:09:50 np0005466030 nova_compute[230518]: 2025-10-02 12:09:50.710 2 DEBUG nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:09:50 np0005466030 nova_compute[230518]: 2025-10-02 12:09:50.868 2 DEBUG nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:09:50 np0005466030 nova_compute[230518]: 2025-10-02 12:09:50.869 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:09:50 np0005466030 nova_compute[230518]: 2025-10-02 12:09:50.870 2 INFO nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Creating image(s)#033[00m
Oct  2 08:09:50 np0005466030 nova_compute[230518]: 2025-10-02 12:09:50.905 2 DEBUG nova.storage.rbd_utils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:50 np0005466030 nova_compute[230518]: 2025-10-02 12:09:50.939 2 DEBUG nova.storage.rbd_utils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:51 np0005466030 nova_compute[230518]: 2025-10-02 12:09:51.006 2 DEBUG nova.storage.rbd_utils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:51 np0005466030 nova_compute[230518]: 2025-10-02 12:09:51.011 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:51 np0005466030 nova_compute[230518]: 2025-10-02 12:09:51.079 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:51 np0005466030 nova_compute[230518]: 2025-10-02 12:09:51.080 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:51 np0005466030 nova_compute[230518]: 2025-10-02 12:09:51.081 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:51 np0005466030 nova_compute[230518]: 2025-10-02 12:09:51.081 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:51 np0005466030 nova_compute[230518]: 2025-10-02 12:09:51.117 2 DEBUG nova.storage.rbd_utils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:09:51 np0005466030 nova_compute[230518]: 2025-10-02 12:09:51.123 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:09:51 np0005466030 nova_compute[230518]: 2025-10-02 12:09:51.389 2 WARNING oslo_policy.policy [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct  2 08:09:51 np0005466030 nova_compute[230518]: 2025-10-02 12:09:51.390 2 WARNING oslo_policy.policy [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct  2 08:09:51 np0005466030 nova_compute[230518]: 2025-10-02 12:09:51.393 2 DEBUG nova.policy [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '531ddb9812364f7b9743bd02a8ed797f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2c66662015f74444b15ea4b3d8644714', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:09:51 np0005466030 nova_compute[230518]: 2025-10-02 12:09:51.823 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.701s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:09:51 np0005466030 nova_compute[230518]: 2025-10-02 12:09:51.888 2 DEBUG nova.storage.rbd_utils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] resizing rbd image c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:09:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:52.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:52.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:52 np0005466030 podman[233011]: 2025-10-02 12:09:52.805257702 +0000 UTC m=+0.060703583 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 08:09:52 np0005466030 nova_compute[230518]: 2025-10-02 12:09:52.909 2 DEBUG nova.objects.instance [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lazy-loading 'migration_context' on Instance uuid c6cef7fd-49cb-4781-97ad-027e835dcc5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:09:52 np0005466030 nova_compute[230518]: 2025-10-02 12:09:52.933 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:09:52 np0005466030 nova_compute[230518]: 2025-10-02 12:09:52.934 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Ensure instance console log exists: /var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:09:52 np0005466030 nova_compute[230518]: 2025-10-02 12:09:52.934 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:09:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:52 np0005466030 nova_compute[230518]: 2025-10-02 12:09:52.935 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:09:52 np0005466030 nova_compute[230518]: 2025-10-02 12:09:52.935 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:09:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:54.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:09:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:54.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:09:54 np0005466030 nova_compute[230518]: 2025-10-02 12:09:54.942 2 DEBUG nova.network.neutron [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Successfully created port: 567aae3a-5019-47d2-84ba-8de1184cf4f0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:09:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:56.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:56.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:09:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:09:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:58.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:09:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:09:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:09:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:58.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:09:58 np0005466030 nova_compute[230518]: 2025-10-02 12:09:58.881 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Automatically allocated network: {'id': 'b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'name': 'auto_allocated_network', 'tenant_id': 'fa15236c63df4c43bf19989029fcda0f', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['227aa610-f00f-4cec-b799-1839354b34be', 'c08cc57a-142e-470f-ae51-6b2f21e9e17e'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2025-10-02T12:09:38Z', 'updated_at': '2025-10-02T12:09:57Z', 'revision_number': 4, 'project_id': 'fa15236c63df4c43bf19989029fcda0f'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478#033[00m
Oct  2 08:09:58 np0005466030 nova_compute[230518]: 2025-10-02 12:09:58.883 2 DEBUG nova.policy [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b81237ef015d48dfa022b6761d706e36', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fa15236c63df4c43bf19989029fcda0f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:09:59 np0005466030 nova_compute[230518]: 2025-10-02 12:09:59.272 2 DEBUG nova.network.neutron [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Successfully updated port: 567aae3a-5019-47d2-84ba-8de1184cf4f0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:09:59 np0005466030 nova_compute[230518]: 2025-10-02 12:09:59.345 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "refresh_cache-c6cef7fd-49cb-4781-97ad-027e835dcc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:09:59 np0005466030 nova_compute[230518]: 2025-10-02 12:09:59.345 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquired lock "refresh_cache-c6cef7fd-49cb-4781-97ad-027e835dcc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:09:59 np0005466030 nova_compute[230518]: 2025-10-02 12:09:59.345 2 DEBUG nova.network.neutron [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:09:59 np0005466030 nova_compute[230518]: 2025-10-02 12:09:59.796 2 DEBUG nova.network.neutron [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:09:59 np0005466030 nova_compute[230518]: 2025-10-02 12:09:59.978 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Successfully created port: 7bdea026-3636-4861-a8a9-fcb0a82509ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:10:00 np0005466030 nova_compute[230518]: 2025-10-02 12:10:00.001 2 DEBUG nova.compute.manager [req-bc60192d-97bf-4db1-8e6a-a2e0aa59ad1f req-7847210c-0bfd-4c3d-914c-29293e72431e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received event network-changed-567aae3a-5019-47d2-84ba-8de1184cf4f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:00 np0005466030 nova_compute[230518]: 2025-10-02 12:10:00.001 2 DEBUG nova.compute.manager [req-bc60192d-97bf-4db1-8e6a-a2e0aa59ad1f req-7847210c-0bfd-4c3d-914c-29293e72431e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Refreshing instance network info cache due to event network-changed-567aae3a-5019-47d2-84ba-8de1184cf4f0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:10:00 np0005466030 nova_compute[230518]: 2025-10-02 12:10:00.001 2 DEBUG oslo_concurrency.lockutils [req-bc60192d-97bf-4db1-8e6a-a2e0aa59ad1f req-7847210c-0bfd-4c3d-914c-29293e72431e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c6cef7fd-49cb-4781-97ad-027e835dcc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:10:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:00.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:00.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:00 np0005466030 ceph-mon[80926]: overall HEALTH_OK
Oct  2 08:10:01 np0005466030 nova_compute[230518]: 2025-10-02 12:10:01.719 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Successfully updated port: 7bdea026-3636-4861-a8a9-fcb0a82509ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:10:01 np0005466030 nova_compute[230518]: 2025-10-02 12:10:01.759 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "refresh_cache-3affd040-669b-4cde-a697-00b991236a6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:10:01 np0005466030 nova_compute[230518]: 2025-10-02 12:10:01.759 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquired lock "refresh_cache-3affd040-669b-4cde-a697-00b991236a6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:10:01 np0005466030 nova_compute[230518]: 2025-10-02 12:10:01.759 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.164 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.248 2 DEBUG nova.network.neutron [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Updating instance_info_cache with network_info: [{"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.281 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Releasing lock "refresh_cache-c6cef7fd-49cb-4781-97ad-027e835dcc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.282 2 DEBUG nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Instance network_info: |[{"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.283 2 DEBUG oslo_concurrency.lockutils [req-bc60192d-97bf-4db1-8e6a-a2e0aa59ad1f req-7847210c-0bfd-4c3d-914c-29293e72431e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c6cef7fd-49cb-4781-97ad-027e835dcc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.283 2 DEBUG nova.network.neutron [req-bc60192d-97bf-4db1-8e6a-a2e0aa59ad1f req-7847210c-0bfd-4c3d-914c-29293e72431e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Refreshing network info cache for port 567aae3a-5019-47d2-84ba-8de1184cf4f0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.286 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Start _get_guest_xml network_info=[{"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.291 2 WARNING nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.297 2 DEBUG nova.virt.libvirt.host [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.299 2 DEBUG nova.virt.libvirt.host [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.302 2 DEBUG nova.virt.libvirt.host [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.303 2 DEBUG nova.virt.libvirt.host [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.304 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.304 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:09:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='718594175',id=15,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-1977279530',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.305 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.305 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.305 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.305 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.306 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.306 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.306 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.306 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.307 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.308 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.311 2 DEBUG nova.privsep.utils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.312 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:02.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.505 2 DEBUG nova.compute.manager [req-094a2ef0-262a-487f-9573-b72c94519ff9 req-af799a0b-67e1-4e69-a33c-0a2a76945d14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Received event network-changed-7bdea026-3636-4861-a8a9-fcb0a82509ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.505 2 DEBUG nova.compute.manager [req-094a2ef0-262a-487f-9573-b72c94519ff9 req-af799a0b-67e1-4e69-a33c-0a2a76945d14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Refreshing instance network info cache due to event network-changed-7bdea026-3636-4861-a8a9-fcb0a82509ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.506 2 DEBUG oslo_concurrency.lockutils [req-094a2ef0-262a-487f-9573-b72c94519ff9 req-af799a0b-67e1-4e69-a33c-0a2a76945d14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3affd040-669b-4cde-a697-00b991236a6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:10:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:02.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:10:02 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1286063853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.812 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.843 2 DEBUG nova.storage.rbd_utils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:02 np0005466030 nova_compute[230518]: 2025-10-02 12:10:02.848 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:10:03 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3848430570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.304 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.306 2 DEBUG nova.virt.libvirt.vif [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:09:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-870505644',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-870505644',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(15),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-870505644',id=5,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=15,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZIzbMfwVBTTToNPNrnTuckoO8kg28OkEFKvLyHQzGuKrzHQ5Xu2/PJVR0z9htMcy/llPoN2mM4eTO6OIHrSZOwjPe/taZdTaEhmzjh34Ak2Vyd+nZrFG8VSiYQyffl8g==',key_name='tempest-keypair-1186587753',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2c66662015f74444b15ea4b3d8644714',ramdisk_id='',reservation_id='r-e4cgmrkv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-957372394',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-957372394-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:09:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='531ddb9812364f7b9743bd02a8ed797f',uuid=c6cef7fd-49cb-4781-97ad-027e835dcc5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.306 2 DEBUG nova.network.os_vif_util [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converting VIF {"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.307 2 DEBUG nova.network.os_vif_util [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:e0:d8,bridge_name='br-int',has_traffic_filtering=True,id=567aae3a-5019-47d2-84ba-8de1184cf4f0,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap567aae3a-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.309 2 DEBUG nova.objects.instance [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lazy-loading 'pci_devices' on Instance uuid c6cef7fd-49cb-4781-97ad-027e835dcc5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.341 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:10:03 np0005466030 nova_compute[230518]:  <uuid>c6cef7fd-49cb-4781-97ad-027e835dcc5c</uuid>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:  <name>instance-00000005</name>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-870505644</nova:name>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:10:02</nova:creationTime>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <nova:flavor name="tempest-flavor_with_ephemeral_0-1977279530">
Oct  2 08:10:03 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:        <nova:user uuid="531ddb9812364f7b9743bd02a8ed797f">tempest-ServersWithSpecificFlavorTestJSON-957372394-project-member</nova:user>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:        <nova:project uuid="2c66662015f74444b15ea4b3d8644714">tempest-ServersWithSpecificFlavorTestJSON-957372394</nova:project>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:        <nova:port uuid="567aae3a-5019-47d2-84ba-8de1184cf4f0">
Oct  2 08:10:03 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <entry name="serial">c6cef7fd-49cb-4781-97ad-027e835dcc5c</entry>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <entry name="uuid">c6cef7fd-49cb-4781-97ad-027e835dcc5c</entry>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk">
Oct  2 08:10:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:10:03 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk.config">
Oct  2 08:10:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:10:03 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:37:e0:d8"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <target dev="tap567aae3a-50"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c/console.log" append="off"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:10:03 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:10:03 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:10:03 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:10:03 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.343 2 DEBUG nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Preparing to wait for external event network-vif-plugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.343 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.343 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.344 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.344 2 DEBUG nova.virt.libvirt.vif [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:09:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-870505644',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-870505644',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(15),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-870505644',id=5,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=15,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZIzbMfwVBTTToNPNrnTuckoO8kg28OkEFKvLyHQzGuKrzHQ5Xu2/PJVR0z9htMcy/llPoN2mM4eTO6OIHrSZOwjPe/taZdTaEhmzjh34Ak2Vyd+nZrFG8VSiYQyffl8g==',key_name='tempest-keypair-1186587753',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2c66662015f74444b15ea4b3d8644714',ramdisk_id='',reservation_id='r-e4cgmrkv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-957372394',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-957372394-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:09:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='531ddb9812364f7b9743bd02a8ed797f',uuid=c6cef7fd-49cb-4781-97ad-027e835dcc5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.344 2 DEBUG nova.network.os_vif_util [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converting VIF {"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.345 2 DEBUG nova.network.os_vif_util [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:e0:d8,bridge_name='br-int',has_traffic_filtering=True,id=567aae3a-5019-47d2-84ba-8de1184cf4f0,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap567aae3a-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.346 2 DEBUG os_vif [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:e0:d8,bridge_name='br-int',has_traffic_filtering=True,id=567aae3a-5019-47d2-84ba-8de1184cf4f0,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap567aae3a-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.419 2 DEBUG ovsdbapp.backend.ovs_idl [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.420 2 DEBUG ovsdbapp.backend.ovs_idl [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.420 2 DEBUG ovsdbapp.backend.ovs_idl [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:10:03 np0005466030 nova_compute[230518]: 2025-10-02 12:10:03.440 2 INFO oslo.privsep.daemon [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmprcfvmz_2/privsep.sock']#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.161 2 INFO oslo.privsep.daemon [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.028 732 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.032 732 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.034 732 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.034 732 INFO oslo.privsep.daemon [-] privsep daemon running as pid 732#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.322 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Updating instance_info_cache with network_info: [{"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:10:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:04.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.363 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Releasing lock "refresh_cache-3affd040-669b-4cde-a697-00b991236a6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.364 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Instance network_info: |[{"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.364 2 DEBUG oslo_concurrency.lockutils [req-094a2ef0-262a-487f-9573-b72c94519ff9 req-af799a0b-67e1-4e69-a33c-0a2a76945d14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3affd040-669b-4cde-a697-00b991236a6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.365 2 DEBUG nova.network.neutron [req-094a2ef0-262a-487f-9573-b72c94519ff9 req-af799a0b-67e1-4e69-a33c-0a2a76945d14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Refreshing network info cache for port 7bdea026-3636-4861-a8a9-fcb0a82509ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.369 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Start _get_guest_xml network_info=[{"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.373 2 WARNING nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.378 2 DEBUG nova.virt.libvirt.host [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.379 2 DEBUG nova.virt.libvirt.host [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.382 2 DEBUG nova.virt.libvirt.host [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.383 2 DEBUG nova.virt.libvirt.host [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.385 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.385 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.386 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.386 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.386 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.387 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.387 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.387 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.387 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.388 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.388 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.388 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.391 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap567aae3a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.518 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap567aae3a-50, col_values=(('external_ids', {'iface-id': '567aae3a-5019-47d2-84ba-8de1184cf4f0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:e0:d8', 'vm-uuid': 'c6cef7fd-49cb-4781-97ad-027e835dcc5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:04 np0005466030 NetworkManager[44960]: <info>  [1759407004.5205] manager: (tap567aae3a-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.530 2 INFO os_vif [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:e0:d8,bridge_name='br-int',has_traffic_filtering=True,id=567aae3a-5019-47d2-84ba-8de1184cf4f0,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap567aae3a-50')#033[00m
Oct  2 08:10:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:10:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:04.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.618 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.618 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.618 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] No VIF found with MAC fa:16:3e:37:e0:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.619 2 INFO nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Using config drive#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.650 2 DEBUG nova.storage.rbd_utils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:10:04 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3716205795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.890 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.923 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 3affd040-669b-4cde-a697-00b991236a6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:04 np0005466030 nova_compute[230518]: 2025-10-02 12:10:04.928 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:10:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2184049641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:10:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:10:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2184049641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:10:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:10:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2617089515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.417 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.419 2 DEBUG nova.virt.libvirt.vif [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1858146006-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1858146006-2',id=3,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fa15236c63df4c43bf19989029fcda0f',ramdisk_id='',reservation_id='r-b33bb0il',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1017519520',owner_user_name='tempest-AutoAllocateNetworkTest-1017519520-project-member'},tags=T
agList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:09:36Z,user_data=None,user_id='b81237ef015d48dfa022b6761d706e36',uuid=3affd040-669b-4cde-a697-00b991236a6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.419 2 DEBUG nova.network.os_vif_util [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converting VIF {"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.420 2 DEBUG nova.network.os_vif_util [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:60:3e,bridge_name='br-int',has_traffic_filtering=True,id=7bdea026-3636-4861-a8a9-fcb0a82509ad,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bdea026-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.422 2 DEBUG nova.objects.instance [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lazy-loading 'pci_devices' on Instance uuid 3affd040-669b-4cde-a697-00b991236a6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.450 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:10:05 np0005466030 nova_compute[230518]:  <uuid>3affd040-669b-4cde-a697-00b991236a6c</uuid>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:  <name>instance-00000003</name>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <nova:name>tempest-tempest.common.compute-instance-1858146006-2</nova:name>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:10:04</nova:creationTime>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:10:05 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:        <nova:user uuid="b81237ef015d48dfa022b6761d706e36">tempest-AutoAllocateNetworkTest-1017519520-project-member</nova:user>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:        <nova:project uuid="fa15236c63df4c43bf19989029fcda0f">tempest-AutoAllocateNetworkTest-1017519520</nova:project>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:        <nova:port uuid="7bdea026-3636-4861-a8a9-fcb0a82509ad">
Oct  2 08:10:05 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="fdfe:381f:8400:1::7b" ipVersion="6"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.1.0.87" ipVersion="4"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <entry name="serial">3affd040-669b-4cde-a697-00b991236a6c</entry>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <entry name="uuid">3affd040-669b-4cde-a697-00b991236a6c</entry>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/3affd040-669b-4cde-a697-00b991236a6c_disk">
Oct  2 08:10:05 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:10:05 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/3affd040-669b-4cde-a697-00b991236a6c_disk.config">
Oct  2 08:10:05 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:10:05 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:46:60:3e"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <target dev="tap7bdea026-36"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c/console.log" append="off"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:10:05 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:10:05 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:10:05 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:10:05 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.452 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Preparing to wait for external event network-vif-plugged-7bdea026-3636-4861-a8a9-fcb0a82509ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.452 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "3affd040-669b-4cde-a697-00b991236a6c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.452 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.453 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.453 2 DEBUG nova.virt.libvirt.vif [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1858146006-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1858146006-2',id=3,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fa15236c63df4c43bf19989029fcda0f',ramdisk_id='',reservation_id='r-b33bb0il',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1017519520',owner_user_name='tempest-AutoAllocateNetworkTest-1017519520-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:09:36Z,user_data=None,user_id='b81237ef015d48dfa022b6761d706e36',uuid=3affd040-669b-4cde-a697-00b991236a6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.454 2 DEBUG nova.network.os_vif_util [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converting VIF {"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.454 2 DEBUG nova.network.os_vif_util [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:60:3e,bridge_name='br-int',has_traffic_filtering=True,id=7bdea026-3636-4861-a8a9-fcb0a82509ad,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bdea026-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.455 2 DEBUG os_vif [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:60:3e,bridge_name='br-int',has_traffic_filtering=True,id=7bdea026-3636-4861-a8a9-fcb0a82509ad,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bdea026-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.456 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.456 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.459 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7bdea026-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.460 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7bdea026-36, col_values=(('external_ids', {'iface-id': '7bdea026-3636-4861-a8a9-fcb0a82509ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:60:3e', 'vm-uuid': '3affd040-669b-4cde-a697-00b991236a6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:05 np0005466030 NetworkManager[44960]: <info>  [1759407005.4628] manager: (tap7bdea026-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.470 2 INFO os_vif [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:60:3e,bridge_name='br-int',has_traffic_filtering=True,id=7bdea026-3636-4861-a8a9-fcb0a82509ad,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bdea026-36')#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.476 2 DEBUG nova.network.neutron [req-bc60192d-97bf-4db1-8e6a-a2e0aa59ad1f req-7847210c-0bfd-4c3d-914c-29293e72431e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Updated VIF entry in instance network info cache for port 567aae3a-5019-47d2-84ba-8de1184cf4f0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.477 2 DEBUG nova.network.neutron [req-bc60192d-97bf-4db1-8e6a-a2e0aa59ad1f req-7847210c-0bfd-4c3d-914c-29293e72431e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Updating instance_info_cache with network_info: [{"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.530 2 DEBUG oslo_concurrency.lockutils [req-bc60192d-97bf-4db1-8e6a-a2e0aa59ad1f req-7847210c-0bfd-4c3d-914c-29293e72431e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c6cef7fd-49cb-4781-97ad-027e835dcc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.548 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.549 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.549 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] No VIF found with MAC fa:16:3e:46:60:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.549 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Using config drive#033[00m
Oct  2 08:10:05 np0005466030 nova_compute[230518]: 2025-10-02 12:10:05.572 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 3affd040-669b-4cde-a697-00b991236a6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:06 np0005466030 nova_compute[230518]: 2025-10-02 12:10:06.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:10:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:06.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:10:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:06.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:06 np0005466030 nova_compute[230518]: 2025-10-02 12:10:06.684 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Creating config drive at /var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c/disk.config#033[00m
Oct  2 08:10:06 np0005466030 nova_compute[230518]: 2025-10-02 12:10:06.689 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7zxw_yai execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:06 np0005466030 nova_compute[230518]: 2025-10-02 12:10:06.707 2 INFO nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Creating config drive at /var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c/disk.config#033[00m
Oct  2 08:10:06 np0005466030 nova_compute[230518]: 2025-10-02 12:10:06.713 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpog__2ued execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:06 np0005466030 nova_compute[230518]: 2025-10-02 12:10:06.812 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7zxw_yai" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:06 np0005466030 nova_compute[230518]: 2025-10-02 12:10:06.844 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 3affd040-669b-4cde-a697-00b991236a6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:06 np0005466030 nova_compute[230518]: 2025-10-02 12:10:06.851 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c/disk.config 3affd040-669b-4cde-a697-00b991236a6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:06 np0005466030 nova_compute[230518]: 2025-10-02 12:10:06.871 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpog__2ued" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:06 np0005466030 nova_compute[230518]: 2025-10-02 12:10:06.909 2 DEBUG nova.storage.rbd_utils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:06 np0005466030 nova_compute[230518]: 2025-10-02 12:10:06.914 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c/disk.config c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:07 np0005466030 nova_compute[230518]: 2025-10-02 12:10:07.013 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c/disk.config 3affd040-669b-4cde-a697-00b991236a6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:07 np0005466030 nova_compute[230518]: 2025-10-02 12:10:07.014 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Deleting local config drive /var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c/disk.config because it was imported into RBD.#033[00m
Oct  2 08:10:07 np0005466030 systemd[1]: Starting libvirt secret daemon...
Oct  2 08:10:07 np0005466030 systemd[1]: Started libvirt secret daemon.
Oct  2 08:10:07 np0005466030 nova_compute[230518]: 2025-10-02 12:10:07.091 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c/disk.config c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:07 np0005466030 nova_compute[230518]: 2025-10-02 12:10:07.091 2 INFO nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Deleting local config drive /var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c/disk.config because it was imported into RBD.#033[00m
Oct  2 08:10:07 np0005466030 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct  2 08:10:07 np0005466030 kernel: tap7bdea026-36: entered promiscuous mode
Oct  2 08:10:07 np0005466030 NetworkManager[44960]: <info>  [1759407007.1449] manager: (tap7bdea026-36): new Tun device (/org/freedesktop/NetworkManager/Devices/27)
Oct  2 08:10:07 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:07Z|00027|binding|INFO|Claiming lport 7bdea026-3636-4861-a8a9-fcb0a82509ad for this chassis.
Oct  2 08:10:07 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:07Z|00028|binding|INFO|7bdea026-3636-4861-a8a9-fcb0a82509ad: Claiming fa:16:3e:46:60:3e 10.1.0.87 fdfe:381f:8400:1::7b
Oct  2 08:10:07 np0005466030 nova_compute[230518]: 2025-10-02 12:10:07.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:07 np0005466030 kernel: tap567aae3a-50: entered promiscuous mode
Oct  2 08:10:07 np0005466030 NetworkManager[44960]: <info>  [1759407007.1546] manager: (tap567aae3a-50): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Oct  2 08:10:07 np0005466030 nova_compute[230518]: 2025-10-02 12:10:07.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:07 np0005466030 nova_compute[230518]: 2025-10-02 12:10:07.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:07 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:07Z|00029|if_status|INFO|Not updating pb chassis for 567aae3a-5019-47d2-84ba-8de1184cf4f0 now as sb is readonly
Oct  2 08:10:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.167 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:60:3e 10.1.0.87 fdfe:381f:8400:1::7b'], port_security=['fa:16:3e:46:60:3e 10.1.0.87 fdfe:381f:8400:1::7b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.87/26 fdfe:381f:8400:1::7b/64', 'neutron:device_id': '3affd040-669b-4cde-a697-00b991236a6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fa15236c63df4c43bf19989029fcda0f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8e3feb76-9212-430e-bcfa-0b85f7aedc4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1382d266-669c-46c5-981d-23fbe67f9508, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=7bdea026-3636-4861-a8a9-fcb0a82509ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:10:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.168 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 7bdea026-3636-4861-a8a9-fcb0a82509ad in datapath b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 bound to our chassis#033[00m
Oct  2 08:10:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.170 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b4aadb38-89a4-463f-b7b5-8bb4dcce7d32#033[00m
Oct  2 08:10:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.171 138374 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpaltl5tih/privsep.sock']#033[00m
Oct  2 08:10:07 np0005466030 systemd-udevd[233349]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:10:07 np0005466030 systemd-udevd[233350]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:10:07 np0005466030 NetworkManager[44960]: <info>  [1759407007.2085] device (tap567aae3a-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:10:07 np0005466030 NetworkManager[44960]: <info>  [1759407007.2095] device (tap7bdea026-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:10:07 np0005466030 NetworkManager[44960]: <info>  [1759407007.2102] device (tap567aae3a-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:10:07 np0005466030 NetworkManager[44960]: <info>  [1759407007.2108] device (tap7bdea026-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:10:07 np0005466030 systemd-machined[188247]: New machine qemu-1-instance-00000003.
Oct  2 08:10:07 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:07Z|00030|binding|INFO|Claiming lport 567aae3a-5019-47d2-84ba-8de1184cf4f0 for this chassis.
Oct  2 08:10:07 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:07Z|00031|binding|INFO|567aae3a-5019-47d2-84ba-8de1184cf4f0: Claiming fa:16:3e:37:e0:d8 10.100.0.9
Oct  2 08:10:07 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:07Z|00032|binding|INFO|Setting lport 7bdea026-3636-4861-a8a9-fcb0a82509ad ovn-installed in OVS
Oct  2 08:10:07 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:07Z|00033|binding|INFO|Setting lport 7bdea026-3636-4861-a8a9-fcb0a82509ad up in Southbound
Oct  2 08:10:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.259 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:e0:d8 10.100.0.9'], port_security=['fa:16:3e:37:e0:d8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c6cef7fd-49cb-4781-97ad-027e835dcc5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38c94475-c52a-421c-9bc8-95fdc649b043', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2c66662015f74444b15ea4b3d8644714', 'neutron:revision_number': '2', 'neutron:security_group_ids': '92a0a32f-072c-4975-aa02-ea951ff6e560', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ca6fa6e-a437-4756-b504-a14a9bfa02d8, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=567aae3a-5019-47d2-84ba-8de1184cf4f0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:10:07 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:07Z|00034|binding|INFO|Setting lport 567aae3a-5019-47d2-84ba-8de1184cf4f0 up in Southbound
Oct  2 08:10:07 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:07Z|00035|binding|INFO|Setting lport 567aae3a-5019-47d2-84ba-8de1184cf4f0 ovn-installed in OVS
Oct  2 08:10:07 np0005466030 systemd[1]: Started Virtual Machine qemu-1-instance-00000003.
Oct  2 08:10:07 np0005466030 nova_compute[230518]: 2025-10-02 12:10:07.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:07 np0005466030 systemd-machined[188247]: New machine qemu-2-instance-00000005.
Oct  2 08:10:07 np0005466030 systemd[1]: Started Virtual Machine qemu-2-instance-00000005.
Oct  2 08:10:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.911 138374 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  2 08:10:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.912 138374 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpaltl5tih/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  2 08:10:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.772 233418 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  2 08:10:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.776 233418 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  2 08:10:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.778 233418 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Oct  2 08:10:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.778 233418 INFO oslo.privsep.daemon [-] privsep daemon running as pid 233418#033[00m
Oct  2 08:10:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.914 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d201059d-8230-49cb-ae85-289c974a3f6e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.211 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407008.2105086, 3affd040-669b-4cde-a697-00b991236a6c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.213 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] VM Started (Lifecycle Event)#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.250 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.255 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407008.2121518, 3affd040-669b-4cde-a697-00b991236a6c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.255 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.288 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.291 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.308 2 DEBUG nova.compute.manager [req-8813b66d-677c-41e5-a7ed-61c5d7fc13d9 req-3e861485-999f-4c01-9924-a01e4daad94e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received event network-vif-plugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.308 2 DEBUG oslo_concurrency.lockutils [req-8813b66d-677c-41e5-a7ed-61c5d7fc13d9 req-3e861485-999f-4c01-9924-a01e4daad94e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.309 2 DEBUG oslo_concurrency.lockutils [req-8813b66d-677c-41e5-a7ed-61c5d7fc13d9 req-3e861485-999f-4c01-9924-a01e4daad94e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.309 2 DEBUG oslo_concurrency.lockutils [req-8813b66d-677c-41e5-a7ed-61c5d7fc13d9 req-3e861485-999f-4c01-9924-a01e4daad94e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.310 2 DEBUG nova.compute.manager [req-8813b66d-677c-41e5-a7ed-61c5d7fc13d9 req-3e861485-999f-4c01-9924-a01e4daad94e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Processing event network-vif-plugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.325 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:10:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:08.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.439 2 DEBUG nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.440 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407008.4391465, c6cef7fd-49cb-4781-97ad-027e835dcc5c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.440 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] VM Started (Lifecycle Event)#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.451 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.455 2 INFO nova.virt.libvirt.driver [-] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Instance spawned successfully.#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.455 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.490 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.494 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:10:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:08.550 233418 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:08.550 233418 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:08.551 233418 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.601 2 DEBUG nova.compute.manager [req-1cd820f4-72ba-4ea8-998f-073c354b7897 req-6fda91be-b0b0-4ed3-a5c4-d8bd72df44c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Received event network-vif-plugged-7bdea026-3636-4861-a8a9-fcb0a82509ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.601 2 DEBUG oslo_concurrency.lockutils [req-1cd820f4-72ba-4ea8-998f-073c354b7897 req-6fda91be-b0b0-4ed3-a5c4-d8bd72df44c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3affd040-669b-4cde-a697-00b991236a6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.602 2 DEBUG oslo_concurrency.lockutils [req-1cd820f4-72ba-4ea8-998f-073c354b7897 req-6fda91be-b0b0-4ed3-a5c4-d8bd72df44c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.603 2 DEBUG oslo_concurrency.lockutils [req-1cd820f4-72ba-4ea8-998f-073c354b7897 req-6fda91be-b0b0-4ed3-a5c4-d8bd72df44c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.603 2 DEBUG nova.compute.manager [req-1cd820f4-72ba-4ea8-998f-073c354b7897 req-6fda91be-b0b0-4ed3-a5c4-d8bd72df44c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Processing event network-vif-plugged-7bdea026-3636-4861-a8a9-fcb0a82509ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.604 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:10:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:10:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:08.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.609 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.615 2 INFO nova.virt.libvirt.driver [-] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Instance spawned successfully.#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.616 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.684 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.685 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407008.439394, c6cef7fd-49cb-4781-97ad-027e835dcc5c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.685 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.690 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.691 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.691 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.691 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.692 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.692 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.697 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.697 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.699 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.699 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.700 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.700 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.739 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.743 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407008.4514744, c6cef7fd-49cb-4781-97ad-027e835dcc5c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.743 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.831 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.834 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.852 2 INFO nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Took 32.16 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.853 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.902 2 INFO nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Took 18.03 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.903 2 DEBUG nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.903 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.904 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407008.6078126, 3affd040-669b-4cde-a697-00b991236a6c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.904 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.964 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.969 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:10:08 np0005466030 nova_compute[230518]: 2025-10-02 12:10:08.985 2 INFO nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Took 33.32 seconds to build instance.#033[00m
Oct  2 08:10:09 np0005466030 nova_compute[230518]: 2025-10-02 12:10:09.000 2 INFO nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Took 20.39 seconds to build instance.#033[00m
Oct  2 08:10:09 np0005466030 nova_compute[230518]: 2025-10-02 12:10:09.002 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 33.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:09 np0005466030 nova_compute[230518]: 2025-10-02 12:10:09.016 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:09 np0005466030 nova_compute[230518]: 2025-10-02 12:10:09.127 2 DEBUG nova.network.neutron [req-094a2ef0-262a-487f-9573-b72c94519ff9 req-af799a0b-67e1-4e69-a33c-0a2a76945d14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Updated VIF entry in instance network info cache for port 7bdea026-3636-4861-a8a9-fcb0a82509ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:10:09 np0005466030 nova_compute[230518]: 2025-10-02 12:10:09.128 2 DEBUG nova.network.neutron [req-094a2ef0-262a-487f-9573-b72c94519ff9 req-af799a0b-67e1-4e69-a33c-0a2a76945d14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Updating instance_info_cache with network_info: [{"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:10:09 np0005466030 nova_compute[230518]: 2025-10-02 12:10:09.149 2 DEBUG oslo_concurrency.lockutils [req-094a2ef0-262a-487f-9573-b72c94519ff9 req-af799a0b-67e1-4e69-a33c-0a2a76945d14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3affd040-669b-4cde-a697-00b991236a6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:10:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:09.349 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[de39ce20-631b-4da3-8c1f-f1e2ebbb20ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:09.351 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb4aadb38-81 in ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:10:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:09.354 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb4aadb38-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:10:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:09.354 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2087b854-4568-4537-96ed-4ee0e1359e58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:09.359 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7f6f6a2c-d476-49e2-a0bd-49a5bf7c9f5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:09.386 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b13d56-7c83-4c21-8c42-4361acafd90f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:09.406 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e2586f1e-4d1b-4a16-8827-d0405fe0f11e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:09.409 138374 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpvfaek5j7/privsep.sock']#033[00m
Oct  2 08:10:10 np0005466030 podman[233499]: 2025-10-02 12:10:10.02818553 +0000 UTC m=+0.063607815 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:10:10 np0005466030 podman[233498]: 2025-10-02 12:10:10.049338929 +0000 UTC m=+0.094847883 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:10:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.201 138374 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  2 08:10:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.202 138374 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpvfaek5j7/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  2 08:10:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.048 233568 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  2 08:10:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.052 233568 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  2 08:10:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.055 233568 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct  2 08:10:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.055 233568 INFO oslo.privsep.daemon [-] privsep daemon running as pid 233568#033[00m
Oct  2 08:10:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.207 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[34997fdb-7258-45d7-a607-00378d329731]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:10.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:10 np0005466030 nova_compute[230518]: 2025-10-02 12:10:10.448 2 DEBUG nova.compute.manager [req-915b3929-4056-4ea1-a951-127a84566b5d req-dd77c89b-1b26-4395-bc95-0046a7e36d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received event network-vif-plugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:10 np0005466030 nova_compute[230518]: 2025-10-02 12:10:10.449 2 DEBUG oslo_concurrency.lockutils [req-915b3929-4056-4ea1-a951-127a84566b5d req-dd77c89b-1b26-4395-bc95-0046a7e36d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:10 np0005466030 nova_compute[230518]: 2025-10-02 12:10:10.450 2 DEBUG oslo_concurrency.lockutils [req-915b3929-4056-4ea1-a951-127a84566b5d req-dd77c89b-1b26-4395-bc95-0046a7e36d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:10 np0005466030 nova_compute[230518]: 2025-10-02 12:10:10.450 2 DEBUG oslo_concurrency.lockutils [req-915b3929-4056-4ea1-a951-127a84566b5d req-dd77c89b-1b26-4395-bc95-0046a7e36d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:10 np0005466030 nova_compute[230518]: 2025-10-02 12:10:10.452 2 DEBUG nova.compute.manager [req-915b3929-4056-4ea1-a951-127a84566b5d req-dd77c89b-1b26-4395-bc95-0046a7e36d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] No waiting events found dispatching network-vif-plugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:10:10 np0005466030 nova_compute[230518]: 2025-10-02 12:10:10.452 2 WARNING nova.compute.manager [req-915b3929-4056-4ea1-a951-127a84566b5d req-dd77c89b-1b26-4395-bc95-0046a7e36d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received unexpected event network-vif-plugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:10:10 np0005466030 nova_compute[230518]: 2025-10-02 12:10:10.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:10:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:10.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:10:10 np0005466030 podman[233695]: 2025-10-02 12:10:10.761927359 +0000 UTC m=+0.134952683 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  2 08:10:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.867 233568 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.867 233568 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.868 233568 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:10 np0005466030 podman[233695]: 2025-10-02 12:10:10.869624459 +0000 UTC m=+0.242649753 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  2 08:10:11 np0005466030 nova_compute[230518]: 2025-10-02 12:10:11.034 2 DEBUG nova.compute.manager [req-a1ece650-5f91-48a6-a401-4bcf0f4873b7 req-b2482f8e-40da-4b5f-bb32-c6ed6eb036e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Received event network-vif-plugged-7bdea026-3636-4861-a8a9-fcb0a82509ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:11 np0005466030 nova_compute[230518]: 2025-10-02 12:10:11.035 2 DEBUG oslo_concurrency.lockutils [req-a1ece650-5f91-48a6-a401-4bcf0f4873b7 req-b2482f8e-40da-4b5f-bb32-c6ed6eb036e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3affd040-669b-4cde-a697-00b991236a6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:11 np0005466030 nova_compute[230518]: 2025-10-02 12:10:11.036 2 DEBUG oslo_concurrency.lockutils [req-a1ece650-5f91-48a6-a401-4bcf0f4873b7 req-b2482f8e-40da-4b5f-bb32-c6ed6eb036e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:11 np0005466030 nova_compute[230518]: 2025-10-02 12:10:11.036 2 DEBUG oslo_concurrency.lockutils [req-a1ece650-5f91-48a6-a401-4bcf0f4873b7 req-b2482f8e-40da-4b5f-bb32-c6ed6eb036e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:11 np0005466030 nova_compute[230518]: 2025-10-02 12:10:11.036 2 DEBUG nova.compute.manager [req-a1ece650-5f91-48a6-a401-4bcf0f4873b7 req-b2482f8e-40da-4b5f-bb32-c6ed6eb036e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] No waiting events found dispatching network-vif-plugged-7bdea026-3636-4861-a8a9-fcb0a82509ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:10:11 np0005466030 nova_compute[230518]: 2025-10-02 12:10:11.036 2 WARNING nova.compute.manager [req-a1ece650-5f91-48a6-a401-4bcf0f4873b7 req-b2482f8e-40da-4b5f-bb32-c6ed6eb036e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Received unexpected event network-vif-plugged-7bdea026-3636-4861-a8a9-fcb0a82509ad for instance with vm_state active and task_state None.#033[00m
Oct  2 08:10:11 np0005466030 nova_compute[230518]: 2025-10-02 12:10:11.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.555 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[88fac29d-1a92-4341-87c8-c727163d4d4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:11 np0005466030 NetworkManager[44960]: <info>  [1759407011.5650] manager: (tapb4aadb38-80): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.562 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[03266333-c53d-4547-a6b3-3ea9580cc2f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:11 np0005466030 systemd-udevd[233812]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.592 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ef223854-ecfc-4ad7-998d-72aa5e66cf8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.599 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[82b5c6e9-9408-450c-85e9-1be284542bd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.622119) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011622185, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1714, "num_deletes": 251, "total_data_size": 3948955, "memory_usage": 4012552, "flush_reason": "Manual Compaction"}
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Oct  2 08:10:11 np0005466030 NetworkManager[44960]: <info>  [1759407011.6257] device (tapb4aadb38-80): carrier: link connected
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011634304, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 1572669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20073, "largest_seqno": 21782, "table_properties": {"data_size": 1567204, "index_size": 2669, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14134, "raw_average_key_size": 20, "raw_value_size": 1555080, "raw_average_value_size": 2263, "num_data_blocks": 120, "num_entries": 687, "num_filter_entries": 687, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406865, "oldest_key_time": 1759406865, "file_creation_time": 1759407011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 12214 microseconds, and 5974 cpu microseconds.
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.633 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[16072fe6-1b28-4952-8ad0-cf5a2e499fae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.634349) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 1572669 bytes OK
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.634368) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.636082) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.636096) EVENT_LOG_v1 {"time_micros": 1759407011636092, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.636112) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 3941074, prev total WAL file size 3941074, number of live WAL files 2.
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.637057) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(1535KB)], [39(9411KB)]
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011637105, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 11210206, "oldest_snapshot_seqno": -1}
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.654 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f74d128e-1d74-4295-b5e3-792d3e6f0353]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4aadb38-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:b6:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487518, 'reachable_time': 43647, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 233831, 'error': None, 'target': 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.670 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8d95f30c-8c52-4872-8780-e9125a9aa6c3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:b633'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 487518, 'tstamp': 487518}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 233832, 'error': None, 'target': 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 4675 keys, 8370051 bytes, temperature: kUnknown
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011683308, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 8370051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8338763, "index_size": 18506, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11717, "raw_key_size": 116178, "raw_average_key_size": 24, "raw_value_size": 8254017, "raw_average_value_size": 1765, "num_data_blocks": 765, "num_entries": 4675, "num_filter_entries": 4675, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759407011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.683640) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8370051 bytes
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.684980) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 241.6 rd, 180.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.2 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(12.5) write-amplify(5.3) OK, records in: 5127, records dropped: 452 output_compression: NoCompression
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.685002) EVENT_LOG_v1 {"time_micros": 1759407011684991, "job": 22, "event": "compaction_finished", "compaction_time_micros": 46393, "compaction_time_cpu_micros": 17752, "output_level": 6, "num_output_files": 1, "total_output_size": 8370051, "num_input_records": 5127, "num_output_records": 4675, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011685777, "job": 22, "event": "table_file_deletion", "file_number": 41}
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.685 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fd0a3049-fbe3-4901-8e55-ca9d8efd5513]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4aadb38-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:b6:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487518, 'reachable_time': 43647, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 233833, 'error': None, 'target': 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011687863, "job": 22, "event": "table_file_deletion", "file_number": 39}
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.636975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.688040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.688046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.688048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.688049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:10:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.688053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.721 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa2e05c-0ebe-432a-9d8b-6af1b0cad257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.780 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2920d7-2db4-42e7-89c4-b579e223d3dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.781 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4aadb38-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.782 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.782 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4aadb38-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:11 np0005466030 NetworkManager[44960]: <info>  [1759407011.7847] manager: (tapb4aadb38-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Oct  2 08:10:11 np0005466030 kernel: tapb4aadb38-80: entered promiscuous mode
Oct  2 08:10:11 np0005466030 nova_compute[230518]: 2025-10-02 12:10:11.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:11 np0005466030 nova_compute[230518]: 2025-10-02 12:10:11.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.795 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb4aadb38-80, col_values=(('external_ids', {'iface-id': 'de74dbb2-fac5-494f-b65c-51300143a2da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:11Z|00036|binding|INFO|Releasing lport de74dbb2-fac5-494f-b65c-51300143a2da from this chassis (sb_readonly=0)
Oct  2 08:10:11 np0005466030 nova_compute[230518]: 2025-10-02 12:10:11.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:11 np0005466030 nova_compute[230518]: 2025-10-02 12:10:11.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.814 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b4aadb38-89a4-463f-b7b5-8bb4dcce7d32.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b4aadb38-89a4-463f-b7b5-8bb4dcce7d32.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.815 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5e6382c0-8ae9-418e-9549-6c06e81180f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.816 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/b4aadb38-89a4-463f-b7b5-8bb4dcce7d32.pid.haproxy
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID b4aadb38-89a4-463f-b7b5-8bb4dcce7d32
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:10:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.819 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'env', 'PROCESS_TAG=haproxy-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b4aadb38-89a4-463f-b7b5-8bb4dcce7d32.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:10:12 np0005466030 podman[233974]: 2025-10-02 12:10:12.24146317 +0000 UTC m=+0.064707049 container create 27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 08:10:12 np0005466030 systemd[1]: Started libpod-conmon-27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8.scope.
Oct  2 08:10:12 np0005466030 podman[233974]: 2025-10-02 12:10:12.215878291 +0000 UTC m=+0.039122200 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:10:12 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:10:12 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887b11986ce41198ec71677adcd74a5ad1696c1e295797b15dba8f945811de2a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:10:12 np0005466030 podman[233974]: 2025-10-02 12:10:12.32293178 +0000 UTC m=+0.146175679 container init 27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:10:12 np0005466030 podman[233974]: 2025-10-02 12:10:12.330032435 +0000 UTC m=+0.153276314 container start 27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:10:12 np0005466030 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[233994]: [NOTICE]   (234001) : New worker (234003) forked
Oct  2 08:10:12 np0005466030 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[233994]: [NOTICE]   (234001) : Loading success.
Oct  2 08:10:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:10:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:12.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.420 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 567aae3a-5019-47d2-84ba-8de1184cf4f0 in datapath 38c94475-c52a-421c-9bc8-95fdc649b043 unbound from our chassis#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.423 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38c94475-c52a-421c-9bc8-95fdc649b043#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.447 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[31a60102-5bb8-41d8-b66d-4656cf71b8f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.448 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap38c94475-c1 in ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.450 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap38c94475-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.450 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c6da9a-efeb-401b-8ca7-d00fc7d5abd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.452 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[35dc54f7-181b-4051-b662-30d8255d985e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.474 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3a8791-8727-43bc-ab97-9da81924ac48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.487 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f052c6b3-791d-49cb-9275-41204cc8d26b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.518 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2b6782-b9aa-410d-933a-e7b75d4c72e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.524 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ce1a017c-ead3-4169-b704-05864b291bbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:12 np0005466030 NetworkManager[44960]: <info>  [1759407012.5258] manager: (tap38c94475-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Oct  2 08:10:12 np0005466030 systemd-udevd[233828]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.559 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[eb36d33b-5963-4080-aa20-1e01a30e4e7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.563 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d74466f3-468c-4b82-83fc-fc820d6616b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:12 np0005466030 NetworkManager[44960]: <info>  [1759407012.5865] device (tap38c94475-c0): carrier: link connected
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.594 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[55eb0073-fca5-44ba-95c2-d4f817c82f1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:12.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.612 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a9dcab5f-9495-4ca2-ae71-b8e3b28fc0c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38c94475-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:d2:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487614, 'reachable_time': 23804, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 234037, 'error': None, 'target': 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.633 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd6cb2b-d2d2-47a5-96b2-45e835c97b86]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe76:d299'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 487614, 'tstamp': 487614}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 234038, 'error': None, 'target': 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.653 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb05cff-3d82-426e-ab49-28f5f8bbc45c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38c94475-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:d2:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487614, 'reachable_time': 23804, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 234039, 'error': None, 'target': 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.687 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[04100819-96dc-4496-b930-57036079232e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.751 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[872b2cc0-a88c-4d13-bed2-cc39cfb05720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.753 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38c94475-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.753 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.754 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38c94475-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:12 np0005466030 nova_compute[230518]: 2025-10-02 12:10:12.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:12 np0005466030 NetworkManager[44960]: <info>  [1759407012.7570] manager: (tap38c94475-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct  2 08:10:12 np0005466030 kernel: tap38c94475-c0: entered promiscuous mode
Oct  2 08:10:12 np0005466030 nova_compute[230518]: 2025-10-02 12:10:12.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.761 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38c94475-c0, col_values=(('external_ids', {'iface-id': 'cb8aa481-1d5c-4f65-bc0c-1f1aa2cac89a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:12 np0005466030 nova_compute[230518]: 2025-10-02 12:10:12.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:12 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:12Z|00037|binding|INFO|Releasing lport cb8aa481-1d5c-4f65-bc0c-1f1aa2cac89a from this chassis (sb_readonly=0)
Oct  2 08:10:12 np0005466030 nova_compute[230518]: 2025-10-02 12:10:12.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.778 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/38c94475-c52a-421c-9bc8-95fdc649b043.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/38c94475-c52a-421c-9bc8-95fdc649b043.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:10:12 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:10:12 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:10:12 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:10:12 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.780 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a4feac2e-412f-4710-85a4-ada2b136e3f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.781 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-38c94475-c52a-421c-9bc8-95fdc649b043
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/38c94475-c52a-421c-9bc8-95fdc649b043.pid.haproxy
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 38c94475-c52a-421c-9bc8-95fdc649b043
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:10:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.783 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'env', 'PROCESS_TAG=haproxy-38c94475-c52a-421c-9bc8-95fdc649b043', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/38c94475-c52a-421c-9bc8-95fdc649b043.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:10:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:13 np0005466030 podman[234071]: 2025-10-02 12:10:13.245625542 +0000 UTC m=+0.112021017 container create b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 08:10:13 np0005466030 podman[234071]: 2025-10-02 12:10:13.158757892 +0000 UTC m=+0.025153387 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:10:13 np0005466030 systemd[1]: Started libpod-conmon-b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89.scope.
Oct  2 08:10:13 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:10:13 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e673779b1bd623d71ccc62dcbdfdb0383cf4987b9cb72f96d48944541a34d5b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:10:13 np0005466030 podman[234071]: 2025-10-02 12:10:13.403092207 +0000 UTC m=+0.269487682 container init b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 08:10:13 np0005466030 podman[234071]: 2025-10-02 12:10:13.409531831 +0000 UTC m=+0.275927306 container start b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 08:10:13 np0005466030 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[234087]: [NOTICE]   (234091) : New worker (234093) forked
Oct  2 08:10:13 np0005466030 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[234087]: [NOTICE]   (234091) : Loading success.
Oct  2 08:10:13 np0005466030 NetworkManager[44960]: <info>  [1759407013.4498] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/33)
Oct  2 08:10:13 np0005466030 NetworkManager[44960]: <info>  [1759407013.4506] device (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 08:10:13 np0005466030 nova_compute[230518]: 2025-10-02 12:10:13.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:13 np0005466030 NetworkManager[44960]: <info>  [1759407013.4521] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/34)
Oct  2 08:10:13 np0005466030 NetworkManager[44960]: <info>  [1759407013.4527] device (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  2 08:10:13 np0005466030 NetworkManager[44960]: <info>  [1759407013.4540] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Oct  2 08:10:13 np0005466030 NetworkManager[44960]: <info>  [1759407013.4549] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct  2 08:10:13 np0005466030 NetworkManager[44960]: <info>  [1759407013.4556] device (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  2 08:10:13 np0005466030 NetworkManager[44960]: <info>  [1759407013.4561] device (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  2 08:10:13 np0005466030 nova_compute[230518]: 2025-10-02 12:10:13.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:13Z|00038|binding|INFO|Releasing lport cb8aa481-1d5c-4f65-bc0c-1f1aa2cac89a from this chassis (sb_readonly=0)
Oct  2 08:10:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:13Z|00039|binding|INFO|Releasing lport de74dbb2-fac5-494f-b65c-51300143a2da from this chassis (sb_readonly=0)
Oct  2 08:10:13 np0005466030 nova_compute[230518]: 2025-10-02 12:10:13.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:13 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:10:13 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:10:13 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:10:14 np0005466030 nova_compute[230518]: 2025-10-02 12:10:14.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:14 np0005466030 nova_compute[230518]: 2025-10-02 12:10:14.182 2 DEBUG nova.compute.manager [req-ac668f71-ae88-4342-b981-312038978b54 req-3001bb5e-65fd-4ae8-bfa3-0df0e011e6ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received event network-changed-567aae3a-5019-47d2-84ba-8de1184cf4f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:14 np0005466030 nova_compute[230518]: 2025-10-02 12:10:14.182 2 DEBUG nova.compute.manager [req-ac668f71-ae88-4342-b981-312038978b54 req-3001bb5e-65fd-4ae8-bfa3-0df0e011e6ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Refreshing instance network info cache due to event network-changed-567aae3a-5019-47d2-84ba-8de1184cf4f0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:10:14 np0005466030 nova_compute[230518]: 2025-10-02 12:10:14.182 2 DEBUG oslo_concurrency.lockutils [req-ac668f71-ae88-4342-b981-312038978b54 req-3001bb5e-65fd-4ae8-bfa3-0df0e011e6ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c6cef7fd-49cb-4781-97ad-027e835dcc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:10:14 np0005466030 nova_compute[230518]: 2025-10-02 12:10:14.183 2 DEBUG oslo_concurrency.lockutils [req-ac668f71-ae88-4342-b981-312038978b54 req-3001bb5e-65fd-4ae8-bfa3-0df0e011e6ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c6cef7fd-49cb-4781-97ad-027e835dcc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:10:14 np0005466030 nova_compute[230518]: 2025-10-02 12:10:14.183 2 DEBUG nova.network.neutron [req-ac668f71-ae88-4342-b981-312038978b54 req-3001bb5e-65fd-4ae8-bfa3-0df0e011e6ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Refreshing network info cache for port 567aae3a-5019-47d2-84ba-8de1184cf4f0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:10:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:14.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:14.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:15 np0005466030 nova_compute[230518]: 2025-10-02 12:10:15.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:16 np0005466030 nova_compute[230518]: 2025-10-02 12:10:16.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:16.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:10:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:16.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:10:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Oct  2 08:10:17 np0005466030 nova_compute[230518]: 2025-10-02 12:10:17.205 2 DEBUG nova.network.neutron [req-ac668f71-ae88-4342-b981-312038978b54 req-3001bb5e-65fd-4ae8-bfa3-0df0e011e6ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Updated VIF entry in instance network info cache for port 567aae3a-5019-47d2-84ba-8de1184cf4f0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:10:17 np0005466030 nova_compute[230518]: 2025-10-02 12:10:17.206 2 DEBUG nova.network.neutron [req-ac668f71-ae88-4342-b981-312038978b54 req-3001bb5e-65fd-4ae8-bfa3-0df0e011e6ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Updating instance_info_cache with network_info: [{"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:10:17 np0005466030 nova_compute[230518]: 2025-10-02 12:10:17.226 2 DEBUG oslo_concurrency.lockutils [req-ac668f71-ae88-4342-b981-312038978b54 req-3001bb5e-65fd-4ae8-bfa3-0df0e011e6ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c6cef7fd-49cb-4781-97ad-027e835dcc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:10:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:18.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Oct  2 08:10:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:10:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:18.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:10:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:19.423 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:10:19 np0005466030 nova_compute[230518]: 2025-10-02 12:10:19.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:19.427 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:10:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:20.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:20 np0005466030 nova_compute[230518]: 2025-10-02 12:10:20.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:20.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:20 np0005466030 podman[234103]: 2025-10-02 12:10:20.816436442 +0000 UTC m=+0.068698241 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:10:20 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  2 08:10:21 np0005466030 nova_compute[230518]: 2025-10-02 12:10:21.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:10:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:22.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:10:22 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:10:22 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:10:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:22.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:23 np0005466030 podman[234173]: 2025-10-02 12:10:23.805089057 +0000 UTC m=+0.058988866 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3)
Oct  2 08:10:23 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:23Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:46:60:3e 10.1.0.87
Oct  2 08:10:24 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:24Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:46:60:3e 10.1.0.87
Oct  2 08:10:24 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:24Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:37:e0:d8 10.100.0.9
Oct  2 08:10:24 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:24Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:37:e0:d8 10.100.0.9
Oct  2 08:10:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:24.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:24.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:25 np0005466030 nova_compute[230518]: 2025-10-02 12:10:25.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:25.909 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:25.910 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:25.911 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:26 np0005466030 nova_compute[230518]: 2025-10-02 12:10:26.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:26 np0005466030 nova_compute[230518]: 2025-10-02 12:10:26.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:10:26 np0005466030 nova_compute[230518]: 2025-10-02 12:10:26.080 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:10:26 np0005466030 nova_compute[230518]: 2025-10-02 12:10:26.080 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:26 np0005466030 nova_compute[230518]: 2025-10-02 12:10:26.080 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:10:26 np0005466030 nova_compute[230518]: 2025-10-02 12:10:26.091 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:26 np0005466030 nova_compute[230518]: 2025-10-02 12:10:26.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:26.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Oct  2 08:10:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:10:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:26.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:10:27 np0005466030 nova_compute[230518]: 2025-10-02 12:10:27.116 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:27 np0005466030 nova_compute[230518]: 2025-10-02 12:10:27.117 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:10:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:27.430 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:28.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:28.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:29 np0005466030 nova_compute[230518]: 2025-10-02 12:10:29.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:29 np0005466030 nova_compute[230518]: 2025-10-02 12:10:29.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:10:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:30.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:10:30 np0005466030 nova_compute[230518]: 2025-10-02 12:10:30.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:10:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:30.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:10:31 np0005466030 nova_compute[230518]: 2025-10-02 12:10:31.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:31 np0005466030 nova_compute[230518]: 2025-10-02 12:10:31.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:31 np0005466030 nova_compute[230518]: 2025-10-02 12:10:31.102 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:31 np0005466030 nova_compute[230518]: 2025-10-02 12:10:31.103 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:31 np0005466030 nova_compute[230518]: 2025-10-02 12:10:31.103 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:31 np0005466030 nova_compute[230518]: 2025-10-02 12:10:31.103 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:10:31 np0005466030 nova_compute[230518]: 2025-10-02 12:10:31.103 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:31 np0005466030 nova_compute[230518]: 2025-10-02 12:10:31.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:10:31 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3606299484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:10:31 np0005466030 nova_compute[230518]: 2025-10-02 12:10:31.589 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.180 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.181 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.186 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.186 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.211 2 DEBUG oslo_concurrency.lockutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.212 2 DEBUG oslo_concurrency.lockutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.212 2 DEBUG oslo_concurrency.lockutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.212 2 DEBUG oslo_concurrency.lockutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.213 2 DEBUG oslo_concurrency.lockutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.214 2 INFO nova.compute.manager [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Terminating instance#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.215 2 DEBUG nova.compute.manager [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:10:32 np0005466030 kernel: tap567aae3a-50 (unregistering): left promiscuous mode
Oct  2 08:10:32 np0005466030 NetworkManager[44960]: <info>  [1759407032.2708] device (tap567aae3a-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:10:32 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:32Z|00040|binding|INFO|Releasing lport 567aae3a-5019-47d2-84ba-8de1184cf4f0 from this chassis (sb_readonly=0)
Oct  2 08:10:32 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:32Z|00041|binding|INFO|Setting lport 567aae3a-5019-47d2-84ba-8de1184cf4f0 down in Southbound
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:32 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:32Z|00042|binding|INFO|Removing iface tap567aae3a-50 ovn-installed in OVS
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.294 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:e0:d8 10.100.0.9'], port_security=['fa:16:3e:37:e0:d8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c6cef7fd-49cb-4781-97ad-027e835dcc5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38c94475-c52a-421c-9bc8-95fdc649b043', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2c66662015f74444b15ea4b3d8644714', 'neutron:revision_number': '4', 'neutron:security_group_ids': '92a0a32f-072c-4975-aa02-ea951ff6e560', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ca6fa6e-a437-4756-b504-a14a9bfa02d8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=567aae3a-5019-47d2-84ba-8de1184cf4f0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:10:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.295 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 567aae3a-5019-47d2-84ba-8de1184cf4f0 in datapath 38c94475-c52a-421c-9bc8-95fdc649b043 unbound from our chassis#033[00m
Oct  2 08:10:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.297 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 38c94475-c52a-421c-9bc8-95fdc649b043, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:10:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.298 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d95f3c-e5d1-43b8-8025-57c4d5201629]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.298 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 namespace which is not needed anymore#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:32 np0005466030 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct  2 08:10:32 np0005466030 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Consumed 14.572s CPU time.
Oct  2 08:10:32 np0005466030 systemd-machined[188247]: Machine qemu-2-instance-00000005 terminated.
Oct  2 08:10:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:10:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:32.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.415 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.417 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4640MB free_disk=20.810245513916016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.417 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.417 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:32 np0005466030 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[234087]: [NOTICE]   (234091) : haproxy version is 2.8.14-c23fe91
Oct  2 08:10:32 np0005466030 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[234087]: [NOTICE]   (234091) : path to executable is /usr/sbin/haproxy
Oct  2 08:10:32 np0005466030 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[234087]: [ALERT]    (234091) : Current worker (234093) exited with code 143 (Terminated)
Oct  2 08:10:32 np0005466030 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[234087]: [WARNING]  (234091) : All workers exited. Exiting... (0)
Oct  2 08:10:32 np0005466030 systemd[1]: libpod-b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89.scope: Deactivated successfully.
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.450 2 INFO nova.virt.libvirt.driver [-] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Instance destroyed successfully.#033[00m
Oct  2 08:10:32 np0005466030 podman[234241]: 2025-10-02 12:10:32.451105124 +0000 UTC m=+0.050670175 container died b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.450 2 DEBUG nova.objects.instance [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lazy-loading 'resources' on Instance uuid c6cef7fd-49cb-4781-97ad-027e835dcc5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:10:32 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89-userdata-shm.mount: Deactivated successfully.
Oct  2 08:10:32 np0005466030 systemd[1]: var-lib-containers-storage-overlay-1e673779b1bd623d71ccc62dcbdfdb0383cf4987b9cb72f96d48944541a34d5b-merged.mount: Deactivated successfully.
Oct  2 08:10:32 np0005466030 podman[234241]: 2025-10-02 12:10:32.51232368 +0000 UTC m=+0.111888731 container cleanup b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:10:32 np0005466030 systemd[1]: libpod-conmon-b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89.scope: Deactivated successfully.
Oct  2 08:10:32 np0005466030 podman[234281]: 2025-10-02 12:10:32.581331231 +0000 UTC m=+0.046667138 container remove b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:10:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.587 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[742639d3-1e00-4a25-a705-e5abc4821fa4]: (4, ('Thu Oct  2 12:10:32 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 (b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89)\nb1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89\nThu Oct  2 12:10:32 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 (b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89)\nb1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.588 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b0602919-7938-42d8-9fd6-1758588cd8e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.589 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38c94475-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:32 np0005466030 kernel: tap38c94475-c0: left promiscuous mode
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.613 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[68ae35a6-5992-4706-981b-272e0a96e8f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:32.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.634 2 DEBUG nova.virt.libvirt.vif [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:09:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-870505644',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-870505644',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(15),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-870505644',id=5,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=15,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZIzbMfwVBTTToNPNrnTuckoO8kg28OkEFKvLyHQzGuKrzHQ5Xu2/PJVR0z9htMcy/llPoN2mM4eTO6OIHrSZOwjPe/taZdTaEhmzjh34Ak2Vyd+nZrFG8VSiYQyffl8g==',key_name='tempest-keypair-1186587753',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:10:08Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2c66662015f74444b15ea4b3d8644714',ramdisk_id='',reservation_id='r-e4cgmrkv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-957372394',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-957372394-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:10:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='531ddb9812364f7b9743bd02a8ed797f',uuid=c6cef7fd-49cb-4781-97ad-027e835dcc5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.635 2 DEBUG nova.network.os_vif_util [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converting VIF {"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.636 2 DEBUG nova.network.os_vif_util [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:37:e0:d8,bridge_name='br-int',has_traffic_filtering=True,id=567aae3a-5019-47d2-84ba-8de1184cf4f0,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap567aae3a-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.636 2 DEBUG os_vif [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:e0:d8,bridge_name='br-int',has_traffic_filtering=True,id=567aae3a-5019-47d2-84ba-8de1184cf4f0,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap567aae3a-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.639 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap567aae3a-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.639 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0d7ffb7a-f4b8-454b-8c63-cba6c2b55020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.640 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2c85bb8b-cbe2-4903-a671-1b2225d41b05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:32 np0005466030 nova_compute[230518]: 2025-10-02 12:10:32.645 2 INFO os_vif [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:e0:d8,bridge_name='br-int',has_traffic_filtering=True,id=567aae3a-5019-47d2-84ba-8de1184cf4f0,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap567aae3a-50')#033[00m
Oct  2 08:10:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.656 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3956a173-a554-4de9-b9e9-648d23ec495a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487607, 'reachable_time': 32466, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 234299, 'error': None, 'target': 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:32 np0005466030 systemd[1]: run-netns-ovnmeta\x2d38c94475\x2dc52a\x2d421c\x2d9bc8\x2d95fdc649b043.mount: Deactivated successfully.
Oct  2 08:10:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.671 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:10:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.671 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[35e1e3e9-93ab-47ec-91fd-6dffa0ab3269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.114 2 DEBUG nova.compute.manager [req-ab1e64b6-eb26-4e40-a948-3d3f91b6da17 req-664e81fb-f7ae-4044-b825-89206e625c63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received event network-vif-unplugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.115 2 DEBUG oslo_concurrency.lockutils [req-ab1e64b6-eb26-4e40-a948-3d3f91b6da17 req-664e81fb-f7ae-4044-b825-89206e625c63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.115 2 DEBUG oslo_concurrency.lockutils [req-ab1e64b6-eb26-4e40-a948-3d3f91b6da17 req-664e81fb-f7ae-4044-b825-89206e625c63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.115 2 DEBUG oslo_concurrency.lockutils [req-ab1e64b6-eb26-4e40-a948-3d3f91b6da17 req-664e81fb-f7ae-4044-b825-89206e625c63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.115 2 DEBUG nova.compute.manager [req-ab1e64b6-eb26-4e40-a948-3d3f91b6da17 req-664e81fb-f7ae-4044-b825-89206e625c63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] No waiting events found dispatching network-vif-unplugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.115 2 DEBUG nova.compute.manager [req-ab1e64b6-eb26-4e40-a948-3d3f91b6da17 req-664e81fb-f7ae-4044-b825-89206e625c63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received event network-vif-unplugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.125 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3affd040-669b-4cde-a697-00b991236a6c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.125 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance c6cef7fd-49cb-4781-97ad-027e835dcc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.125 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.125 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.228 2 INFO nova.virt.libvirt.driver [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Deleting instance files /var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c_del
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.228 2 INFO nova.virt.libvirt.driver [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Deletion of /var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c_del complete
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.315 2 DEBUG nova.virt.libvirt.host [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.316 2 INFO nova.virt.libvirt.host [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] UEFI support detected
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.317 2 INFO nova.compute.manager [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Took 1.10 seconds to destroy the instance on the hypervisor.
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.318 2 DEBUG oslo.service.loopingcall [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.318 2 DEBUG nova.compute.manager [-] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.318 2 DEBUG nova.network.neutron [-] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.610 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.712 2 DEBUG oslo_concurrency.lockutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "3affd040-669b-4cde-a697-00b991236a6c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.713 2 DEBUG oslo_concurrency.lockutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.713 2 DEBUG oslo_concurrency.lockutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "3affd040-669b-4cde-a697-00b991236a6c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.713 2 DEBUG oslo_concurrency.lockutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.714 2 DEBUG oslo_concurrency.lockutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.715 2 INFO nova.compute.manager [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Terminating instance
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.716 2 DEBUG nova.compute.manager [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 08:10:33 np0005466030 kernel: tap7bdea026-36 (unregistering): left promiscuous mode
Oct  2 08:10:33 np0005466030 NetworkManager[44960]: <info>  [1759407033.8278] device (tap7bdea026-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:10:33 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:33Z|00043|binding|INFO|Releasing lport 7bdea026-3636-4861-a8a9-fcb0a82509ad from this chassis (sb_readonly=0)
Oct  2 08:10:33 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:33Z|00044|binding|INFO|Setting lport 7bdea026-3636-4861-a8a9-fcb0a82509ad down in Southbound
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:33 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:33Z|00045|binding|INFO|Removing iface tap7bdea026-36 ovn-installed in OVS
Oct  2 08:10:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:33.850 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:60:3e 10.1.0.87 fdfe:381f:8400:1::7b'], port_security=['fa:16:3e:46:60:3e 10.1.0.87 fdfe:381f:8400:1::7b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.87/26 fdfe:381f:8400:1::7b/64', 'neutron:device_id': '3affd040-669b-4cde-a697-00b991236a6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fa15236c63df4c43bf19989029fcda0f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8e3feb76-9212-430e-bcfa-0b85f7aedc4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1382d266-669c-46c5-981d-23fbe67f9508, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=7bdea026-3636-4861-a8a9-fcb0a82509ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:33.855 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 7bdea026-3636-4861-a8a9-fcb0a82509ad in datapath b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 unbound from our chassis
Oct  2 08:10:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:33.866 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  2 08:10:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:33.867 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[669a53f3-0eec-456f-9cae-0eba1d88c11b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:10:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:33.868 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 namespace which is not needed anymore
Oct  2 08:10:33 np0005466030 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct  2 08:10:33 np0005466030 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000003.scope: Consumed 15.391s CPU time.
Oct  2 08:10:33 np0005466030 systemd-machined[188247]: Machine qemu-1-instance-00000003 terminated.
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.951 2 INFO nova.virt.libvirt.driver [-] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Instance destroyed successfully.
Oct  2 08:10:33 np0005466030 nova_compute[230518]: 2025-10-02 12:10:33.952 2 DEBUG nova.objects.instance [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lazy-loading 'resources' on Instance uuid 3affd040-669b-4cde-a697-00b991236a6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.052 2 DEBUG nova.virt.libvirt.vif [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1858146006-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1858146006-2',id=3,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-02T12:10:08Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fa15236c63df4c43bf19989029fcda0f',ramdisk_id='',reservation_id='r-b33bb0il',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_proj
ect_name='tempest-AutoAllocateNetworkTest-1017519520',owner_user_name='tempest-AutoAllocateNetworkTest-1017519520-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:10:08Z,user_data=None,user_id='b81237ef015d48dfa022b6761d706e36',uuid=3affd040-669b-4cde-a697-00b991236a6c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.052 2 DEBUG nova.network.os_vif_util [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converting VIF {"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.053 2 DEBUG nova.network.os_vif_util [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:60:3e,bridge_name='br-int',has_traffic_filtering=True,id=7bdea026-3636-4861-a8a9-fcb0a82509ad,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bdea026-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.053 2 DEBUG os_vif [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:60:3e,bridge_name='br-int',has_traffic_filtering=True,id=7bdea026-3636-4861-a8a9-fcb0a82509ad,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bdea026-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bdea026-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  2 08:10:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:10:34 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2268975107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.061 2 INFO os_vif [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:60:3e,bridge_name='br-int',has_traffic_filtering=True,id=7bdea026-3636-4861-a8a9-fcb0a82509ad,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bdea026-36')
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.077 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.082 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.130 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:10:34 np0005466030 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[233994]: [NOTICE]   (234001) : haproxy version is 2.8.14-c23fe91
Oct  2 08:10:34 np0005466030 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[233994]: [NOTICE]   (234001) : path to executable is /usr/sbin/haproxy
Oct  2 08:10:34 np0005466030 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[233994]: [WARNING]  (234001) : Exiting Master process...
Oct  2 08:10:34 np0005466030 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[233994]: [WARNING]  (234001) : Exiting Master process...
Oct  2 08:10:34 np0005466030 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[233994]: [ALERT]    (234001) : Current worker (234003) exited with code 143 (Terminated)
Oct  2 08:10:34 np0005466030 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[233994]: [WARNING]  (234001) : All workers exited. Exiting... (0)
Oct  2 08:10:34 np0005466030 systemd[1]: libpod-27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8.scope: Deactivated successfully.
Oct  2 08:10:34 np0005466030 podman[234371]: 2025-10-02 12:10:34.195671864 +0000 UTC m=+0.244852513 container died 27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.245 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.245 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.293 2 DEBUG nova.compute.manager [req-b0cd5f73-4992-4f49-a3fd-07dee196fc4b req-1fa4e345-8962-43c6-bf27-bf2c25df3614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Received event network-vif-unplugged-7bdea026-3636-4861-a8a9-fcb0a82509ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.294 2 DEBUG oslo_concurrency.lockutils [req-b0cd5f73-4992-4f49-a3fd-07dee196fc4b req-1fa4e345-8962-43c6-bf27-bf2c25df3614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3affd040-669b-4cde-a697-00b991236a6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.295 2 DEBUG oslo_concurrency.lockutils [req-b0cd5f73-4992-4f49-a3fd-07dee196fc4b req-1fa4e345-8962-43c6-bf27-bf2c25df3614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.295 2 DEBUG oslo_concurrency.lockutils [req-b0cd5f73-4992-4f49-a3fd-07dee196fc4b req-1fa4e345-8962-43c6-bf27-bf2c25df3614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.295 2 DEBUG nova.compute.manager [req-b0cd5f73-4992-4f49-a3fd-07dee196fc4b req-1fa4e345-8962-43c6-bf27-bf2c25df3614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] No waiting events found dispatching network-vif-unplugged-7bdea026-3636-4861-a8a9-fcb0a82509ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.295 2 DEBUG nova.compute.manager [req-b0cd5f73-4992-4f49-a3fd-07dee196fc4b req-1fa4e345-8962-43c6-bf27-bf2c25df3614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Received event network-vif-unplugged-7bdea026-3636-4861-a8a9-fcb0a82509ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  2 08:10:34 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8-userdata-shm.mount: Deactivated successfully.
Oct  2 08:10:34 np0005466030 systemd[1]: var-lib-containers-storage-overlay-887b11986ce41198ec71677adcd74a5ad1696c1e295797b15dba8f945811de2a-merged.mount: Deactivated successfully.
Oct  2 08:10:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:34.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:34.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:34 np0005466030 podman[234371]: 2025-10-02 12:10:34.685074409 +0000 UTC m=+0.734255058 container cleanup 27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 08:10:34 np0005466030 systemd[1]: libpod-conmon-27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8.scope: Deactivated successfully.
Oct  2 08:10:34 np0005466030 podman[234423]: 2025-10-02 12:10:34.794030706 +0000 UTC m=+0.089833977 container remove 27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:10:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:34.801 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c2746083-a9df-47dc-aabd-34f56df1a2ea]: (4, ('Thu Oct  2 12:10:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 (27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8)\n27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8\nThu Oct  2 12:10:34 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 (27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8)\n27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:10:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:34.803 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5e95e319-347f-4f2c-b194-dd850feef900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:10:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:34.804 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4aadb38-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:34 np0005466030 kernel: tapb4aadb38-80: left promiscuous mode
Oct  2 08:10:34 np0005466030 nova_compute[230518]: 2025-10-02 12:10:34.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:10:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:34.824 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0db048-2212-4b92-bb5d-6b5111fb1391]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:10:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:34.853 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[514383d8-2c0d-4afd-a6b3-5d7100932992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:10:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:34.855 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b69311-d288-4cec-9bcb-7b84e36db0da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:10:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:34.872 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[95bd7105-78eb-4235-ab79-60a3472d266f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487510, 'reachable_time': 31960, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 234438, 'error': None, 'target': 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:34.874 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:10:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:34.874 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[b15dc31c-457f-456b-8082-62d0ca373ae1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:34 np0005466030 systemd[1]: run-netns-ovnmeta\x2db4aadb38\x2d89a4\x2d463f\x2db7b5\x2d8bb4dcce7d32.mount: Deactivated successfully.
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.240 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.241 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.241 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.241 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.246 2 INFO nova.virt.libvirt.driver [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Deleting instance files /var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c_del#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.246 2 INFO nova.virt.libvirt.driver [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Deletion of /var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c_del complete#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.293 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.293 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.294 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.294 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.295 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.366 2 INFO nova.compute.manager [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Took 1.65 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.367 2 DEBUG oslo.service.loopingcall [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.367 2 DEBUG nova.compute.manager [-] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.367 2 DEBUG nova.network.neutron [-] [instance: 3affd040-669b-4cde-a697-00b991236a6c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.493 2 DEBUG nova.compute.manager [req-39648667-748d-44fb-8639-6cca81fd66dc req-ffa9fd66-d7f9-4a23-900c-9169fb3d2a80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received event network-vif-plugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.494 2 DEBUG oslo_concurrency.lockutils [req-39648667-748d-44fb-8639-6cca81fd66dc req-ffa9fd66-d7f9-4a23-900c-9169fb3d2a80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.494 2 DEBUG oslo_concurrency.lockutils [req-39648667-748d-44fb-8639-6cca81fd66dc req-ffa9fd66-d7f9-4a23-900c-9169fb3d2a80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.495 2 DEBUG oslo_concurrency.lockutils [req-39648667-748d-44fb-8639-6cca81fd66dc req-ffa9fd66-d7f9-4a23-900c-9169fb3d2a80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.495 2 DEBUG nova.compute.manager [req-39648667-748d-44fb-8639-6cca81fd66dc req-ffa9fd66-d7f9-4a23-900c-9169fb3d2a80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] No waiting events found dispatching network-vif-plugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.495 2 WARNING nova.compute.manager [req-39648667-748d-44fb-8639-6cca81fd66dc req-ffa9fd66-d7f9-4a23-900c-9169fb3d2a80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received unexpected event network-vif-plugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.922 2 DEBUG nova.network.neutron [-] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:10:35 np0005466030 nova_compute[230518]: 2025-10-02 12:10:35.946 2 INFO nova.compute.manager [-] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Took 2.63 seconds to deallocate network for instance.#033[00m
Oct  2 08:10:36 np0005466030 nova_compute[230518]: 2025-10-02 12:10:36.127 2 DEBUG oslo_concurrency.lockutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:36 np0005466030 nova_compute[230518]: 2025-10-02 12:10:36.128 2 DEBUG oslo_concurrency.lockutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:36 np0005466030 nova_compute[230518]: 2025-10-02 12:10:36.211 2 DEBUG oslo_concurrency.processutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:36 np0005466030 nova_compute[230518]: 2025-10-02 12:10:36.394 2 DEBUG nova.compute.manager [req-dc408017-3f58-4004-a1fa-fd10e5f4902d req-722f1468-5120-45de-a931-9c5bc1f7788f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Received event network-vif-plugged-7bdea026-3636-4861-a8a9-fcb0a82509ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:36 np0005466030 nova_compute[230518]: 2025-10-02 12:10:36.395 2 DEBUG oslo_concurrency.lockutils [req-dc408017-3f58-4004-a1fa-fd10e5f4902d req-722f1468-5120-45de-a931-9c5bc1f7788f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3affd040-669b-4cde-a697-00b991236a6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:36 np0005466030 nova_compute[230518]: 2025-10-02 12:10:36.395 2 DEBUG oslo_concurrency.lockutils [req-dc408017-3f58-4004-a1fa-fd10e5f4902d req-722f1468-5120-45de-a931-9c5bc1f7788f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:36 np0005466030 nova_compute[230518]: 2025-10-02 12:10:36.395 2 DEBUG oslo_concurrency.lockutils [req-dc408017-3f58-4004-a1fa-fd10e5f4902d req-722f1468-5120-45de-a931-9c5bc1f7788f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:36 np0005466030 nova_compute[230518]: 2025-10-02 12:10:36.395 2 DEBUG nova.compute.manager [req-dc408017-3f58-4004-a1fa-fd10e5f4902d req-722f1468-5120-45de-a931-9c5bc1f7788f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] No waiting events found dispatching network-vif-plugged-7bdea026-3636-4861-a8a9-fcb0a82509ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:10:36 np0005466030 nova_compute[230518]: 2025-10-02 12:10:36.396 2 WARNING nova.compute.manager [req-dc408017-3f58-4004-a1fa-fd10e5f4902d req-722f1468-5120-45de-a931-9c5bc1f7788f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Received unexpected event network-vif-plugged-7bdea026-3636-4861-a8a9-fcb0a82509ad for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:10:36 np0005466030 nova_compute[230518]: 2025-10-02 12:10:36.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:36.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:10:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:36.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:10:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:10:36 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2230665049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:10:36 np0005466030 nova_compute[230518]: 2025-10-02 12:10:36.705 2 DEBUG oslo_concurrency.processutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:36 np0005466030 nova_compute[230518]: 2025-10-02 12:10:36.711 2 DEBUG nova.compute.provider_tree [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:10:36 np0005466030 nova_compute[230518]: 2025-10-02 12:10:36.748 2 DEBUG nova.scheduler.client.report [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:10:36 np0005466030 nova_compute[230518]: 2025-10-02 12:10:36.900 2 DEBUG nova.network.neutron [-] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:10:36 np0005466030 nova_compute[230518]: 2025-10-02 12:10:36.902 2 DEBUG oslo_concurrency.lockutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:36 np0005466030 nova_compute[230518]: 2025-10-02 12:10:36.931 2 INFO nova.compute.manager [-] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Took 1.56 seconds to deallocate network for instance.#033[00m
Oct  2 08:10:36 np0005466030 nova_compute[230518]: 2025-10-02 12:10:36.934 2 INFO nova.scheduler.client.report [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Deleted allocations for instance c6cef7fd-49cb-4781-97ad-027e835dcc5c#033[00m
Oct  2 08:10:37 np0005466030 nova_compute[230518]: 2025-10-02 12:10:37.380 2 DEBUG oslo_concurrency.lockutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:37 np0005466030 nova_compute[230518]: 2025-10-02 12:10:37.380 2 DEBUG oslo_concurrency.lockutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:37 np0005466030 nova_compute[230518]: 2025-10-02 12:10:37.424 2 DEBUG oslo_concurrency.processutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:37 np0005466030 nova_compute[230518]: 2025-10-02 12:10:37.615 2 DEBUG oslo_concurrency.lockutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:10:37 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1891091328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:10:37 np0005466030 nova_compute[230518]: 2025-10-02 12:10:37.841 2 DEBUG oslo_concurrency.processutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:37 np0005466030 nova_compute[230518]: 2025-10-02 12:10:37.847 2 DEBUG nova.compute.provider_tree [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:10:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:38 np0005466030 nova_compute[230518]: 2025-10-02 12:10:38.402 2 DEBUG nova.compute.manager [req-95957d16-960a-4120-8313-bb921247d74f req-c1e2d8e5-5f34-4a75-91e0-754905d6bb9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Received event network-vif-deleted-7bdea026-3636-4861-a8a9-fcb0a82509ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:10:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:38.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:10:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:38.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:38 np0005466030 nova_compute[230518]: 2025-10-02 12:10:38.893 2 DEBUG nova.scheduler.client.report [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:10:38 np0005466030 nova_compute[230518]: 2025-10-02 12:10:38.922 2 DEBUG oslo_concurrency.lockutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:38 np0005466030 nova_compute[230518]: 2025-10-02 12:10:38.927 2 DEBUG nova.compute.manager [req-25178c5a-f41f-4eda-b873-664d9a62c9cf req-1945fc10-f65e-4ad0-9106-a678caa455b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received event network-vif-deleted-567aae3a-5019-47d2-84ba-8de1184cf4f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:38 np0005466030 nova_compute[230518]: 2025-10-02 12:10:38.970 2 INFO nova.scheduler.client.report [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Deleted allocations for instance 3affd040-669b-4cde-a697-00b991236a6c#033[00m
Oct  2 08:10:39 np0005466030 nova_compute[230518]: 2025-10-02 12:10:39.056 2 DEBUG oslo_concurrency.lockutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:39 np0005466030 nova_compute[230518]: 2025-10-02 12:10:39.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:39 np0005466030 nova_compute[230518]: 2025-10-02 12:10:39.730 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "e582fd0b-cd3a-4903-9ed3-024359954c81" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:39 np0005466030 nova_compute[230518]: 2025-10-02 12:10:39.731 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:39 np0005466030 nova_compute[230518]: 2025-10-02 12:10:39.753 2 DEBUG nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:10:39 np0005466030 nova_compute[230518]: 2025-10-02 12:10:39.845 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:39 np0005466030 nova_compute[230518]: 2025-10-02 12:10:39.846 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:39 np0005466030 nova_compute[230518]: 2025-10-02 12:10:39.853 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:10:39 np0005466030 nova_compute[230518]: 2025-10-02 12:10:39.854 2 INFO nova.compute.claims [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:10:39 np0005466030 nova_compute[230518]: 2025-10-02 12:10:39.940 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:10:40 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/764252114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:10:40 np0005466030 nova_compute[230518]: 2025-10-02 12:10:40.410 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:40 np0005466030 nova_compute[230518]: 2025-10-02 12:10:40.418 2 DEBUG nova.compute.provider_tree [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:10:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:40.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:40 np0005466030 nova_compute[230518]: 2025-10-02 12:10:40.439 2 DEBUG nova.scheduler.client.report [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:10:40 np0005466030 nova_compute[230518]: 2025-10-02 12:10:40.467 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:40 np0005466030 nova_compute[230518]: 2025-10-02 12:10:40.468 2 DEBUG nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:10:40 np0005466030 nova_compute[230518]: 2025-10-02 12:10:40.541 2 DEBUG nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:10:40 np0005466030 nova_compute[230518]: 2025-10-02 12:10:40.542 2 DEBUG nova.network.neutron [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:10:40 np0005466030 nova_compute[230518]: 2025-10-02 12:10:40.581 2 INFO nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:10:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:10:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:40.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:10:40 np0005466030 nova_compute[230518]: 2025-10-02 12:10:40.679 2 DEBUG nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:10:40 np0005466030 podman[234507]: 2025-10-02 12:10:40.806671186 +0000 UTC m=+0.051681677 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:10:40 np0005466030 podman[234506]: 2025-10-02 12:10:40.839469707 +0000 UTC m=+0.086618015 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:10:40 np0005466030 nova_compute[230518]: 2025-10-02 12:10:40.878 2 DEBUG nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:10:40 np0005466030 nova_compute[230518]: 2025-10-02 12:10:40.879 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:10:40 np0005466030 nova_compute[230518]: 2025-10-02 12:10:40.880 2 INFO nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Creating image(s)#033[00m
Oct  2 08:10:40 np0005466030 nova_compute[230518]: 2025-10-02 12:10:40.906 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:40 np0005466030 nova_compute[230518]: 2025-10-02 12:10:40.931 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:40 np0005466030 nova_compute[230518]: 2025-10-02 12:10:40.961 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:40 np0005466030 nova_compute[230518]: 2025-10-02 12:10:40.967 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:40 np0005466030 nova_compute[230518]: 2025-10-02 12:10:40.995 2 DEBUG nova.policy [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '531ddb9812364f7b9743bd02a8ed797f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2c66662015f74444b15ea4b3d8644714', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:10:41 np0005466030 nova_compute[230518]: 2025-10-02 12:10:41.037 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:41 np0005466030 nova_compute[230518]: 2025-10-02 12:10:41.038 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:41 np0005466030 nova_compute[230518]: 2025-10-02 12:10:41.039 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:41 np0005466030 nova_compute[230518]: 2025-10-02 12:10:41.039 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:41 np0005466030 nova_compute[230518]: 2025-10-02 12:10:41.069 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:41 np0005466030 nova_compute[230518]: 2025-10-02 12:10:41.074 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e582fd0b-cd3a-4903-9ed3-024359954c81_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:41 np0005466030 nova_compute[230518]: 2025-10-02 12:10:41.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:41 np0005466030 nova_compute[230518]: 2025-10-02 12:10:41.796 2 DEBUG nova.network.neutron [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Successfully created port: 2887bff5-92aa-4c83-9902-292a59e0add5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:10:42 np0005466030 nova_compute[230518]: 2025-10-02 12:10:42.281 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e582fd0b-cd3a-4903-9ed3-024359954c81_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:42 np0005466030 nova_compute[230518]: 2025-10-02 12:10:42.342 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] resizing rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:10:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:42.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:42 np0005466030 nova_compute[230518]: 2025-10-02 12:10:42.500 2 DEBUG nova.objects.instance [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lazy-loading 'migration_context' on Instance uuid e582fd0b-cd3a-4903-9ed3-024359954c81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:10:42 np0005466030 nova_compute[230518]: 2025-10-02 12:10:42.573 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:42 np0005466030 nova_compute[230518]: 2025-10-02 12:10:42.599 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:42 np0005466030 nova_compute[230518]: 2025-10-02 12:10:42.605 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:42 np0005466030 nova_compute[230518]: 2025-10-02 12:10:42.606 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:42 np0005466030 nova_compute[230518]: 2025-10-02 12:10:42.606 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:42 np0005466030 nova_compute[230518]: 2025-10-02 12:10:42.629 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:42 np0005466030 nova_compute[230518]: 2025-10-02 12:10:42.630 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:42.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:42 np0005466030 nova_compute[230518]: 2025-10-02 12:10:42.650 2 DEBUG nova.network.neutron [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Successfully updated port: 2887bff5-92aa-4c83-9902-292a59e0add5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:10:42 np0005466030 nova_compute[230518]: 2025-10-02 12:10:42.666 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "refresh_cache-e582fd0b-cd3a-4903-9ed3-024359954c81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:10:42 np0005466030 nova_compute[230518]: 2025-10-02 12:10:42.666 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquired lock "refresh_cache-e582fd0b-cd3a-4903-9ed3-024359954c81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:10:42 np0005466030 nova_compute[230518]: 2025-10-02 12:10:42.666 2 DEBUG nova.network.neutron [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:10:42 np0005466030 nova_compute[230518]: 2025-10-02 12:10:42.682 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:42 np0005466030 nova_compute[230518]: 2025-10-02 12:10:42.683 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:42 np0005466030 nova_compute[230518]: 2025-10-02 12:10:42.714 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:42 np0005466030 nova_compute[230518]: 2025-10-02 12:10:42.718 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 e582fd0b-cd3a-4903-9ed3-024359954c81_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:42 np0005466030 nova_compute[230518]: 2025-10-02 12:10:42.944 2 DEBUG nova.network.neutron [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:10:43 np0005466030 nova_compute[230518]: 2025-10-02 12:10:43.210 2 DEBUG nova.compute.manager [req-843ebe39-17f6-4269-b786-ba89007fa53b req-201f5071-1357-49f8-b2da-82929bc362f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received event network-changed-2887bff5-92aa-4c83-9902-292a59e0add5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:43 np0005466030 nova_compute[230518]: 2025-10-02 12:10:43.210 2 DEBUG nova.compute.manager [req-843ebe39-17f6-4269-b786-ba89007fa53b req-201f5071-1357-49f8-b2da-82929bc362f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Refreshing instance network info cache due to event network-changed-2887bff5-92aa-4c83-9902-292a59e0add5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:10:43 np0005466030 nova_compute[230518]: 2025-10-02 12:10:43.211 2 DEBUG oslo_concurrency.lockutils [req-843ebe39-17f6-4269-b786-ba89007fa53b req-201f5071-1357-49f8-b2da-82929bc362f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e582fd0b-cd3a-4903-9ed3-024359954c81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:10:43 np0005466030 nova_compute[230518]: 2025-10-02 12:10:43.733 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 e582fd0b-cd3a-4903-9ed3-024359954c81_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:43 np0005466030 nova_compute[230518]: 2025-10-02 12:10:43.823 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:10:43 np0005466030 nova_compute[230518]: 2025-10-02 12:10:43.824 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Ensure instance console log exists: /var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:10:43 np0005466030 nova_compute[230518]: 2025-10-02 12:10:43.824 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:43 np0005466030 nova_compute[230518]: 2025-10-02 12:10:43.824 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:43 np0005466030 nova_compute[230518]: 2025-10-02 12:10:43.824 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:44 np0005466030 nova_compute[230518]: 2025-10-02 12:10:44.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:44.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:44.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.457 2 DEBUG nova.network.neutron [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Updating instance_info_cache with network_info: [{"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.599 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Releasing lock "refresh_cache-e582fd0b-cd3a-4903-9ed3-024359954c81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.600 2 DEBUG nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Instance network_info: |[{"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.600 2 DEBUG oslo_concurrency.lockutils [req-843ebe39-17f6-4269-b786-ba89007fa53b req-201f5071-1357-49f8-b2da-82929bc362f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e582fd0b-cd3a-4903-9ed3-024359954c81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.600 2 DEBUG nova.network.neutron [req-843ebe39-17f6-4269-b786-ba89007fa53b req-201f5071-1357-49f8-b2da-82929bc362f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Refreshing network info cache for port 2887bff5-92aa-4c83-9902-292a59e0add5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.604 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Start _get_guest_xml network_info=[{"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vdb', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'size': 1, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.609 2 WARNING nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.615 2 DEBUG nova.virt.libvirt.host [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.616 2 DEBUG nova.virt.libvirt.host [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.620 2 DEBUG nova.virt.libvirt.host [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.621 2 DEBUG nova.virt.libvirt.host [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.622 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.623 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:09:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='1268098770',id=14,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-2032995918',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.624 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.624 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.624 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.625 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.625 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.625 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.626 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.626 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.626 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.627 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:10:45 np0005466030 nova_compute[230518]: 2025-10-02 12:10:45.630 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:10:46 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2705683967' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:10:46 np0005466030 nova_compute[230518]: 2025-10-02 12:10:46.079 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:46 np0005466030 nova_compute[230518]: 2025-10-02 12:10:46.080 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:46.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:46 np0005466030 nova_compute[230518]: 2025-10-02 12:10:46.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:10:46 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/946783354' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:10:46 np0005466030 nova_compute[230518]: 2025-10-02 12:10:46.513 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:46 np0005466030 nova_compute[230518]: 2025-10-02 12:10:46.535 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:46 np0005466030 nova_compute[230518]: 2025-10-02 12:10:46.539 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:46.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:10:47 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2087921985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.139 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.142 2 DEBUG nova.virt.libvirt.vif [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:10:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-425689073',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-425689073',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(14),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-425689073',id=7,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=14,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZIzbMfwVBTTToNPNrnTuckoO8kg28OkEFKvLyHQzGuKrzHQ5Xu2/PJVR0z9htMcy/llPoN2mM4eTO6OIHrSZOwjPe/taZdTaEhmzjh34Ak2Vyd+nZrFG8VSiYQyffl8g==',key_name='tempest-keypair-1186587753',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2c66662015f74444b15ea4b3d8644714',ramdisk_id='',reservation_id='r-nm6ufj5e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-957372394',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-957372394-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:10:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='531ddb9812364f7b9743bd02a8ed797f',uuid=e582fd0b-cd3a-4903-9ed3-024359954c81,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.143 2 DEBUG nova.network.os_vif_util [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converting VIF {"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.144 2 DEBUG nova.network.os_vif_util [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:c6:7c,bridge_name='br-int',has_traffic_filtering=True,id=2887bff5-92aa-4c83-9902-292a59e0add5,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2887bff5-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.146 2 DEBUG nova.objects.instance [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lazy-loading 'pci_devices' on Instance uuid e582fd0b-cd3a-4903-9ed3-024359954c81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.326 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:  <uuid>e582fd0b-cd3a-4903-9ed3-024359954c81</uuid>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:  <name>instance-00000007</name>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-425689073</nova:name>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:10:45</nova:creationTime>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <nova:flavor name="tempest-flavor_with_ephemeral_1-2032995918">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <nova:ephemeral>1</nova:ephemeral>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <nova:user uuid="531ddb9812364f7b9743bd02a8ed797f">tempest-ServersWithSpecificFlavorTestJSON-957372394-project-member</nova:user>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <nova:project uuid="2c66662015f74444b15ea4b3d8644714">tempest-ServersWithSpecificFlavorTestJSON-957372394</nova:project>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <nova:port uuid="2887bff5-92aa-4c83-9902-292a59e0add5">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <entry name="serial">e582fd0b-cd3a-4903-9ed3-024359954c81</entry>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <entry name="uuid">e582fd0b-cd3a-4903-9ed3-024359954c81</entry>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/e582fd0b-cd3a-4903-9ed3-024359954c81_disk">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/e582fd0b-cd3a-4903-9ed3-024359954c81_disk.eph0">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <target dev="vdb" bus="virtio"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/e582fd0b-cd3a-4903-9ed3-024359954c81_disk.config">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:53:c6:7c"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <target dev="tap2887bff5-92"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81/console.log" append="off"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:10:47 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:10:47 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:10:47 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:10:47 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
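The domain XML dumped above attaches three RBD-backed devices (vda root, vdb ephemeral, sda config-drive CD-ROM), each with the same three Ceph monitors and auth secret. A minimal sketch of generating one such `<disk type="network">` element with Python's `xml.etree` (a hypothetical helper for illustration, not Nova's actual code; pool/image name, monitor IPs, and secret UUID are taken from this log):

```python
# Sketch (not Nova code): build a libvirt RBD <disk> element matching the
# structure seen in the domain XML logged above.
import xml.etree.ElementTree as ET

def rbd_disk(pool_image, monitors, target_dev, secret_uuid, username="openstack"):
    disk = ET.Element("disk", type="network", device="disk")
    ET.SubElement(disk, "driver", type="raw", cache="none")
    source = ET.SubElement(disk, "source", protocol="rbd", name=pool_image)
    for host in monitors:  # one <host> per Ceph monitor
        ET.SubElement(source, "host", name=host, port="6789")
    auth = ET.SubElement(disk, "auth", username=username)
    ET.SubElement(auth, "secret", type="ceph", uuid=secret_uuid)
    ET.SubElement(disk, "target", dev=target_dev, bus="virtio")
    return disk

xml = ET.tostring(
    rbd_disk("vms/e582fd0b-cd3a-4903-9ed3-024359954c81_disk.eph0",
             ["192.168.122.100", "192.168.122.102", "192.168.122.101"],
             "vdb", "20fdc58c-b037-5094-a8ef-d490aa7c36f3"),
    encoding="unicode")
```

The `<secret uuid="…"/>` refers to a libvirt secret object holding the Ceph key for the `openstack` client; qemu resolves the image over all listed monitors.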
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.327 2 DEBUG nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Preparing to wait for external event network-vif-plugged-2887bff5-92aa-4c83-9902-292a59e0add5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.328 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.328 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.329 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.329 2 DEBUG nova.virt.libvirt.vif [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:10:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-425689073',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-425689073',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(14),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-425689073',id=7,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=14,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZIzbMfwVBTTToNPNrnTuckoO8kg28OkEFKvLyHQzGuKrzHQ5Xu2/PJVR0z9htMcy/llPoN2mM4eTO6OIHrSZOwjPe/taZdTaEhmzjh34Ak2Vyd+nZrFG8VSiYQyffl8g==',key_name='tempest-keypair-1186587753',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2c66662015f74444b15ea4b3d8644714',ramdisk_id='',reservation_id='r-nm6ufj5e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-957372394',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-957372394-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:10:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='531ddb9812364f7b9743bd02a8ed797f',uuid=e582fd0b-cd3a-4903-9ed3-024359954c81,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.330 2 DEBUG nova.network.os_vif_util [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converting VIF {"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.330 2 DEBUG nova.network.os_vif_util [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:c6:7c,bridge_name='br-int',has_traffic_filtering=True,id=2887bff5-92aa-4c83-9902-292a59e0add5,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2887bff5-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.331 2 DEBUG os_vif [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:c6:7c,bridge_name='br-int',has_traffic_filtering=True,id=2887bff5-92aa-4c83-9902-292a59e0add5,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2887bff5-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.332 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.332 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2887bff5-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.336 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2887bff5-92, col_values=(('external_ids', {'iface-id': '2887bff5-92aa-4c83-9902-292a59e0add5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:c6:7c', 'vm-uuid': 'e582fd0b-cd3a-4903-9ed3-024359954c81'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:47 np0005466030 NetworkManager[44960]: <info>  [1759407047.3382] manager: (tap2887bff5-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.343 2 INFO os_vif [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:c6:7c,bridge_name='br-int',has_traffic_filtering=True,id=2887bff5-92aa-4c83-9902-292a59e0add5,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2887bff5-92')#033[00m
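The os-vif plug above resolves to three OVSDB operations: ensure `br-int` exists (a no-op here, hence "Transaction caused no change"), add the tap port, and set the Neutron `external_ids` on the Interface row — the `iface-id` is what ovn-controller later matches when it claims the lport. Assuming a live Open vSwitch host, roughly the equivalent `ovs-vsctl` invocations would be (sketch only; commands are listed, not executed):

```python
# Equivalent ovs-vsctl commands for the three OVSDB transactions logged above.
# Names/UUIDs are taken from this log; this is an illustrative translation,
# not what os-vif literally runs (it uses the ovsdbapp IDL directly).
port = "tap2887bff5-92"
iface_id = "2887bff5-92aa-4c83-9902-292a59e0add5"
mac = "fa:16:3e:53:c6:7c"
vm_uuid = "e582fd0b-cd3a-4903-9ed3-024359954c81"

cmds = [
    # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
    ["ovs-vsctl", "--may-exist", "add-br", "br-int",
     "--", "set", "Bridge", "br-int", "datapath_type=system"],
    # AddPortCommand(bridge=br-int, port=tap..., may_exist=True)
    ["ovs-vsctl", "--may-exist", "add-port", "br-int", port],
    # DbSetCommand(table=Interface, record=tap..., external_ids={...})
    ["ovs-vsctl", "set", "Interface", port,
     f"external_ids:iface-id={iface_id}",
     "external_ids:iface-status=active",
     f"external_ids:attached-mac={mac}",
     f"external_ids:vm-uuid={vm_uuid}"],
]
```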
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.448 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407032.447705, c6cef7fd-49cb-4781-97ad-027e835dcc5c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.449 2 INFO nova.compute.manager [-] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.669 2 DEBUG nova.compute.manager [None req-aa432e24-adcf-4944-befb-c8a0a8a61f45 - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.720 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.720 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.721 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.721 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] No VIF found with MAC fa:16:3e:53:c6:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.722 2 INFO nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Using config drive#033[00m
Oct  2 08:10:47 np0005466030 nova_compute[230518]: 2025-10-02 12:10:47.747 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:48.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:48.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:48 np0005466030 nova_compute[230518]: 2025-10-02 12:10:48.945 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407033.9453151, 3affd040-669b-4cde-a697-00b991236a6c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:48 np0005466030 nova_compute[230518]: 2025-10-02 12:10:48.946 2 INFO nova.compute.manager [-] [instance: 3affd040-669b-4cde-a697-00b991236a6c] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:10:49 np0005466030 nova_compute[230518]: 2025-10-02 12:10:49.037 2 DEBUG nova.compute.manager [None req-f2bc264d-d69f-4a90-bbab-9c0e78b11138 - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:49 np0005466030 nova_compute[230518]: 2025-10-02 12:10:49.334 2 INFO nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Creating config drive at /var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81/disk.config#033[00m
Oct  2 08:10:49 np0005466030 nova_compute[230518]: 2025-10-02 12:10:49.339 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9g0b7w4w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:49 np0005466030 nova_compute[230518]: 2025-10-02 12:10:49.475 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9g0b7w4w" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:49 np0005466030 nova_compute[230518]: 2025-10-02 12:10:49.501 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:10:49 np0005466030 nova_compute[230518]: 2025-10-02 12:10:49.505 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81/disk.config e582fd0b-cd3a-4903-9ed3-024359954c81_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:10:49 np0005466030 nova_compute[230518]: 2025-10-02 12:10:49.615 2 DEBUG nova.network.neutron [req-843ebe39-17f6-4269-b786-ba89007fa53b req-201f5071-1357-49f8-b2da-82929bc362f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Updated VIF entry in instance network info cache for port 2887bff5-92aa-4c83-9902-292a59e0add5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:10:49 np0005466030 nova_compute[230518]: 2025-10-02 12:10:49.615 2 DEBUG nova.network.neutron [req-843ebe39-17f6-4269-b786-ba89007fa53b req-201f5071-1357-49f8-b2da-82929bc362f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Updating instance_info_cache with network_info: [{"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:10:49 np0005466030 nova_compute[230518]: 2025-10-02 12:10:49.782 2 DEBUG oslo_concurrency.lockutils [req-843ebe39-17f6-4269-b786-ba89007fa53b req-201f5071-1357-49f8-b2da-82929bc362f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e582fd0b-cd3a-4903-9ed3-024359954c81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:10:50 np0005466030 nova_compute[230518]: 2025-10-02 12:10:50.169 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81/disk.config e582fd0b-cd3a-4903-9ed3-024359954c81_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:10:50 np0005466030 nova_compute[230518]: 2025-10-02 12:10:50.169 2 INFO nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Deleting local config drive /var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81/disk.config because it was imported into RBD.#033[00m
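The config-drive sequence above is: build an ISO9660 volume labeled `config-2` with mkisofs, `rbd import` it into the `vms` pool as `<uuid>_disk.config`, then delete the local file. A sketch of the same flow (hypothetical helper; paths, pool, and client id are from this log, and the `-publisher` option Nova passes is omitted for brevity):

```python
# Sketch of the config-drive flow logged above; builds the command argv lists
# rather than executing them (mkisofs/rbd are not assumed present here).
import os

def config_drive_cmds(instance_uuid, tmp_dir, base="/var/lib/nova/instances"):
    iso = os.path.join(base, instance_uuid, "disk.config")
    mkisofs = ["mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
               "-allow-multidot", "-l", "-quiet", "-J", "-r",
               "-V", "config-2", tmp_dir]          # volume label "config-2"
    rbd = ["rbd", "import", "--pool", "vms", iso,
           f"{instance_uuid}_disk.config", "--image-format=2",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    return mkisofs, rbd, iso                        # iso is removed after import
```

Cloud-init inside the guest finds the drive by that `config-2` volume label, which is why it is passed through as a CD-ROM (`sda`, bus `sata`) in the domain XML.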
Oct  2 08:10:50 np0005466030 kernel: tap2887bff5-92: entered promiscuous mode
Oct  2 08:10:50 np0005466030 NetworkManager[44960]: <info>  [1759407050.2346] manager: (tap2887bff5-92): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Oct  2 08:10:50 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:50Z|00046|binding|INFO|Claiming lport 2887bff5-92aa-4c83-9902-292a59e0add5 for this chassis.
Oct  2 08:10:50 np0005466030 nova_compute[230518]: 2025-10-02 12:10:50.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:50 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:50Z|00047|binding|INFO|2887bff5-92aa-4c83-9902-292a59e0add5: Claiming fa:16:3e:53:c6:7c 10.100.0.4
Oct  2 08:10:50 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:50Z|00048|binding|INFO|Setting lport 2887bff5-92aa-4c83-9902-292a59e0add5 ovn-installed in OVS
Oct  2 08:10:50 np0005466030 nova_compute[230518]: 2025-10-02 12:10:50.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:50 np0005466030 nova_compute[230518]: 2025-10-02 12:10:50.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:50 np0005466030 systemd-udevd[235004]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:10:50 np0005466030 systemd-machined[188247]: New machine qemu-3-instance-00000007.
Oct  2 08:10:50 np0005466030 systemd[1]: Started Virtual Machine qemu-3-instance-00000007.
Oct  2 08:10:50 np0005466030 NetworkManager[44960]: <info>  [1759407050.2854] device (tap2887bff5-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:10:50 np0005466030 NetworkManager[44960]: <info>  [1759407050.2863] device (tap2887bff5-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:10:50 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:50Z|00049|binding|INFO|Setting lport 2887bff5-92aa-4c83-9902-292a59e0add5 up in Southbound
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.290 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:c6:7c 10.100.0.4'], port_security=['fa:16:3e:53:c6:7c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e582fd0b-cd3a-4903-9ed3-024359954c81', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38c94475-c52a-421c-9bc8-95fdc649b043', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2c66662015f74444b15ea4b3d8644714', 'neutron:revision_number': '2', 'neutron:security_group_ids': '92a0a32f-072c-4975-aa02-ea951ff6e560', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ca6fa6e-a437-4756-b504-a14a9bfa02d8, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=2887bff5-92aa-4c83-9902-292a59e0add5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.291 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 2887bff5-92aa-4c83-9902-292a59e0add5 in datapath 38c94475-c52a-421c-9bc8-95fdc649b043 bound to our chassis#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.294 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38c94475-c52a-421c-9bc8-95fdc649b043#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.305 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b13af214-6715-4855-a5d4-5215d7f44b8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.307 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap38c94475-c1 in ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.309 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap38c94475-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.309 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b4de73cd-4682-4182-b197-8add4c0130a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.310 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[051dcb55-dc4c-44b1-affa-1fd0586e31f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.323 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[65d073cb-93ca-4fba-b1d1-b81fe2c4e8f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.377 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9c28f62a-a1fc-4df3-8aae-3335085b33d3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.414 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[9a69ad3d-071f-4694-811b-500b4adec33f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.422 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e1217883-cfe4-42ce-93de-014e2262c26b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:50 np0005466030 NetworkManager[44960]: <info>  [1759407050.4243] manager: (tap38c94475-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Oct  2 08:10:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:50.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.471 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8066e40a-f335-488e-8fc2-49e78341ff46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.478 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5451055f-a35b-4b98-a656-48c0fb371d00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:50 np0005466030 NetworkManager[44960]: <info>  [1759407050.5096] device (tap38c94475-c0): carrier: link connected
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.524 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[016cbe2e-4037-43f0-a6f0-ed7ad87d5fa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.543 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca519e1-6135-4603-9785-347509379b5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38c94475-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:d2:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491406, 'reachable_time': 40938, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235039, 'error': None, 'target': 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.556 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[94a43f48-b708-4c24-ab01-07f5b7dffbb1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe76:d299'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491406, 'tstamp': 491406}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235040, 'error': None, 'target': 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.573 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[21a1027d-0080-40ce-889d-3c4f2f8790bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38c94475-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:d2:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491406, 'reachable_time': 40938, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 235041, 'error': None, 'target': 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.602 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[89dfd6f5-7e14-4304-8802-b2cf58495f30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:50.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.674 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5eba45c1-e266-4438-ab07-54048abb2633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.676 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38c94475-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.676 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.677 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38c94475-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:50 np0005466030 kernel: tap38c94475-c0: entered promiscuous mode
Oct  2 08:10:50 np0005466030 nova_compute[230518]: 2025-10-02 12:10:50.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:50 np0005466030 NetworkManager[44960]: <info>  [1759407050.6814] manager: (tap38c94475-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Oct  2 08:10:50 np0005466030 nova_compute[230518]: 2025-10-02 12:10:50.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.683 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38c94475-c0, col_values=(('external_ids', {'iface-id': 'cb8aa481-1d5c-4f65-bc0c-1f1aa2cac89a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:10:50 np0005466030 nova_compute[230518]: 2025-10-02 12:10:50.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:50 np0005466030 ovn_controller[129257]: 2025-10-02T12:10:50Z|00050|binding|INFO|Releasing lport cb8aa481-1d5c-4f65-bc0c-1f1aa2cac89a from this chassis (sb_readonly=0)
Oct  2 08:10:50 np0005466030 nova_compute[230518]: 2025-10-02 12:10:50.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.700 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/38c94475-c52a-421c-9bc8-95fdc649b043.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/38c94475-c52a-421c-9bc8-95fdc649b043.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.701 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[68287536-4eaf-45fc-b18c-25654070630c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.702 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-38c94475-c52a-421c-9bc8-95fdc649b043
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/38c94475-c52a-421c-9bc8-95fdc649b043.pid.haproxy
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 38c94475-c52a-421c-9bc8-95fdc649b043
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:10:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.703 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'env', 'PROCESS_TAG=haproxy-38c94475-c52a-421c-9bc8-95fdc649b043', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/38c94475-c52a-421c-9bc8-95fdc649b043.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:10:51 np0005466030 podman[235105]: 2025-10-02 12:10:51.1539104 +0000 UTC m=+0.070123897 container create 5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:10:51 np0005466030 systemd[1]: Started libpod-conmon-5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c.scope.
Oct  2 08:10:51 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:10:51 np0005466030 podman[235105]: 2025-10-02 12:10:51.11099904 +0000 UTC m=+0.027212557 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:10:51 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a0e6f1bf3ec86e5bc2215d1c862c93a70af47fd512dbd4ba72e6130027c198/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:10:51 np0005466030 podman[235105]: 2025-10-02 12:10:51.297451124 +0000 UTC m=+0.213664651 container init 5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:10:51 np0005466030 podman[235143]: 2025-10-02 12:10:51.300464769 +0000 UTC m=+0.120228523 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:10:51 np0005466030 podman[235105]: 2025-10-02 12:10:51.303760543 +0000 UTC m=+0.219974040 container start 5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:10:51 np0005466030 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[235152]: [NOTICE]   (235173) : New worker (235175) forked
Oct  2 08:10:51 np0005466030 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[235152]: [NOTICE]   (235173) : Loading success.
Oct  2 08:10:51 np0005466030 nova_compute[230518]: 2025-10-02 12:10:51.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:51 np0005466030 nova_compute[230518]: 2025-10-02 12:10:51.680 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407051.6799974, e582fd0b-cd3a-4903-9ed3-024359954c81 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:51 np0005466030 nova_compute[230518]: 2025-10-02 12:10:51.681 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] VM Started (Lifecycle Event)#033[00m
Oct  2 08:10:52 np0005466030 nova_compute[230518]: 2025-10-02 12:10:52.033 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:52 np0005466030 nova_compute[230518]: 2025-10-02 12:10:52.038 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407051.6800942, e582fd0b-cd3a-4903-9ed3-024359954c81 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:52 np0005466030 nova_compute[230518]: 2025-10-02 12:10:52.038 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:10:52 np0005466030 nova_compute[230518]: 2025-10-02 12:10:52.313 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:52 np0005466030 nova_compute[230518]: 2025-10-02 12:10:52.316 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:10:52 np0005466030 nova_compute[230518]: 2025-10-02 12:10:52.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:52.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:52 np0005466030 nova_compute[230518]: 2025-10-02 12:10:52.569 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:10:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:52.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:54.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:10:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:54.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:10:54 np0005466030 podman[235185]: 2025-10-02 12:10:54.798320693 +0000 UTC m=+0.051336706 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:10:55 np0005466030 nova_compute[230518]: 2025-10-02 12:10:55.881 2 DEBUG nova.compute.manager [req-41223464-719d-4364-8d05-6af6b2b23afe req-25b473ef-8960-4c4a-82b8-5acdb25827b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received event network-vif-plugged-2887bff5-92aa-4c83-9902-292a59e0add5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:55 np0005466030 nova_compute[230518]: 2025-10-02 12:10:55.881 2 DEBUG oslo_concurrency.lockutils [req-41223464-719d-4364-8d05-6af6b2b23afe req-25b473ef-8960-4c4a-82b8-5acdb25827b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:55 np0005466030 nova_compute[230518]: 2025-10-02 12:10:55.882 2 DEBUG oslo_concurrency.lockutils [req-41223464-719d-4364-8d05-6af6b2b23afe req-25b473ef-8960-4c4a-82b8-5acdb25827b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:55 np0005466030 nova_compute[230518]: 2025-10-02 12:10:55.882 2 DEBUG oslo_concurrency.lockutils [req-41223464-719d-4364-8d05-6af6b2b23afe req-25b473ef-8960-4c4a-82b8-5acdb25827b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:55 np0005466030 nova_compute[230518]: 2025-10-02 12:10:55.882 2 DEBUG nova.compute.manager [req-41223464-719d-4364-8d05-6af6b2b23afe req-25b473ef-8960-4c4a-82b8-5acdb25827b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Processing event network-vif-plugged-2887bff5-92aa-4c83-9902-292a59e0add5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:10:55 np0005466030 nova_compute[230518]: 2025-10-02 12:10:55.882 2 DEBUG nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:10:55 np0005466030 nova_compute[230518]: 2025-10-02 12:10:55.885 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407055.885355, e582fd0b-cd3a-4903-9ed3-024359954c81 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:10:55 np0005466030 nova_compute[230518]: 2025-10-02 12:10:55.885 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:10:55 np0005466030 nova_compute[230518]: 2025-10-02 12:10:55.887 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:10:55 np0005466030 nova_compute[230518]: 2025-10-02 12:10:55.890 2 INFO nova.virt.libvirt.driver [-] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Instance spawned successfully.#033[00m
Oct  2 08:10:55 np0005466030 nova_compute[230518]: 2025-10-02 12:10:55.890 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:10:56 np0005466030 nova_compute[230518]: 2025-10-02 12:10:56.211 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:56 np0005466030 nova_compute[230518]: 2025-10-02 12:10:56.216 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:56 np0005466030 nova_compute[230518]: 2025-10-02 12:10:56.216 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:56 np0005466030 nova_compute[230518]: 2025-10-02 12:10:56.217 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:56 np0005466030 nova_compute[230518]: 2025-10-02 12:10:56.217 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:56 np0005466030 nova_compute[230518]: 2025-10-02 12:10:56.217 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:56 np0005466030 nova_compute[230518]: 2025-10-02 12:10:56.218 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:10:56 np0005466030 nova_compute[230518]: 2025-10-02 12:10:56.222 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:10:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:56.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:56 np0005466030 nova_compute[230518]: 2025-10-02 12:10:56.454 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:10:56 np0005466030 nova_compute[230518]: 2025-10-02 12:10:56.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:10:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:56.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:10:56 np0005466030 nova_compute[230518]: 2025-10-02 12:10:56.857 2 INFO nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Took 15.98 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:10:56 np0005466030 nova_compute[230518]: 2025-10-02 12:10:56.858 2 DEBUG nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:10:57 np0005466030 nova_compute[230518]: 2025-10-02 12:10:57.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:10:57 np0005466030 nova_compute[230518]: 2025-10-02 12:10:57.426 2 INFO nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Took 17.60 seconds to build instance.#033[00m
Oct  2 08:10:57 np0005466030 nova_compute[230518]: 2025-10-02 12:10:57.505 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:10:58 np0005466030 nova_compute[230518]: 2025-10-02 12:10:58.031 2 DEBUG nova.compute.manager [req-e9fcbb3b-4e2f-4412-ad23-ecabe21b205d req-60bedcc5-e184-48b6-a476-3cd30e7f1136 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received event network-vif-plugged-2887bff5-92aa-4c83-9902-292a59e0add5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:10:58 np0005466030 nova_compute[230518]: 2025-10-02 12:10:58.031 2 DEBUG oslo_concurrency.lockutils [req-e9fcbb3b-4e2f-4412-ad23-ecabe21b205d req-60bedcc5-e184-48b6-a476-3cd30e7f1136 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:10:58 np0005466030 nova_compute[230518]: 2025-10-02 12:10:58.032 2 DEBUG oslo_concurrency.lockutils [req-e9fcbb3b-4e2f-4412-ad23-ecabe21b205d req-60bedcc5-e184-48b6-a476-3cd30e7f1136 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:10:58 np0005466030 nova_compute[230518]: 2025-10-02 12:10:58.032 2 DEBUG oslo_concurrency.lockutils [req-e9fcbb3b-4e2f-4412-ad23-ecabe21b205d req-60bedcc5-e184-48b6-a476-3cd30e7f1136 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:10:58 np0005466030 nova_compute[230518]: 2025-10-02 12:10:58.032 2 DEBUG nova.compute.manager [req-e9fcbb3b-4e2f-4412-ad23-ecabe21b205d req-60bedcc5-e184-48b6-a476-3cd30e7f1136 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] No waiting events found dispatching network-vif-plugged-2887bff5-92aa-4c83-9902-292a59e0add5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:10:58 np0005466030 nova_compute[230518]: 2025-10-02 12:10:58.032 2 WARNING nova.compute.manager [req-e9fcbb3b-4e2f-4412-ad23-ecabe21b205d req-60bedcc5-e184-48b6-a476-3cd30e7f1136 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received unexpected event network-vif-plugged-2887bff5-92aa-4c83-9902-292a59e0add5 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:10:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:10:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:58.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:10:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:10:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:10:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:58.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:11:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:00.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:11:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:00.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:11:00 np0005466030 nova_compute[230518]: 2025-10-02 12:11:00.728 2 DEBUG nova.compute.manager [req-f00623cd-d2f3-4b00-9b0f-0eacaf82ddde req-ad6e3154-8566-41d3-bf45-19f60df228e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received event network-changed-2887bff5-92aa-4c83-9902-292a59e0add5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:11:00 np0005466030 nova_compute[230518]: 2025-10-02 12:11:00.729 2 DEBUG nova.compute.manager [req-f00623cd-d2f3-4b00-9b0f-0eacaf82ddde req-ad6e3154-8566-41d3-bf45-19f60df228e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Refreshing instance network info cache due to event network-changed-2887bff5-92aa-4c83-9902-292a59e0add5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:11:00 np0005466030 nova_compute[230518]: 2025-10-02 12:11:00.730 2 DEBUG oslo_concurrency.lockutils [req-f00623cd-d2f3-4b00-9b0f-0eacaf82ddde req-ad6e3154-8566-41d3-bf45-19f60df228e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e582fd0b-cd3a-4903-9ed3-024359954c81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:11:00 np0005466030 nova_compute[230518]: 2025-10-02 12:11:00.730 2 DEBUG oslo_concurrency.lockutils [req-f00623cd-d2f3-4b00-9b0f-0eacaf82ddde req-ad6e3154-8566-41d3-bf45-19f60df228e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e582fd0b-cd3a-4903-9ed3-024359954c81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:11:00 np0005466030 nova_compute[230518]: 2025-10-02 12:11:00.730 2 DEBUG nova.network.neutron [req-f00623cd-d2f3-4b00-9b0f-0eacaf82ddde req-ad6e3154-8566-41d3-bf45-19f60df228e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Refreshing network info cache for port 2887bff5-92aa-4c83-9902-292a59e0add5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:11:01 np0005466030 nova_compute[230518]: 2025-10-02 12:11:01.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:02 np0005466030 ovn_controller[129257]: 2025-10-02T12:11:02Z|00051|binding|INFO|Releasing lport cb8aa481-1d5c-4f65-bc0c-1f1aa2cac89a from this chassis (sb_readonly=0)
Oct  2 08:11:02 np0005466030 nova_compute[230518]: 2025-10-02 12:11:02.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:02 np0005466030 nova_compute[230518]: 2025-10-02 12:11:02.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:02.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:11:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:02.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:11:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:03 np0005466030 nova_compute[230518]: 2025-10-02 12:11:03.674 2 DEBUG nova.network.neutron [req-f00623cd-d2f3-4b00-9b0f-0eacaf82ddde req-ad6e3154-8566-41d3-bf45-19f60df228e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Updated VIF entry in instance network info cache for port 2887bff5-92aa-4c83-9902-292a59e0add5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:11:03 np0005466030 nova_compute[230518]: 2025-10-02 12:11:03.675 2 DEBUG nova.network.neutron [req-f00623cd-d2f3-4b00-9b0f-0eacaf82ddde req-ad6e3154-8566-41d3-bf45-19f60df228e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Updating instance_info_cache with network_info: [{"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:11:03 np0005466030 nova_compute[230518]: 2025-10-02 12:11:03.694 2 DEBUG oslo_concurrency.lockutils [req-f00623cd-d2f3-4b00-9b0f-0eacaf82ddde req-ad6e3154-8566-41d3-bf45-19f60df228e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e582fd0b-cd3a-4903-9ed3-024359954c81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:11:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:11:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:04.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:11:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:11:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:04.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:11:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:06.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:06 np0005466030 nova_compute[230518]: 2025-10-02 12:11:06.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:06.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:06 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Oct  2 08:11:06 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:06.891070) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:11:06 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Oct  2 08:11:06 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407066891137, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 902, "num_deletes": 255, "total_data_size": 1648262, "memory_usage": 1666648, "flush_reason": "Manual Compaction"}
Oct  2 08:11:06 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407067027090, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1077109, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21787, "largest_seqno": 22684, "table_properties": {"data_size": 1072939, "index_size": 1822, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9373, "raw_average_key_size": 18, "raw_value_size": 1064325, "raw_average_value_size": 2150, "num_data_blocks": 80, "num_entries": 495, "num_filter_entries": 495, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407011, "oldest_key_time": 1759407011, "file_creation_time": 1759407066, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 136063 microseconds, and 3615 cpu microseconds.
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.027140) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1077109 bytes OK
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.027162) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.081672) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.081744) EVENT_LOG_v1 {"time_micros": 1759407067081731, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.081777) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1643601, prev total WAL file size 1643601, number of live WAL files 2.
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.083092) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1051KB)], [42(8173KB)]
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407067083186, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 9447160, "oldest_snapshot_seqno": -1}
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 4641 keys, 9303050 bytes, temperature: kUnknown
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407067160659, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 9303050, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9270616, "index_size": 19716, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11653, "raw_key_size": 116700, "raw_average_key_size": 25, "raw_value_size": 9185165, "raw_average_value_size": 1979, "num_data_blocks": 814, "num_entries": 4641, "num_filter_entries": 4641, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759407067, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.161039) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 9303050 bytes
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.164529) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.1 rd, 120.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.0 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(17.4) write-amplify(8.6) OK, records in: 5170, records dropped: 529 output_compression: NoCompression
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.164568) EVENT_LOG_v1 {"time_micros": 1759407067164553, "job": 24, "event": "compaction_finished", "compaction_time_micros": 77377, "compaction_time_cpu_micros": 24169, "output_level": 6, "num_output_files": 1, "total_output_size": 9303050, "num_input_records": 5170, "num_output_records": 4641, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407067164938, "job": 24, "event": "table_file_deletion", "file_number": 44}
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407067166144, "job": 24, "event": "table_file_deletion", "file_number": 42}
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.082652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.166224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.166229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.166231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.166232) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.166234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:07 np0005466030 nova_compute[230518]: 2025-10-02 12:11:07.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Oct  2 08:11:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:11:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:08.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:11:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:11:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:08.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.769708) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068769745, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 292, "num_deletes": 251, "total_data_size": 94560, "memory_usage": 100888, "flush_reason": "Manual Compaction"}
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068778535, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 61781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22689, "largest_seqno": 22976, "table_properties": {"data_size": 59852, "index_size": 157, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5051, "raw_average_key_size": 18, "raw_value_size": 56007, "raw_average_value_size": 204, "num_data_blocks": 7, "num_entries": 274, "num_filter_entries": 274, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407067, "oldest_key_time": 1759407067, "file_creation_time": 1759407068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 8881 microseconds, and 1116 cpu microseconds.
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.778585) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 61781 bytes OK
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.778605) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.782485) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.782508) EVENT_LOG_v1 {"time_micros": 1759407068782501, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.782528) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 92397, prev total WAL file size 92397, number of live WAL files 2.
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.782946) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(60KB)], [45(9085KB)]
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068783002, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 9364831, "oldest_snapshot_seqno": -1}
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 4402 keys, 7343207 bytes, temperature: kUnknown
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068827981, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 7343207, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7314085, "index_size": 17044, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 112424, "raw_average_key_size": 25, "raw_value_size": 7234423, "raw_average_value_size": 1643, "num_data_blocks": 693, "num_entries": 4402, "num_filter_entries": 4402, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759407068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.828192) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 7343207 bytes
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.834111) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.9 rd, 163.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 8.9 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(270.4) write-amplify(118.9) OK, records in: 4915, records dropped: 513 output_compression: NoCompression
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.834155) EVENT_LOG_v1 {"time_micros": 1759407068834139, "job": 26, "event": "compaction_finished", "compaction_time_micros": 45043, "compaction_time_cpu_micros": 16688, "output_level": 6, "num_output_files": 1, "total_output_size": 7343207, "num_input_records": 4915, "num_output_records": 4402, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068834357, "job": 26, "event": "table_file_deletion", "file_number": 47}
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068835671, "job": 26, "event": "table_file_deletion", "file_number": 45}
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.782808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.835698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.835702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.835703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.835704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.835706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:11:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:10.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:11:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:10.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:11:11 np0005466030 nova_compute[230518]: 2025-10-02 12:11:11.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:11 np0005466030 podman[235205]: 2025-10-02 12:11:11.827631206 +0000 UTC m=+0.075852507 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:11:11 np0005466030 podman[235204]: 2025-10-02 12:11:11.865167996 +0000 UTC m=+0.114580885 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:11:12 np0005466030 ovn_controller[129257]: 2025-10-02T12:11:12Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:c6:7c 10.100.0.4
Oct  2 08:11:12 np0005466030 ovn_controller[129257]: 2025-10-02T12:11:12Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:c6:7c 10.100.0.4
Oct  2 08:11:12 np0005466030 nova_compute[230518]: 2025-10-02 12:11:12.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Oct  2 08:11:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:12.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:11:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:12.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:11:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:14.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:14.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:16.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:16 np0005466030 nova_compute[230518]: 2025-10-02 12:11:16.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:16.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.239 2 DEBUG oslo_concurrency.lockutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "e582fd0b-cd3a-4903-9ed3-024359954c81" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.240 2 DEBUG oslo_concurrency.lockutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.240 2 DEBUG oslo_concurrency.lockutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.240 2 DEBUG oslo_concurrency.lockutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.240 2 DEBUG oslo_concurrency.lockutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.241 2 INFO nova.compute.manager [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Terminating instance#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.242 2 DEBUG nova.compute.manager [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:11:17 np0005466030 kernel: tap2887bff5-92 (unregistering): left promiscuous mode
Oct  2 08:11:17 np0005466030 NetworkManager[44960]: <info>  [1759407077.3379] device (tap2887bff5-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:11:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:11:17Z|00052|binding|INFO|Releasing lport 2887bff5-92aa-4c83-9902-292a59e0add5 from this chassis (sb_readonly=0)
Oct  2 08:11:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:11:17Z|00053|binding|INFO|Setting lport 2887bff5-92aa-4c83-9902-292a59e0add5 down in Southbound
Oct  2 08:11:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:11:17Z|00054|binding|INFO|Removing iface tap2887bff5-92 ovn-installed in OVS
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.356 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:c6:7c 10.100.0.4'], port_security=['fa:16:3e:53:c6:7c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e582fd0b-cd3a-4903-9ed3-024359954c81', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38c94475-c52a-421c-9bc8-95fdc649b043', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2c66662015f74444b15ea4b3d8644714', 'neutron:revision_number': '4', 'neutron:security_group_ids': '92a0a32f-072c-4975-aa02-ea951ff6e560', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ca6fa6e-a437-4756-b504-a14a9bfa02d8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=2887bff5-92aa-4c83-9902-292a59e0add5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:11:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.358 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 2887bff5-92aa-4c83-9902-292a59e0add5 in datapath 38c94475-c52a-421c-9bc8-95fdc649b043 unbound from our chassis#033[00m
Oct  2 08:11:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.360 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 38c94475-c52a-421c-9bc8-95fdc649b043, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:11:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.361 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6c204c5e-af3e-41ec-b44b-16c38b8bd9db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.361 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 namespace which is not needed anymore#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:17 np0005466030 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct  2 08:11:17 np0005466030 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Consumed 14.899s CPU time.
Oct  2 08:11:17 np0005466030 systemd-machined[188247]: Machine qemu-3-instance-00000007 terminated.
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.476 2 INFO nova.virt.libvirt.driver [-] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Instance destroyed successfully.#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.477 2 DEBUG nova.objects.instance [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lazy-loading 'resources' on Instance uuid e582fd0b-cd3a-4903-9ed3-024359954c81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:11:17 np0005466030 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[235152]: [NOTICE]   (235173) : haproxy version is 2.8.14-c23fe91
Oct  2 08:11:17 np0005466030 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[235152]: [NOTICE]   (235173) : path to executable is /usr/sbin/haproxy
Oct  2 08:11:17 np0005466030 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[235152]: [WARNING]  (235173) : Exiting Master process...
Oct  2 08:11:17 np0005466030 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[235152]: [ALERT]    (235173) : Current worker (235175) exited with code 143 (Terminated)
Oct  2 08:11:17 np0005466030 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[235152]: [WARNING]  (235173) : All workers exited. Exiting... (0)
Oct  2 08:11:17 np0005466030 systemd[1]: libpod-5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c.scope: Deactivated successfully.
Oct  2 08:11:17 np0005466030 podman[235272]: 2025-10-02 12:11:17.508596153 +0000 UTC m=+0.065187011 container died 5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.537 2 DEBUG nova.virt.libvirt.vif [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:10:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-425689073',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-425689073',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(14),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-425689073',id=7,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=14,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZIzbMfwVBTTToNPNrnTuckoO8kg28OkEFKvLyHQzGuKrzHQ5Xu2/PJVR0z9htMcy/llPoN2mM4eTO6OIHrSZOwjPe/taZdTaEhmzjh34Ak2Vyd+nZrFG8VSiYQyffl8g==',key_name='tempest-keypair-1186587753',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:10:56Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2c66662015f74444b15ea4b3d8644714',ramdisk_id='',reservation_id='r-nm6ufj5e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-957372394',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-957372394-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:10:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='531ddb9812364f7b9743bd02a8ed797f',uuid=e582fd0b-cd3a-4903-9ed3-024359954c81,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.539 2 DEBUG nova.network.os_vif_util [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converting VIF {"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.539 2 DEBUG nova.network.os_vif_util [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:c6:7c,bridge_name='br-int',has_traffic_filtering=True,id=2887bff5-92aa-4c83-9902-292a59e0add5,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2887bff5-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.540 2 DEBUG os_vif [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:c6:7c,bridge_name='br-int',has_traffic_filtering=True,id=2887bff5-92aa-4c83-9902-292a59e0add5,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2887bff5-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:11:17 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c-userdata-shm.mount: Deactivated successfully.
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.542 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2887bff5-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:11:17 np0005466030 systemd[1]: var-lib-containers-storage-overlay-f7a0e6f1bf3ec86e5bc2215d1c862c93a70af47fd512dbd4ba72e6130027c198-merged.mount: Deactivated successfully.
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.548 2 INFO os_vif [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:c6:7c,bridge_name='br-int',has_traffic_filtering=True,id=2887bff5-92aa-4c83-9902-292a59e0add5,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2887bff5-92')#033[00m
Oct  2 08:11:17 np0005466030 podman[235272]: 2025-10-02 12:11:17.569101527 +0000 UTC m=+0.125692385 container cleanup 5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.575 2 DEBUG nova.compute.manager [req-3bae73e1-8c83-4d61-bb2e-aa0ea559795f req-7fe8f2c4-00bb-449f-9314-dcef504f97dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received event network-vif-unplugged-2887bff5-92aa-4c83-9902-292a59e0add5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.575 2 DEBUG oslo_concurrency.lockutils [req-3bae73e1-8c83-4d61-bb2e-aa0ea559795f req-7fe8f2c4-00bb-449f-9314-dcef504f97dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.576 2 DEBUG oslo_concurrency.lockutils [req-3bae73e1-8c83-4d61-bb2e-aa0ea559795f req-7fe8f2c4-00bb-449f-9314-dcef504f97dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.576 2 DEBUG oslo_concurrency.lockutils [req-3bae73e1-8c83-4d61-bb2e-aa0ea559795f req-7fe8f2c4-00bb-449f-9314-dcef504f97dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.576 2 DEBUG nova.compute.manager [req-3bae73e1-8c83-4d61-bb2e-aa0ea559795f req-7fe8f2c4-00bb-449f-9314-dcef504f97dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] No waiting events found dispatching network-vif-unplugged-2887bff5-92aa-4c83-9902-292a59e0add5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.577 2 DEBUG nova.compute.manager [req-3bae73e1-8c83-4d61-bb2e-aa0ea559795f req-7fe8f2c4-00bb-449f-9314-dcef504f97dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received event network-vif-unplugged-2887bff5-92aa-4c83-9902-292a59e0add5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:11:17 np0005466030 systemd[1]: libpod-conmon-5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c.scope: Deactivated successfully.
Oct  2 08:11:17 np0005466030 podman[235326]: 2025-10-02 12:11:17.634227336 +0000 UTC m=+0.042974364 container remove 5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:11:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.641 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1fce8f11-e16b-40c3-a552-4eb770f3d9f2]: (4, ('Thu Oct  2 12:11:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 (5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c)\n5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c\nThu Oct  2 12:11:17 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 (5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c)\n5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.642 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f98c25fc-6bd1-4c7c-afa0-2ccb663e44d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.643 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38c94475-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:17 np0005466030 kernel: tap38c94475-c0: left promiscuous mode
Oct  2 08:11:17 np0005466030 nova_compute[230518]: 2025-10-02 12:11:17.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.661 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3b071796-6b47-4ef0-89cf-f8c38245ba7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.693 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[44bb9dd9-22a4-4959-b672-ce36eb7611ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.694 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a8275bbf-05de-47b1-b2cb-4a588dc3baaf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.709 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[706a8fce-7cdd-4fb2-9f9a-f79a7319577d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491396, 'reachable_time': 21904, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235343, 'error': None, 'target': 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.713 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:11:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.713 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[55c8badf-439b-4dff-a684-c517a1dbb689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:11:17 np0005466030 systemd[1]: run-netns-ovnmeta\x2d38c94475\x2dc52a\x2d421c\x2d9bc8\x2d95fdc649b043.mount: Deactivated successfully.
Oct  2 08:11:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:18.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:18.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:18 np0005466030 nova_compute[230518]: 2025-10-02 12:11:18.883 2 INFO nova.virt.libvirt.driver [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Deleting instance files /var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81_del#033[00m
Oct  2 08:11:18 np0005466030 nova_compute[230518]: 2025-10-02 12:11:18.884 2 INFO nova.virt.libvirt.driver [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Deletion of /var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81_del complete#033[00m
Oct  2 08:11:18 np0005466030 nova_compute[230518]: 2025-10-02 12:11:18.963 2 INFO nova.compute.manager [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Took 1.72 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:11:18 np0005466030 nova_compute[230518]: 2025-10-02 12:11:18.964 2 DEBUG oslo.service.loopingcall [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:11:18 np0005466030 nova_compute[230518]: 2025-10-02 12:11:18.964 2 DEBUG nova.compute.manager [-] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:11:18 np0005466030 nova_compute[230518]: 2025-10-02 12:11:18.964 2 DEBUG nova.network.neutron [-] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:11:19 np0005466030 nova_compute[230518]: 2025-10-02 12:11:19.707 2 DEBUG nova.compute.manager [req-e65dd699-12f7-41b5-94d3-6c32624c373e req-fa4f49b4-f871-4e81-9d5b-1ef250bde980 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received event network-vif-plugged-2887bff5-92aa-4c83-9902-292a59e0add5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:11:19 np0005466030 nova_compute[230518]: 2025-10-02 12:11:19.707 2 DEBUG oslo_concurrency.lockutils [req-e65dd699-12f7-41b5-94d3-6c32624c373e req-fa4f49b4-f871-4e81-9d5b-1ef250bde980 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:19 np0005466030 nova_compute[230518]: 2025-10-02 12:11:19.707 2 DEBUG oslo_concurrency.lockutils [req-e65dd699-12f7-41b5-94d3-6c32624c373e req-fa4f49b4-f871-4e81-9d5b-1ef250bde980 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:19 np0005466030 nova_compute[230518]: 2025-10-02 12:11:19.708 2 DEBUG oslo_concurrency.lockutils [req-e65dd699-12f7-41b5-94d3-6c32624c373e req-fa4f49b4-f871-4e81-9d5b-1ef250bde980 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:19 np0005466030 nova_compute[230518]: 2025-10-02 12:11:19.708 2 DEBUG nova.compute.manager [req-e65dd699-12f7-41b5-94d3-6c32624c373e req-fa4f49b4-f871-4e81-9d5b-1ef250bde980 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] No waiting events found dispatching network-vif-plugged-2887bff5-92aa-4c83-9902-292a59e0add5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:11:19 np0005466030 nova_compute[230518]: 2025-10-02 12:11:19.708 2 WARNING nova.compute.manager [req-e65dd699-12f7-41b5-94d3-6c32624c373e req-fa4f49b4-f871-4e81-9d5b-1ef250bde980 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received unexpected event network-vif-plugged-2887bff5-92aa-4c83-9902-292a59e0add5 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:11:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:19.711 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:11:19 np0005466030 nova_compute[230518]: 2025-10-02 12:11:19.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:19.712 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:11:20 np0005466030 nova_compute[230518]: 2025-10-02 12:11:20.357 2 DEBUG nova.network.neutron [-] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:11:20 np0005466030 nova_compute[230518]: 2025-10-02 12:11:20.376 2 INFO nova.compute.manager [-] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Took 1.41 seconds to deallocate network for instance.#033[00m
Oct  2 08:11:20 np0005466030 nova_compute[230518]: 2025-10-02 12:11:20.422 2 DEBUG oslo_concurrency.lockutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:20 np0005466030 nova_compute[230518]: 2025-10-02 12:11:20.423 2 DEBUG oslo_concurrency.lockutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:20 np0005466030 nova_compute[230518]: 2025-10-02 12:11:20.470 2 DEBUG nova.compute.manager [req-dd9c7a07-8051-4ab7-ad8d-75e75569803a req-064c0054-b718-47ad-bab5-34db0ee8242a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received event network-vif-deleted-2887bff5-92aa-4c83-9902-292a59e0add5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:11:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:20.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:20 np0005466030 nova_compute[230518]: 2025-10-02 12:11:20.485 2 DEBUG oslo_concurrency.processutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:11:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:20.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:11:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:11:20 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1749779519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:11:20 np0005466030 nova_compute[230518]: 2025-10-02 12:11:20.971 2 DEBUG oslo_concurrency.processutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:20 np0005466030 nova_compute[230518]: 2025-10-02 12:11:20.976 2 DEBUG nova.compute.provider_tree [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:11:21 np0005466030 nova_compute[230518]: 2025-10-02 12:11:21.009 2 DEBUG nova.scheduler.client.report [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:11:21 np0005466030 nova_compute[230518]: 2025-10-02 12:11:21.033 2 DEBUG oslo_concurrency.lockutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:21 np0005466030 nova_compute[230518]: 2025-10-02 12:11:21.057 2 INFO nova.scheduler.client.report [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Deleted allocations for instance e582fd0b-cd3a-4903-9ed3-024359954c81#033[00m
Oct  2 08:11:21 np0005466030 nova_compute[230518]: 2025-10-02 12:11:21.117 2 DEBUG oslo_concurrency.lockutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:21 np0005466030 nova_compute[230518]: 2025-10-02 12:11:21.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:21 np0005466030 nova_compute[230518]: 2025-10-02 12:11:21.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 e142: 3 total, 3 up, 3 in
Oct  2 08:11:21 np0005466030 podman[235367]: 2025-10-02 12:11:21.819155572 +0000 UTC m=+0.062722895 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 08:11:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:22.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:22 np0005466030 nova_compute[230518]: 2025-10-02 12:11:22.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:11:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:22.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:11:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:23 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:11:23 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:11:23 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:11:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:24.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:11:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:24.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:11:25 np0005466030 nova_compute[230518]: 2025-10-02 12:11:25.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:25 np0005466030 podman[235519]: 2025-10-02 12:11:25.80608725 +0000 UTC m=+0.059033667 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible)
Oct  2 08:11:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:25.910 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:25.910 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:25.911 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:26.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:26 np0005466030 nova_compute[230518]: 2025-10-02 12:11:26.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:26.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:11:26.714 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:11:27 np0005466030 nova_compute[230518]: 2025-10-02 12:11:27.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:28 np0005466030 nova_compute[230518]: 2025-10-02 12:11:28.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:28 np0005466030 nova_compute[230518]: 2025-10-02 12:11:28.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:11:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:28.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:28.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:28 np0005466030 nova_compute[230518]: 2025-10-02 12:11:28.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:29 np0005466030 nova_compute[230518]: 2025-10-02 12:11:29.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:29 np0005466030 nova_compute[230518]: 2025-10-02 12:11:29.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:30 np0005466030 nova_compute[230518]: 2025-10-02 12:11:30.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:30.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:30.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:31 np0005466030 nova_compute[230518]: 2025-10-02 12:11:31.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:31 np0005466030 nova_compute[230518]: 2025-10-02 12:11:31.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:11:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.082 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.083 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.083 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.111 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.112 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.112 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.112 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.112 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.473 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407077.47298, e582fd0b-cd3a-4903-9ed3-024359954c81 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.474 2 INFO nova.compute.manager [-] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:11:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:32.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:11:32 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2323166239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.532 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:32.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.701 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.702 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5021MB free_disk=20.964763641357422GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.702 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.702 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:32 np0005466030 nova_compute[230518]: 2025-10-02 12:11:32.763 2 DEBUG nova.compute.manager [None req-a7c63f2d-ea9b-4bd5-93d1-5d640b3ce89b - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:11:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:33 np0005466030 nova_compute[230518]: 2025-10-02 12:11:33.112 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:11:33 np0005466030 nova_compute[230518]: 2025-10-02 12:11:33.112 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:11:33 np0005466030 nova_compute[230518]: 2025-10-02 12:11:33.148 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:11:33 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/51653874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:11:33 np0005466030 nova_compute[230518]: 2025-10-02 12:11:33.706 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:33 np0005466030 nova_compute[230518]: 2025-10-02 12:11:33.711 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:11:33 np0005466030 nova_compute[230518]: 2025-10-02 12:11:33.765 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:11:33 np0005466030 nova_compute[230518]: 2025-10-02 12:11:33.910 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:11:33 np0005466030 nova_compute[230518]: 2025-10-02 12:11:33.911 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:34.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:34.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:34 np0005466030 nova_compute[230518]: 2025-10-02 12:11:34.880 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:34 np0005466030 nova_compute[230518]: 2025-10-02 12:11:34.881 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:34 np0005466030 nova_compute[230518]: 2025-10-02 12:11:34.881 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:11:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:36.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:36 np0005466030 nova_compute[230518]: 2025-10-02 12:11:36.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:36.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:37 np0005466030 nova_compute[230518]: 2025-10-02 12:11:37.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:38.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:38.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:40.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:40.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:41 np0005466030 nova_compute[230518]: 2025-10-02 12:11:41.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:42 np0005466030 nova_compute[230518]: 2025-10-02 12:11:42.410 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "cdf18af7-3741-4849-8c50-edeb7cc6b5b7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:42 np0005466030 nova_compute[230518]: 2025-10-02 12:11:42.411 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "cdf18af7-3741-4849-8c50-edeb7cc6b5b7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:42 np0005466030 nova_compute[230518]: 2025-10-02 12:11:42.484 2 DEBUG nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:11:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:42.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:42 np0005466030 nova_compute[230518]: 2025-10-02 12:11:42.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:42 np0005466030 nova_compute[230518]: 2025-10-02 12:11:42.697 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:42 np0005466030 nova_compute[230518]: 2025-10-02 12:11:42.698 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:42 np0005466030 nova_compute[230518]: 2025-10-02 12:11:42.707 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:11:42 np0005466030 nova_compute[230518]: 2025-10-02 12:11:42.707 2 INFO nova.compute.claims [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:11:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:11:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:42.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:11:42 np0005466030 podman[235636]: 2025-10-02 12:11:42.792078134 +0000 UTC m=+0.048460166 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 08:11:42 np0005466030 podman[235635]: 2025-10-02 12:11:42.816791821 +0000 UTC m=+0.074961119 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:11:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:42 np0005466030 nova_compute[230518]: 2025-10-02 12:11:42.997 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:11:43 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3263702750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:11:43 np0005466030 nova_compute[230518]: 2025-10-02 12:11:43.403 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:43 np0005466030 nova_compute[230518]: 2025-10-02 12:11:43.413 2 DEBUG nova.compute.provider_tree [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:11:43 np0005466030 nova_compute[230518]: 2025-10-02 12:11:43.445 2 DEBUG nova.scheduler.client.report [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:11:43 np0005466030 nova_compute[230518]: 2025-10-02 12:11:43.521 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:43 np0005466030 nova_compute[230518]: 2025-10-02 12:11:43.522 2 DEBUG nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:11:43 np0005466030 nova_compute[230518]: 2025-10-02 12:11:43.713 2 DEBUG nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:11:43 np0005466030 nova_compute[230518]: 2025-10-02 12:11:43.714 2 DEBUG nova.network.neutron [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:11:43 np0005466030 nova_compute[230518]: 2025-10-02 12:11:43.771 2 INFO nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:11:43 np0005466030 nova_compute[230518]: 2025-10-02 12:11:43.857 2 DEBUG nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:11:44 np0005466030 nova_compute[230518]: 2025-10-02 12:11:44.290 2 DEBUG nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:11:44 np0005466030 nova_compute[230518]: 2025-10-02 12:11:44.293 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:11:44 np0005466030 nova_compute[230518]: 2025-10-02 12:11:44.294 2 INFO nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Creating image(s)#033[00m
Oct  2 08:11:44 np0005466030 nova_compute[230518]: 2025-10-02 12:11:44.331 2 DEBUG nova.storage.rbd_utils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:11:44 np0005466030 nova_compute[230518]: 2025-10-02 12:11:44.358 2 DEBUG nova.storage.rbd_utils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:11:44 np0005466030 nova_compute[230518]: 2025-10-02 12:11:44.384 2 DEBUG nova.storage.rbd_utils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:11:44 np0005466030 nova_compute[230518]: 2025-10-02 12:11:44.388 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:44 np0005466030 nova_compute[230518]: 2025-10-02 12:11:44.442 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:44 np0005466030 nova_compute[230518]: 2025-10-02 12:11:44.443 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:44 np0005466030 nova_compute[230518]: 2025-10-02 12:11:44.443 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:44 np0005466030 nova_compute[230518]: 2025-10-02 12:11:44.443 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:44 np0005466030 nova_compute[230518]: 2025-10-02 12:11:44.466 2 DEBUG nova.storage.rbd_utils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:11:44 np0005466030 nova_compute[230518]: 2025-10-02 12:11:44.470 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:44.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:44 np0005466030 nova_compute[230518]: 2025-10-02 12:11:44.564 2 DEBUG nova.network.neutron [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 08:11:44 np0005466030 nova_compute[230518]: 2025-10-02 12:11:44.565 2 DEBUG nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:11:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:44.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:44 np0005466030 nova_compute[230518]: 2025-10-02 12:11:44.739 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:44 np0005466030 nova_compute[230518]: 2025-10-02 12:11:44.819 2 DEBUG nova.storage.rbd_utils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] resizing rbd image cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:11:44 np0005466030 nova_compute[230518]: 2025-10-02 12:11:44.945 2 DEBUG nova.objects.instance [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lazy-loading 'migration_context' on Instance uuid cdf18af7-3741-4849-8c50-edeb7cc6b5b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.022 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.023 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Ensure instance console log exists: /var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.023 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.024 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.024 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.025 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.029 2 WARNING nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.040 2 DEBUG nova.virt.libvirt.host [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.041 2 DEBUG nova.virt.libvirt.host [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.046 2 DEBUG nova.virt.libvirt.host [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.046 2 DEBUG nova.virt.libvirt.host [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.047 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.047 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.048 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.048 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.049 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.049 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.049 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.049 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.049 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.050 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.050 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.050 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.052 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:11:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:11:45 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4248775936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.477 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.505 2 DEBUG nova.storage.rbd_utils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.510 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:11:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:11:45 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2283155084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.966 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:11:45 np0005466030 nova_compute[230518]: 2025-10-02 12:11:45.968 2 DEBUG nova.objects.instance [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid cdf18af7-3741-4849-8c50-edeb7cc6b5b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:11:46 np0005466030 nova_compute[230518]: 2025-10-02 12:11:46.211 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:11:46 np0005466030 nova_compute[230518]:  <uuid>cdf18af7-3741-4849-8c50-edeb7cc6b5b7</uuid>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:  <name>instance-0000000a</name>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-609215018</nova:name>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:11:45</nova:creationTime>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:11:46 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:        <nova:user uuid="2cdfea5c8e074c59b963b1fba6b35e1f">tempest-DeleteServersAdminTestJSON-98667439-project-member</nova:user>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:        <nova:project uuid="14444ba992464a08be0b7dc7a5dd00c2">tempest-DeleteServersAdminTestJSON-98667439</nova:project>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <nova:ports/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <entry name="serial">cdf18af7-3741-4849-8c50-edeb7cc6b5b7</entry>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <entry name="uuid">cdf18af7-3741-4849-8c50-edeb7cc6b5b7</entry>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk">
Oct  2 08:11:46 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:11:46 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk.config">
Oct  2 08:11:46 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:11:46 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7/console.log" append="off"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:11:46 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:11:46 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:11:46 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:11:46 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 08:11:46 np0005466030 nova_compute[230518]: 2025-10-02 12:11:46.263 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:11:46 np0005466030 nova_compute[230518]: 2025-10-02 12:11:46.264 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:11:46 np0005466030 nova_compute[230518]: 2025-10-02 12:11:46.264 2 INFO nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Using config drive
Oct  2 08:11:46 np0005466030 nova_compute[230518]: 2025-10-02 12:11:46.312 2 DEBUG nova.storage.rbd_utils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:11:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:46.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:46 np0005466030 nova_compute[230518]: 2025-10-02 12:11:46.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:11:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:46.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:11:47 np0005466030 nova_compute[230518]: 2025-10-02 12:11:47.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:11:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:48 np0005466030 nova_compute[230518]: 2025-10-02 12:11:48.269 2 INFO nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Creating config drive at /var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7/disk.config
Oct  2 08:11:48 np0005466030 nova_compute[230518]: 2025-10-02 12:11:48.273 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4_1orimb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:11:48 np0005466030 nova_compute[230518]: 2025-10-02 12:11:48.403 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4_1orimb" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:11:48 np0005466030 nova_compute[230518]: 2025-10-02 12:11:48.434 2 DEBUG nova.storage.rbd_utils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:11:48 np0005466030 nova_compute[230518]: 2025-10-02 12:11:48.438 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7/disk.config cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:11:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:48.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:48 np0005466030 nova_compute[230518]: 2025-10-02 12:11:48.587 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7/disk.config cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:11:48 np0005466030 nova_compute[230518]: 2025-10-02 12:11:48.588 2 INFO nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Deleting local config drive /var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7/disk.config because it was imported into RBD.
Oct  2 08:11:48 np0005466030 systemd-machined[188247]: New machine qemu-4-instance-0000000a.
Oct  2 08:11:48 np0005466030 systemd[1]: Started Virtual Machine qemu-4-instance-0000000a.
Oct  2 08:11:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:48.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.767 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407109.7670376, cdf18af7-3741-4849-8c50-edeb7cc6b5b7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.769 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] VM Resumed (Lifecycle Event)
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.772 2 DEBUG nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.772 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.775 2 INFO nova.virt.libvirt.driver [-] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Instance spawned successfully.
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.776 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.803 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.810 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.814 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.814 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.815 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.815 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.816 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.816 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.857 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.858 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407109.7685792, cdf18af7-3741-4849-8c50-edeb7cc6b5b7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.858 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] VM Started (Lifecycle Event)#033[00m
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.891 2 INFO nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Took 5.60 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.892 2 DEBUG nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.898 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.901 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.933 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.951 2 INFO nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Took 7.32 seconds to build instance.#033[00m
Oct  2 08:11:49 np0005466030 nova_compute[230518]: 2025-10-02 12:11:49.967 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "cdf18af7-3741-4849-8c50-edeb7cc6b5b7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:50.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:11:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:50.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:11:51 np0005466030 nova_compute[230518]: 2025-10-02 12:11:51.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:52.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:52 np0005466030 nova_compute[230518]: 2025-10-02 12:11:52.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:52.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:52 np0005466030 podman[236045]: 2025-10-02 12:11:52.841078376 +0000 UTC m=+0.091178799 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:11:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:53 np0005466030 nova_compute[230518]: 2025-10-02 12:11:53.767 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "cdf18af7-3741-4849-8c50-edeb7cc6b5b7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:53 np0005466030 nova_compute[230518]: 2025-10-02 12:11:53.767 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "cdf18af7-3741-4849-8c50-edeb7cc6b5b7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:53 np0005466030 nova_compute[230518]: 2025-10-02 12:11:53.768 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "cdf18af7-3741-4849-8c50-edeb7cc6b5b7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:53 np0005466030 nova_compute[230518]: 2025-10-02 12:11:53.768 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "cdf18af7-3741-4849-8c50-edeb7cc6b5b7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:53 np0005466030 nova_compute[230518]: 2025-10-02 12:11:53.768 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "cdf18af7-3741-4849-8c50-edeb7cc6b5b7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:53 np0005466030 nova_compute[230518]: 2025-10-02 12:11:53.769 2 INFO nova.compute.manager [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Terminating instance#033[00m
Oct  2 08:11:53 np0005466030 nova_compute[230518]: 2025-10-02 12:11:53.770 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "refresh_cache-cdf18af7-3741-4849-8c50-edeb7cc6b5b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:11:53 np0005466030 nova_compute[230518]: 2025-10-02 12:11:53.770 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquired lock "refresh_cache-cdf18af7-3741-4849-8c50-edeb7cc6b5b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:11:53 np0005466030 nova_compute[230518]: 2025-10-02 12:11:53.770 2 DEBUG nova.network.neutron [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:11:54 np0005466030 nova_compute[230518]: 2025-10-02 12:11:54.347 2 DEBUG nova.network.neutron [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:11:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:54.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:54 np0005466030 nova_compute[230518]: 2025-10-02 12:11:54.697 2 DEBUG nova.network.neutron [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:11:54 np0005466030 nova_compute[230518]: 2025-10-02 12:11:54.722 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Releasing lock "refresh_cache-cdf18af7-3741-4849-8c50-edeb7cc6b5b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:11:54 np0005466030 nova_compute[230518]: 2025-10-02 12:11:54.723 2 DEBUG nova.compute.manager [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:11:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:54.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:54 np0005466030 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct  2 08:11:54 np0005466030 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Consumed 6.147s CPU time.
Oct  2 08:11:54 np0005466030 systemd-machined[188247]: Machine qemu-4-instance-0000000a terminated.
Oct  2 08:11:54 np0005466030 nova_compute[230518]: 2025-10-02 12:11:54.947 2 INFO nova.virt.libvirt.driver [-] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Instance destroyed successfully.#033[00m
Oct  2 08:11:54 np0005466030 nova_compute[230518]: 2025-10-02 12:11:54.948 2 DEBUG nova.objects.instance [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lazy-loading 'resources' on Instance uuid cdf18af7-3741-4849-8c50-edeb7cc6b5b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:11:55 np0005466030 nova_compute[230518]: 2025-10-02 12:11:55.553 2 INFO nova.virt.libvirt.driver [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Deleting instance files /var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7_del#033[00m
Oct  2 08:11:55 np0005466030 nova_compute[230518]: 2025-10-02 12:11:55.553 2 INFO nova.virt.libvirt.driver [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Deletion of /var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7_del complete#033[00m
Oct  2 08:11:55 np0005466030 nova_compute[230518]: 2025-10-02 12:11:55.683 2 INFO nova.compute.manager [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Took 0.96 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:11:55 np0005466030 nova_compute[230518]: 2025-10-02 12:11:55.684 2 DEBUG oslo.service.loopingcall [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:11:55 np0005466030 nova_compute[230518]: 2025-10-02 12:11:55.684 2 DEBUG nova.compute.manager [-] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:11:55 np0005466030 nova_compute[230518]: 2025-10-02 12:11:55.684 2 DEBUG nova.network.neutron [-] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:11:55 np0005466030 nova_compute[230518]: 2025-10-02 12:11:55.929 2 DEBUG nova.network.neutron [-] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:11:55 np0005466030 nova_compute[230518]: 2025-10-02 12:11:55.957 2 DEBUG nova.network.neutron [-] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:11:56 np0005466030 nova_compute[230518]: 2025-10-02 12:11:56.008 2 INFO nova.compute.manager [-] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Took 0.32 seconds to deallocate network for instance.#033[00m
Oct  2 08:11:56 np0005466030 nova_compute[230518]: 2025-10-02 12:11:56.085 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:11:56 np0005466030 nova_compute[230518]: 2025-10-02 12:11:56.086 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:11:56 np0005466030 nova_compute[230518]: 2025-10-02 12:11:56.178 2 DEBUG oslo_concurrency.processutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:11:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:56.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:56 np0005466030 nova_compute[230518]: 2025-10-02 12:11:56.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:11:56 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2787303180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:11:56 np0005466030 nova_compute[230518]: 2025-10-02 12:11:56.626 2 DEBUG oslo_concurrency.processutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:11:56 np0005466030 nova_compute[230518]: 2025-10-02 12:11:56.631 2 DEBUG nova.compute.provider_tree [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:11:56 np0005466030 nova_compute[230518]: 2025-10-02 12:11:56.651 2 DEBUG nova.scheduler.client.report [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:11:56 np0005466030 nova_compute[230518]: 2025-10-02 12:11:56.681 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:11:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:56.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:11:56 np0005466030 nova_compute[230518]: 2025-10-02 12:11:56.743 2 INFO nova.scheduler.client.report [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Deleted allocations for instance cdf18af7-3741-4849-8c50-edeb7cc6b5b7#033[00m
Oct  2 08:11:56 np0005466030 podman[236109]: 2025-10-02 12:11:56.78933762 +0000 UTC m=+0.047097374 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 08:11:56 np0005466030 nova_compute[230518]: 2025-10-02 12:11:56.861 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "cdf18af7-3741-4849-8c50-edeb7cc6b5b7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:11:57 np0005466030 nova_compute[230518]: 2025-10-02 12:11:57.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:11:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:11:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:11:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:58.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:11:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:11:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:11:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:58.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:12:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:00.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:12:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:00.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:12:01 np0005466030 nova_compute[230518]: 2025-10-02 12:12:01.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:02.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:02 np0005466030 nova_compute[230518]: 2025-10-02 12:12:02.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:12:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:02.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:12:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:12:02.911 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:12:02 np0005466030 nova_compute[230518]: 2025-10-02 12:12:02.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:12:02.912 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:12:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:12:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:04.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:12:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:12:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:04.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:12:06 np0005466030 nova_compute[230518]: 2025-10-02 12:12:06.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:06.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:06.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:07 np0005466030 nova_compute[230518]: 2025-10-02 12:12:07.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:08.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:12:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:08.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:12:09 np0005466030 nova_compute[230518]: 2025-10-02 12:12:09.945 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407114.9434974, cdf18af7-3741-4849-8c50-edeb7cc6b5b7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:12:09 np0005466030 nova_compute[230518]: 2025-10-02 12:12:09.945 2 INFO nova.compute.manager [-] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:12:09 np0005466030 nova_compute[230518]: 2025-10-02 12:12:09.978 2 DEBUG nova.compute.manager [None req-e948ab94-18b4-45cc-9ee8-ea4f4050fd92 - - - - - -] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:12:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:10.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:10.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:11 np0005466030 nova_compute[230518]: 2025-10-02 12:12:11.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:12:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:12.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:12:12 np0005466030 nova_compute[230518]: 2025-10-02 12:12:12.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:12:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:12.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:12:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:12:12.913 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:12:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:13 np0005466030 podman[236130]: 2025-10-02 12:12:13.809107501 +0000 UTC m=+0.053680100 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:12:13 np0005466030 podman[236129]: 2025-10-02 12:12:13.829039478 +0000 UTC m=+0.083076775 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:12:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:14.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:14.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:16 np0005466030 nova_compute[230518]: 2025-10-02 12:12:16.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:12:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:16.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:12:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:16.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:17 np0005466030 nova_compute[230518]: 2025-10-02 12:12:17.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:12:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:18.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:12:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:12:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:18.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:12:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:20.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:20.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:21 np0005466030 nova_compute[230518]: 2025-10-02 12:12:21.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:22.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:22 np0005466030 nova_compute[230518]: 2025-10-02 12:12:22.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:12:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:22.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:12:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:23 np0005466030 podman[236176]: 2025-10-02 12:12:23.805967493 +0000 UTC m=+0.059930846 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2)
Oct  2 08:12:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:12:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:24.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:12:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:24.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:12:25.911 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:12:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:12:25.912 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:12:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:12:25.912 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:12:26 np0005466030 nova_compute[230518]: 2025-10-02 12:12:26.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:26.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:12:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:26.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:12:27 np0005466030 nova_compute[230518]: 2025-10-02 12:12:27.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:27 np0005466030 podman[236198]: 2025-10-02 12:12:27.805228549 +0000 UTC m=+0.059863485 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid)
Oct  2 08:12:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:28.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:28.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:29 np0005466030 nova_compute[230518]: 2025-10-02 12:12:29.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:29 np0005466030 nova_compute[230518]: 2025-10-02 12:12:29.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:12:30 np0005466030 nova_compute[230518]: 2025-10-02 12:12:30.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:30.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:30.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:31 np0005466030 nova_compute[230518]: 2025-10-02 12:12:31.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:31 np0005466030 nova_compute[230518]: 2025-10-02 12:12:31.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:32 np0005466030 nova_compute[230518]: 2025-10-02 12:12:32.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:32 np0005466030 nova_compute[230518]: 2025-10-02 12:12:32.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:12:32 np0005466030 nova_compute[230518]: 2025-10-02 12:12:32.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:12:32 np0005466030 nova_compute[230518]: 2025-10-02 12:12:32.068 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:12:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:32.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:32 np0005466030 nova_compute[230518]: 2025-10-02 12:12:32.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:12:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:32.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:12:32 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:12:32 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:12:32 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:12:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:33 np0005466030 nova_compute[230518]: 2025-10-02 12:12:33.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:34 np0005466030 nova_compute[230518]: 2025-10-02 12:12:34.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:34 np0005466030 nova_compute[230518]: 2025-10-02 12:12:34.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:34 np0005466030 nova_compute[230518]: 2025-10-02 12:12:34.119 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:12:34 np0005466030 nova_compute[230518]: 2025-10-02 12:12:34.120 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:12:34 np0005466030 nova_compute[230518]: 2025-10-02 12:12:34.120 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:12:34 np0005466030 nova_compute[230518]: 2025-10-02 12:12:34.120 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:12:34 np0005466030 nova_compute[230518]: 2025-10-02 12:12:34.121 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:12:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:12:34 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/398630855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:12:34 np0005466030 nova_compute[230518]: 2025-10-02 12:12:34.525 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:12:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:34.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:34 np0005466030 nova_compute[230518]: 2025-10-02 12:12:34.685 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:12:34 np0005466030 nova_compute[230518]: 2025-10-02 12:12:34.686 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5003MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:12:34 np0005466030 nova_compute[230518]: 2025-10-02 12:12:34.686 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:12:34 np0005466030 nova_compute[230518]: 2025-10-02 12:12:34.687 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:12:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:34.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:34 np0005466030 nova_compute[230518]: 2025-10-02 12:12:34.838 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:12:34 np0005466030 nova_compute[230518]: 2025-10-02 12:12:34.838 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:12:34 np0005466030 nova_compute[230518]: 2025-10-02 12:12:34.856 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:12:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:12:35 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1190854149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:12:35 np0005466030 nova_compute[230518]: 2025-10-02 12:12:35.289 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:12:35 np0005466030 nova_compute[230518]: 2025-10-02 12:12:35.293 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:12:35 np0005466030 nova_compute[230518]: 2025-10-02 12:12:35.343 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:12:35 np0005466030 nova_compute[230518]: 2025-10-02 12:12:35.378 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:12:35 np0005466030 nova_compute[230518]: 2025-10-02 12:12:35.378 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:12:36 np0005466030 nova_compute[230518]: 2025-10-02 12:12:36.373 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:36 np0005466030 nova_compute[230518]: 2025-10-02 12:12:36.374 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:12:36 np0005466030 nova_compute[230518]: 2025-10-02 12:12:36.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:12:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:36.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:12:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:36.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:37 np0005466030 nova_compute[230518]: 2025-10-02 12:12:37.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:38.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:38.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:40.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:40.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:41 np0005466030 nova_compute[230518]: 2025-10-02 12:12:41.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:41 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:12:41 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:12:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:12:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:12:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:42.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:42 np0005466030 nova_compute[230518]: 2025-10-02 12:12:42.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:42.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:12:43.365 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:12:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:12:43.366 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:12:43 np0005466030 nova_compute[230518]: 2025-10-02 12:12:43.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:12:44.369 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:12:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:12:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:44.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:12:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:44.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:44 np0005466030 podman[236445]: 2025-10-02 12:12:44.798050053 +0000 UTC m=+0.049349962 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:12:44 np0005466030 podman[236444]: 2025-10-02 12:12:44.849026698 +0000 UTC m=+0.100723330 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:12:46 np0005466030 nova_compute[230518]: 2025-10-02 12:12:46.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:46.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:46.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:47 np0005466030 nova_compute[230518]: 2025-10-02 12:12:47.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:48.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:48.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:12:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:50.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:12:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:50.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:51 np0005466030 nova_compute[230518]: 2025-10-02 12:12:51.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:12:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:52.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:12:52 np0005466030 nova_compute[230518]: 2025-10-02 12:12:52.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:52.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:54.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:12:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:54.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:12:54 np0005466030 podman[236486]: 2025-10-02 12:12:54.814777241 +0000 UTC m=+0.069019113 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 08:12:56 np0005466030 nova_compute[230518]: 2025-10-02 12:12:56.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:56.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:56.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:57 np0005466030 nova_compute[230518]: 2025-10-02 12:12:57.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:12:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:12:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:58.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:12:58 np0005466030 podman[236506]: 2025-10-02 12:12:58.789137112 +0000 UTC m=+0.044080268 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:12:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:12:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:12:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:58.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:13:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:00.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:13:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:13:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:00.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:13:01 np0005466030 nova_compute[230518]: 2025-10-02 12:13:01.150 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "f85aa55e-c534-4270-b8bb-d25f8026084c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:01 np0005466030 nova_compute[230518]: 2025-10-02 12:13:01.151 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:01 np0005466030 nova_compute[230518]: 2025-10-02 12:13:01.172 2 DEBUG nova.compute.manager [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:13:01 np0005466030 nova_compute[230518]: 2025-10-02 12:13:01.254 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:01 np0005466030 nova_compute[230518]: 2025-10-02 12:13:01.255 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:01 np0005466030 nova_compute[230518]: 2025-10-02 12:13:01.264 2 DEBUG nova.virt.hardware [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:13:01 np0005466030 nova_compute[230518]: 2025-10-02 12:13:01.265 2 INFO nova.compute.claims [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:13:01 np0005466030 nova_compute[230518]: 2025-10-02 12:13:01.405 2 DEBUG oslo_concurrency.processutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:01 np0005466030 ovn_controller[129257]: 2025-10-02T12:13:01Z|00055|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct  2 08:13:01 np0005466030 nova_compute[230518]: 2025-10-02 12:13:01.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:13:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:13:01 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3565044961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:13:01 np0005466030 nova_compute[230518]: 2025-10-02 12:13:01.841 2 DEBUG oslo_concurrency.processutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:01 np0005466030 nova_compute[230518]: 2025-10-02 12:13:01.846 2 DEBUG nova.compute.provider_tree [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:13:01 np0005466030 nova_compute[230518]: 2025-10-02 12:13:01.864 2 DEBUG nova.scheduler.client.report [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:13:01 np0005466030 nova_compute[230518]: 2025-10-02 12:13:01.891 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:01 np0005466030 nova_compute[230518]: 2025-10-02 12:13:01.892 2 DEBUG nova.compute.manager [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:13:01 np0005466030 nova_compute[230518]: 2025-10-02 12:13:01.944 2 DEBUG nova.compute.manager [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:13:01 np0005466030 nova_compute[230518]: 2025-10-02 12:13:01.945 2 DEBUG nova.network.neutron [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:13:01 np0005466030 nova_compute[230518]: 2025-10-02 12:13:01.964 2 INFO nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:13:01 np0005466030 nova_compute[230518]: 2025-10-02 12:13:01.983 2 DEBUG nova.compute.manager [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:13:02 np0005466030 nova_compute[230518]: 2025-10-02 12:13:02.027 2 DEBUG oslo_concurrency.processutils [None req-2402c7da-5751-417f-a867-6906b50f0e15 8153b3e00d0b451f90664278a2d2bf88 7aa3898a9d4b490d95d0c8bbd7e4127c - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:02 np0005466030 nova_compute[230518]: 2025-10-02 12:13:02.053 2 DEBUG oslo_concurrency.processutils [None req-2402c7da-5751-417f-a867-6906b50f0e15 8153b3e00d0b451f90664278a2d2bf88 7aa3898a9d4b490d95d0c8bbd7e4127c - - default default] CMD "env LANG=C uptime" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:02 np0005466030 nova_compute[230518]: 2025-10-02 12:13:02.089 2 DEBUG nova.compute.manager [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:13:02 np0005466030 nova_compute[230518]: 2025-10-02 12:13:02.090 2 DEBUG nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:13:02 np0005466030 nova_compute[230518]: 2025-10-02 12:13:02.091 2 INFO nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Creating image(s)#033[00m
Oct  2 08:13:02 np0005466030 nova_compute[230518]: 2025-10-02 12:13:02.124 2 DEBUG nova.storage.rbd_utils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image f85aa55e-c534-4270-b8bb-d25f8026084c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:02 np0005466030 nova_compute[230518]: 2025-10-02 12:13:02.151 2 DEBUG nova.storage.rbd_utils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image f85aa55e-c534-4270-b8bb-d25f8026084c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:02 np0005466030 nova_compute[230518]: 2025-10-02 12:13:02.174 2 DEBUG nova.storage.rbd_utils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image f85aa55e-c534-4270-b8bb-d25f8026084c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:02 np0005466030 nova_compute[230518]: 2025-10-02 12:13:02.177 2 DEBUG oslo_concurrency.processutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:02 np0005466030 nova_compute[230518]: 2025-10-02 12:13:02.230 2 DEBUG oslo_concurrency.processutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:13:02 np0005466030 nova_compute[230518]: 2025-10-02 12:13:02.232 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:13:02 np0005466030 nova_compute[230518]: 2025-10-02 12:13:02.232 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:13:02 np0005466030 nova_compute[230518]: 2025-10-02 12:13:02.233 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:13:02 np0005466030 nova_compute[230518]: 2025-10-02 12:13:02.255 2 DEBUG nova.storage.rbd_utils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image f85aa55e-c534-4270-b8bb-d25f8026084c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:13:02 np0005466030 nova_compute[230518]: 2025-10-02 12:13:02.259 2 DEBUG oslo_concurrency.processutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f85aa55e-c534-4270-b8bb-d25f8026084c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:13:02 np0005466030 nova_compute[230518]: 2025-10-02 12:13:02.278 2 DEBUG nova.policy [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a80f833255046e7b62d34c1c6066073', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '39ca581fbb054c959d26096ca39fef05', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:15:00 np0005466030 rsyslogd[1006]: imjournal: 2349 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct  2 08:15:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:15:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:00.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:15:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:15:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:00.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:15:01 np0005466030 nova_compute[230518]: 2025-10-02 12:15:01.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:01 np0005466030 podman[238727]: 2025-10-02 12:15:01.798257256 +0000 UTC m=+0.055604950 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:15:02 np0005466030 nova_compute[230518]: 2025-10-02 12:15:02.232 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:02 np0005466030 nova_compute[230518]: 2025-10-02 12:15:02.232 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:02 np0005466030 nova_compute[230518]: 2025-10-02 12:15:02.232 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:02 np0005466030 nova_compute[230518]: 2025-10-02 12:15:02.261 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:02 np0005466030 nova_compute[230518]: 2025-10-02 12:15:02.262 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:02 np0005466030 nova_compute[230518]: 2025-10-02 12:15:02.262 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:02 np0005466030 nova_compute[230518]: 2025-10-02 12:15:02.262 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:15:02 np0005466030 nova_compute[230518]: 2025-10-02 12:15:02.263 2 DEBUG oslo_concurrency.processutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:15:02 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3998498089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:15:02 np0005466030 nova_compute[230518]: 2025-10-02 12:15:02.708 2 DEBUG oslo_concurrency.processutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:02.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:02 np0005466030 nova_compute[230518]: 2025-10-02 12:15:02.917 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:15:02 np0005466030 nova_compute[230518]: 2025-10-02 12:15:02.918 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:15:02 np0005466030 nova_compute[230518]: 2025-10-02 12:15:02.921 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:15:02 np0005466030 nova_compute[230518]: 2025-10-02 12:15:02.922 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:15:02 np0005466030 nova_compute[230518]: 2025-10-02 12:15:02.926 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:15:02 np0005466030 nova_compute[230518]: 2025-10-02 12:15:02.926 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:15:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:02.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.124 2 WARNING nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.126 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4333MB free_disk=20.763572692871094GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.126 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.127 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.198 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Migration for instance b8f8f97e-2823-451c-ab36-7f94ade8be46 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.227 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.272 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Instance f85aa55e-c534-4270-b8bb-d25f8026084c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.273 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Instance 2b86a484-6fc6-4efa-983f-fb93053b0874 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.273 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Instance fcfe251b-73c3-4310-b646-3c6c0a8c7e6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.273 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Migration 7b3d6a5a-de37-42b3-bb4b-3c64a2479aa1 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.274 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.274 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.305 2 DEBUG nova.scheduler.client.report [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.328 2 DEBUG nova.scheduler.client.report [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.329 2 DEBUG nova.compute.provider_tree [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.354 2 DEBUG nova.scheduler.client.report [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.376 2 DEBUG nova.scheduler.client.report [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.478 2 DEBUG oslo_concurrency.processutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:15:03 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3902082590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.957 2 DEBUG oslo_concurrency.processutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.963 2 DEBUG nova.compute.provider_tree [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:15:03 np0005466030 nova_compute[230518]: 2025-10-02 12:15:03.997 2 DEBUG nova.scheduler.client.report [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:15:04 np0005466030 nova_compute[230518]: 2025-10-02 12:15:04.058 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:15:04 np0005466030 nova_compute[230518]: 2025-10-02 12:15:04.059 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:04 np0005466030 nova_compute[230518]: 2025-10-02 12:15:04.067 2 INFO nova.compute.manager [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Migrating instance to compute-2.ctlplane.example.com finished successfully.#033[00m
Oct  2 08:15:04 np0005466030 nova_compute[230518]: 2025-10-02 12:15:04.183 2 INFO nova.scheduler.client.report [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Deleted allocation for migration 7b3d6a5a-de37-42b3-bb4b-3c64a2479aa1#033[00m
Oct  2 08:15:04 np0005466030 nova_compute[230518]: 2025-10-02 12:15:04.184 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Oct  2 08:15:04 np0005466030 nova_compute[230518]: 2025-10-02 12:15:04.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:04.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:04.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:15:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:15:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:15:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:15:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:15:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:15:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:15:05 np0005466030 nova_compute[230518]: 2025-10-02 12:15:05.594 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Creating tmpfile /var/lib/nova/instances/tmpvz1p5b8e to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Oct  2 08:15:05 np0005466030 nova_compute[230518]: 2025-10-02 12:15:05.595 2 DEBUG nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvz1p5b8e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.468 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407291.0397005, b8f8f97e-2823-451c-ab36-7f94ade8be46 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.468 2 INFO nova.compute.manager [-] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.521 2 DEBUG nova.compute.manager [None req-e439849c-3ec0-4c34-8c0d-d8e694b4ee17 - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.649 2 DEBUG oslo_concurrency.lockutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "f85aa55e-c534-4270-b8bb-d25f8026084c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.649 2 DEBUG oslo_concurrency.lockutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.649 2 DEBUG oslo_concurrency.lockutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.650 2 DEBUG oslo_concurrency.lockutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.650 2 DEBUG oslo_concurrency.lockutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.651 2 INFO nova.compute.manager [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Terminating instance#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.652 2 DEBUG nova.compute.manager [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:15:06 np0005466030 kernel: tap760df1d8-a2 (unregistering): left promiscuous mode
Oct  2 08:15:06 np0005466030 NetworkManager[44960]: <info>  [1759407306.7289] device (tap760df1d8-a2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:06 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:06Z|00084|binding|INFO|Releasing lport 760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a from this chassis (sb_readonly=0)
Oct  2 08:15:06 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:06Z|00085|binding|INFO|Setting lport 760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a down in Southbound
Oct  2 08:15:06 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:06Z|00086|binding|INFO|Removing iface tap760df1d8-a2 ovn-installed in OVS
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:06.763 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:77 10.100.0.8'], port_security=['fa:16:3e:6f:96:77 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f85aa55e-c534-4270-b8bb-d25f8026084c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85ed78eb-4003-42a7-9312-f47c5830131f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39ca581fbb054c959d26096ca39fef05', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a4ed4a9c-2cdf-4db2-a179-94b54b394a70', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d885d496-7533-482b-ad35-d86c4b60006e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:15:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:06.764 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a in datapath 85ed78eb-4003-42a7-9312-f47c5830131f unbound from our chassis#033[00m
Oct  2 08:15:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:06.767 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85ed78eb-4003-42a7-9312-f47c5830131f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:15:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:06.768 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1a402bde-0c4f-44a6-ad84-d04dd1fd83a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:06.768 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f namespace which is not needed anymore#033[00m
Oct  2 08:15:06 np0005466030 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct  2 08:15:06 np0005466030 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000c.scope: Consumed 17.147s CPU time.
Oct  2 08:15:06 np0005466030 systemd-machined[188247]: Machine qemu-5-instance-0000000c terminated.
Oct  2 08:15:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:06.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.891 2 INFO nova.virt.libvirt.driver [-] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Instance destroyed successfully.#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.891 2 DEBUG nova.objects.instance [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'resources' on Instance uuid f85aa55e-c534-4270-b8bb-d25f8026084c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:06 np0005466030 neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f[236975]: [NOTICE]   (236979) : haproxy version is 2.8.14-c23fe91
Oct  2 08:15:06 np0005466030 neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f[236975]: [NOTICE]   (236979) : path to executable is /usr/sbin/haproxy
Oct  2 08:15:06 np0005466030 neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f[236975]: [WARNING]  (236979) : Exiting Master process...
Oct  2 08:15:06 np0005466030 neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f[236975]: [ALERT]    (236979) : Current worker (236981) exited with code 143 (Terminated)
Oct  2 08:15:06 np0005466030 neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f[236975]: [WARNING]  (236979) : All workers exited. Exiting... (0)
Oct  2 08:15:06 np0005466030 systemd[1]: libpod-35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb.scope: Deactivated successfully.
Oct  2 08:15:06 np0005466030 podman[239064]: 2025-10-02 12:15:06.917850742 +0000 UTC m=+0.050134409 container died 35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:15:06 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb-userdata-shm.mount: Deactivated successfully.
Oct  2 08:15:06 np0005466030 systemd[1]: var-lib-containers-storage-overlay-602c0cae69e79696e14e2b1741a65067c00fef81481eae72b85f399a9248bd39-merged.mount: Deactivated successfully.
Oct  2 08:15:06 np0005466030 podman[239064]: 2025-10-02 12:15:06.955209376 +0000 UTC m=+0.087493053 container cleanup 35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:15:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:06.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:06 np0005466030 systemd[1]: libpod-conmon-35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb.scope: Deactivated successfully.
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.987 2 DEBUG nova.virt.libvirt.vif [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:13:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1669259740',display_name='tempest-ServersAdminTestJSON-server-1669259740',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1669259740',id=12,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='39ca581fbb054c959d26096ca39fef05',ramdisk_id='',reservation_id='r-xl0a3qnb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1879159697',owner_user_name='tempest-ServersAdminTestJSON-1879159697-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:13:11Z,user_data=None,user_id='7a80f833255046e7b62d34c1c6066073',uuid=f85aa55e-c534-4270-b8bb-d25f8026084c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "address": "fa:16:3e:6f:96:77", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap760df1d8-a2", "ovs_interfaceid": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.987 2 DEBUG nova.network.os_vif_util [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converting VIF {"id": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "address": "fa:16:3e:6f:96:77", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap760df1d8-a2", "ovs_interfaceid": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.988 2 DEBUG nova.network.os_vif_util [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6f:96:77,bridge_name='br-int',has_traffic_filtering=True,id=760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap760df1d8-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.989 2 DEBUG os_vif [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:96:77,bridge_name='br-int',has_traffic_filtering=True,id=760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap760df1d8-a2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.991 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap760df1d8-a2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:15:06 np0005466030 nova_compute[230518]: 2025-10-02 12:15:06.998 2 INFO os_vif [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:96:77,bridge_name='br-int',has_traffic_filtering=True,id=760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap760df1d8-a2')#033[00m
Oct  2 08:15:07 np0005466030 podman[239105]: 2025-10-02 12:15:07.020352635 +0000 UTC m=+0.045213763 container remove 35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:15:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:07.026 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[86464e80-859c-436c-a413-9e38b836d98f]: (4, ('Thu Oct  2 12:15:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f (35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb)\n35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb\nThu Oct  2 12:15:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f (35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb)\n35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:07.028 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1de9e52e-a8f3-497c-be27-d7f24702fbed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:07.028 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85ed78eb-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:07 np0005466030 kernel: tap85ed78eb-40: left promiscuous mode
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:07.046 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e83db9d4-183d-4e62-847e-d7b6691276b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:07.080 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f1cbe24a-140e-4e29-8fb8-8ff746bdc519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:07.082 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ffeab7e1-2050-4281-acc7-7d4ab913dc6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:07.096 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[590c1a92-d089-4921-9872-dc4c7e7e66c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505390, 'reachable_time': 33646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239138, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:07.098 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:15:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:07.098 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[cfd9c74e-2598-4969-9f98-1cc67c42b6ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:07 np0005466030 systemd[1]: run-netns-ovnmeta\x2d85ed78eb\x2d4003\x2d42a7\x2d9312\x2df47c5830131f.mount: Deactivated successfully.
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.303 2 DEBUG nova.compute.manager [req-ce24f0d3-6d4e-4f88-b4bb-839bcf0c2668 req-76871c5b-bc3b-45ba-ad5d-bab944ec23e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Received event network-vif-unplugged-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.304 2 DEBUG oslo_concurrency.lockutils [req-ce24f0d3-6d4e-4f88-b4bb-839bcf0c2668 req-76871c5b-bc3b-45ba-ad5d-bab944ec23e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.304 2 DEBUG oslo_concurrency.lockutils [req-ce24f0d3-6d4e-4f88-b4bb-839bcf0c2668 req-76871c5b-bc3b-45ba-ad5d-bab944ec23e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.304 2 DEBUG oslo_concurrency.lockutils [req-ce24f0d3-6d4e-4f88-b4bb-839bcf0c2668 req-76871c5b-bc3b-45ba-ad5d-bab944ec23e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.305 2 DEBUG nova.compute.manager [req-ce24f0d3-6d4e-4f88-b4bb-839bcf0c2668 req-76871c5b-bc3b-45ba-ad5d-bab944ec23e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] No waiting events found dispatching network-vif-unplugged-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.305 2 DEBUG nova.compute.manager [req-ce24f0d3-6d4e-4f88-b4bb-839bcf0c2668 req-76871c5b-bc3b-45ba-ad5d-bab944ec23e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Received event network-vif-unplugged-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.440 2 INFO nova.virt.libvirt.driver [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Deleting instance files /var/lib/nova/instances/f85aa55e-c534-4270-b8bb-d25f8026084c_del#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.441 2 INFO nova.virt.libvirt.driver [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Deletion of /var/lib/nova/instances/f85aa55e-c534-4270-b8bb-d25f8026084c_del complete#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.468 2 DEBUG nova.compute.manager [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.662 2 INFO nova.compute.manager [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Took 1.01 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.663 2 DEBUG oslo.service.loopingcall [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.663 2 DEBUG nova.compute.manager [-] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.663 2 DEBUG nova.network.neutron [-] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.687 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.688 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.711 2 DEBUG nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvz1p5b8e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b8f8f97e-2823-451c-ab36-7f94ade8be46',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.719 2 DEBUG nova.objects.instance [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'pci_requests' on Instance uuid bb6a3b63-8cda-41b6-ac43-6f9d310fad2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.746 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.747 2 INFO nova.compute.claims [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.747 2 DEBUG nova.objects.instance [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'resources' on Instance uuid bb6a3b63-8cda-41b6-ac43-6f9d310fad2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.765 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.766 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquired lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.766 2 DEBUG nova.network.neutron [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.769 2 DEBUG nova.objects.instance [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'pci_devices' on Instance uuid bb6a3b63-8cda-41b6-ac43-6f9d310fad2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.825 2 INFO nova.compute.resource_tracker [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Updating resource usage from migration f91708f3-2f55-4a30-9404-01310d275f98#033[00m
Oct  2 08:15:07 np0005466030 nova_compute[230518]: 2025-10-02 12:15:07.826 2 DEBUG nova.compute.resource_tracker [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Starting to track incoming migration f91708f3-2f55-4a30-9404-01310d275f98 with flavor 475e3257-fad6-494a-9174-56c6af5e0ac9 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Oct  2 08:15:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:08 np0005466030 nova_compute[230518]: 2025-10-02 12:15:08.090 2 DEBUG oslo_concurrency.processutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:15:08 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2752457260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:15:08 np0005466030 nova_compute[230518]: 2025-10-02 12:15:08.538 2 DEBUG oslo_concurrency.processutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:08 np0005466030 nova_compute[230518]: 2025-10-02 12:15:08.545 2 DEBUG nova.compute.provider_tree [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:15:08 np0005466030 nova_compute[230518]: 2025-10-02 12:15:08.578 2 DEBUG nova.scheduler.client.report [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:15:08 np0005466030 nova_compute[230518]: 2025-10-02 12:15:08.612 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:08 np0005466030 nova_compute[230518]: 2025-10-02 12:15:08.612 2 INFO nova.compute.manager [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Migrating#033[00m
Oct  2 08:15:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:08.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:08.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:09 np0005466030 nova_compute[230518]: 2025-10-02 12:15:09.089 2 DEBUG nova.network.neutron [-] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:15:09 np0005466030 nova_compute[230518]: 2025-10-02 12:15:09.118 2 INFO nova.compute.manager [-] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Took 1.45 seconds to deallocate network for instance.#033[00m
Oct  2 08:15:09 np0005466030 nova_compute[230518]: 2025-10-02 12:15:09.193 2 DEBUG oslo_concurrency.lockutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:09 np0005466030 nova_compute[230518]: 2025-10-02 12:15:09.194 2 DEBUG oslo_concurrency.lockutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:09 np0005466030 nova_compute[230518]: 2025-10-02 12:15:09.371 2 DEBUG oslo_concurrency.processutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:09 np0005466030 nova_compute[230518]: 2025-10-02 12:15:09.458 2 DEBUG nova.compute.manager [req-2834744c-7da8-4346-b66b-4e7a1b407c11 req-945b4ffb-6d72-4d56-99d3-e1d5b684fb5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Received event network-vif-plugged-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:09 np0005466030 nova_compute[230518]: 2025-10-02 12:15:09.459 2 DEBUG oslo_concurrency.lockutils [req-2834744c-7da8-4346-b66b-4e7a1b407c11 req-945b4ffb-6d72-4d56-99d3-e1d5b684fb5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:09 np0005466030 nova_compute[230518]: 2025-10-02 12:15:09.459 2 DEBUG oslo_concurrency.lockutils [req-2834744c-7da8-4346-b66b-4e7a1b407c11 req-945b4ffb-6d72-4d56-99d3-e1d5b684fb5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:09 np0005466030 nova_compute[230518]: 2025-10-02 12:15:09.459 2 DEBUG oslo_concurrency.lockutils [req-2834744c-7da8-4346-b66b-4e7a1b407c11 req-945b4ffb-6d72-4d56-99d3-e1d5b684fb5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:09 np0005466030 nova_compute[230518]: 2025-10-02 12:15:09.460 2 DEBUG nova.compute.manager [req-2834744c-7da8-4346-b66b-4e7a1b407c11 req-945b4ffb-6d72-4d56-99d3-e1d5b684fb5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] No waiting events found dispatching network-vif-plugged-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:15:09 np0005466030 nova_compute[230518]: 2025-10-02 12:15:09.460 2 WARNING nova.compute.manager [req-2834744c-7da8-4346-b66b-4e7a1b407c11 req-945b4ffb-6d72-4d56-99d3-e1d5b684fb5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Received unexpected event network-vif-plugged-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:15:09 np0005466030 nova_compute[230518]: 2025-10-02 12:15:09.460 2 DEBUG nova.compute.manager [req-2834744c-7da8-4346-b66b-4e7a1b407c11 req-945b4ffb-6d72-4d56-99d3-e1d5b684fb5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Received event network-vif-deleted-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:15:09 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1446795426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:15:09 np0005466030 nova_compute[230518]: 2025-10-02 12:15:09.903 2 DEBUG oslo_concurrency.processutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:09 np0005466030 nova_compute[230518]: 2025-10-02 12:15:09.912 2 DEBUG nova.compute.provider_tree [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.017 2 DEBUG nova.scheduler.client.report [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.063 2 DEBUG oslo_concurrency.lockutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:10 np0005466030 systemd-logind[795]: New session 56 of user nova.
Oct  2 08:15:10 np0005466030 systemd[1]: Created slice User Slice of UID 42436.
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.141 2 INFO nova.scheduler.client.report [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Deleted allocations for instance f85aa55e-c534-4270-b8bb-d25f8026084c#033[00m
Oct  2 08:15:10 np0005466030 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct  2 08:15:10 np0005466030 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct  2 08:15:10 np0005466030 systemd[1]: Starting User Manager for UID 42436...
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.313 2 DEBUG oslo_concurrency.lockutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:10 np0005466030 systemd[239188]: Queued start job for default target Main User Target.
Oct  2 08:15:10 np0005466030 systemd[239188]: Created slice User Application Slice.
Oct  2 08:15:10 np0005466030 systemd[239188]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 08:15:10 np0005466030 systemd[239188]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 08:15:10 np0005466030 systemd[239188]: Reached target Paths.
Oct  2 08:15:10 np0005466030 systemd[239188]: Reached target Timers.
Oct  2 08:15:10 np0005466030 systemd[239188]: Starting D-Bus User Message Bus Socket...
Oct  2 08:15:10 np0005466030 systemd[239188]: Starting Create User's Volatile Files and Directories...
Oct  2 08:15:10 np0005466030 systemd[239188]: Finished Create User's Volatile Files and Directories.
Oct  2 08:15:10 np0005466030 systemd[239188]: Listening on D-Bus User Message Bus Socket.
Oct  2 08:15:10 np0005466030 systemd[239188]: Reached target Sockets.
Oct  2 08:15:10 np0005466030 systemd[239188]: Reached target Basic System.
Oct  2 08:15:10 np0005466030 systemd[239188]: Reached target Main User Target.
Oct  2 08:15:10 np0005466030 systemd[239188]: Startup finished in 150ms.
Oct  2 08:15:10 np0005466030 systemd[1]: Started User Manager for UID 42436.
Oct  2 08:15:10 np0005466030 systemd[1]: Started Session 56 of User nova.
Oct  2 08:15:10 np0005466030 systemd[1]: session-56.scope: Deactivated successfully.
Oct  2 08:15:10 np0005466030 systemd-logind[795]: Session 56 logged out. Waiting for processes to exit.
Oct  2 08:15:10 np0005466030 systemd-logind[795]: Removed session 56.
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.538 2 DEBUG nova.network.neutron [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Updating instance_info_cache with network_info: [{"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.561 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Releasing lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.562 2 DEBUG os_brick.utils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.563 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.572 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.573 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[bdbd5fdd-b87e-4de3-b5e4-2a2806577c7f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.573 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.581 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.582 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[d6dd8634-71e9-4bf7-8caa-b578e6df019e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.582 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.591 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.591 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[9334ef9a-e38a-43ed-a176-3433146fb773]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.592 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c55d6c-4c12-4b02-8f10-40320155ce81]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.593 2 DEBUG oslo_concurrency.processutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:10 np0005466030 systemd-logind[795]: New session 58 of user nova.
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.612 2 DEBUG oslo_concurrency.processutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:10 np0005466030 systemd[1]: Started Session 58 of User nova.
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.614 2 DEBUG os_brick.initiator.connectors.lightos [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.614 2 DEBUG os_brick.initiator.connectors.lightos [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.614 2 DEBUG os_brick.initiator.connectors.lightos [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:15:10 np0005466030 nova_compute[230518]: 2025-10-02 12:15:10.615 2 DEBUG os_brick.utils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] <== get_connector_properties: return (51ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:15:10 np0005466030 systemd-logind[795]: Session 58 logged out. Waiting for processes to exit.
Oct  2 08:15:10 np0005466030 systemd[1]: session-58.scope: Deactivated successfully.
Oct  2 08:15:10 np0005466030 systemd-logind[795]: Removed session 58.
Oct  2 08:15:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:10.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:10.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:11 np0005466030 nova_compute[230518]: 2025-10-02 12:15:11.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:15:11 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/879596293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:15:11 np0005466030 nova_compute[230518]: 2025-10-02 12:15:11.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.311 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvz1p5b8e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b8f8f97e-2823-451c-ab36-7f94ade8be46',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={ff92c1da-c1e7-425c-b20d-f332daad4188='8041298c-4154-45fe-81e2-9dfa6deeae22'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.312 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Creating instance directory: /var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.312 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Ensure instance console log exists: /var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.313 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.318 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.320 2 DEBUG nova.virt.libvirt.vif [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:14:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-927671937',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-927671937',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4db2957ac1b546178a9f2c0f24807e5b',ramdisk_id='',reservation_id='r-v8e9y6s2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-211124371',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-211124371-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:15:01Z,user_data=None,user_id='d29391679bd0482aada18c987e4c11ca',uuid=b8f8f97e-2823-451c-ab36-7f94ade8be46,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.320 2 DEBUG nova.network.os_vif_util [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Converting VIF {"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.321 2 DEBUG nova.network.os_vif_util [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.321 2 DEBUG os_vif [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.323 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.323 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.327 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap647b79a6-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.328 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap647b79a6-6c, col_values=(('external_ids', {'iface-id': '647b79a6-6cf5-4d28-afd1-9e21f2a56e32', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:be:58', 'vm-uuid': 'b8f8f97e-2823-451c-ab36-7f94ade8be46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:12 np0005466030 NetworkManager[44960]: <info>  [1759407312.3309] manager: (tap647b79a6-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.338 2 INFO os_vif [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c')#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.341 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct  2 08:15:12 np0005466030 nova_compute[230518]: 2025-10-02 12:15:12.341 2 DEBUG nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvz1p5b8e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b8f8f97e-2823-451c-ab36-7f94ade8be46',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={ff92c1da-c1e7-425c-b20d-f332daad4188='8041298c-4154-45fe-81e2-9dfa6deeae22'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct  2 08:15:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:12.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:12.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:14 np0005466030 nova_compute[230518]: 2025-10-02 12:15:14.333 2 DEBUG nova.network.neutron [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Port 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 updated with migration profile {'os_vif_delegation': True, 'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct  2 08:15:14 np0005466030 nova_compute[230518]: 2025-10-02 12:15:14.553 2 DEBUG nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvz1p5b8e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b8f8f97e-2823-451c-ab36-7f94ade8be46',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={ff92c1da-c1e7-425c-b20d-f332daad4188='8041298c-4154-45fe-81e2-9dfa6deeae22'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct  2 08:15:14 np0005466030 kernel: tap647b79a6-6c: entered promiscuous mode
Oct  2 08:15:14 np0005466030 NetworkManager[44960]: <info>  [1759407314.8168] manager: (tap647b79a6-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Oct  2 08:15:14 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:14Z|00087|binding|INFO|Claiming lport 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for this additional chassis.
Oct  2 08:15:14 np0005466030 nova_compute[230518]: 2025-10-02 12:15:14.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:14 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:14Z|00088|binding|INFO|647b79a6-6cf5-4d28-afd1-9e21f2a56e32: Claiming fa:16:3e:b9:be:58 10.100.0.12
Oct  2 08:15:14 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:14Z|00089|binding|INFO|Setting lport 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 ovn-installed in OVS
Oct  2 08:15:14 np0005466030 nova_compute[230518]: 2025-10-02 12:15:14.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:14 np0005466030 nova_compute[230518]: 2025-10-02 12:15:14.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:14 np0005466030 nova_compute[230518]: 2025-10-02 12:15:14.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:14 np0005466030 systemd-udevd[239284]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:15:14 np0005466030 systemd-machined[188247]: New machine qemu-10-instance-00000014.
Oct  2 08:15:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:15:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:14.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:15:14 np0005466030 NetworkManager[44960]: <info>  [1759407314.8571] device (tap647b79a6-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:15:14 np0005466030 NetworkManager[44960]: <info>  [1759407314.8579] device (tap647b79a6-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:15:14 np0005466030 systemd[1]: Started Virtual Machine qemu-10-instance-00000014.
Oct  2 08:15:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:14.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:15 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:15:15 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:15:15 np0005466030 nova_compute[230518]: 2025-10-02 12:15:15.622 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquiring lock "01eee71c-078c-41f4-a1c1-4591cab7195e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:15 np0005466030 nova_compute[230518]: 2025-10-02 12:15:15.623 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:15 np0005466030 nova_compute[230518]: 2025-10-02 12:15:15.656 2 DEBUG nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:15:15 np0005466030 nova_compute[230518]: 2025-10-02 12:15:15.749 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:15 np0005466030 nova_compute[230518]: 2025-10-02 12:15:15.750 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:15 np0005466030 nova_compute[230518]: 2025-10-02 12:15:15.758 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:15:15 np0005466030 nova_compute[230518]: 2025-10-02 12:15:15.759 2 INFO nova.compute.claims [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:15:16 np0005466030 nova_compute[230518]: 2025-10-02 12:15:16.021 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:16 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct  2 08:15:16 np0005466030 nova_compute[230518]: 2025-10-02 12:15:16.419 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407316.4184062, b8f8f97e-2823-451c-ab36-7f94ade8be46 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:15:16 np0005466030 nova_compute[230518]: 2025-10-02 12:15:16.421 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] VM Started (Lifecycle Event)#033[00m
Oct  2 08:15:16 np0005466030 nova_compute[230518]: 2025-10-02 12:15:16.445 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:15:16 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3930749028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:15:16 np0005466030 nova_compute[230518]: 2025-10-02 12:15:16.534 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:16 np0005466030 nova_compute[230518]: 2025-10-02 12:15:16.541 2 DEBUG nova.compute.provider_tree [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:15:16 np0005466030 nova_compute[230518]: 2025-10-02 12:15:16.559 2 DEBUG nova.scheduler.client.report [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:15:16 np0005466030 nova_compute[230518]: 2025-10-02 12:15:16.591 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:16 np0005466030 nova_compute[230518]: 2025-10-02 12:15:16.593 2 DEBUG nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:15:16 np0005466030 nova_compute[230518]: 2025-10-02 12:15:16.653 2 DEBUG nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:15:16 np0005466030 nova_compute[230518]: 2025-10-02 12:15:16.654 2 DEBUG nova.network.neutron [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:15:16 np0005466030 nova_compute[230518]: 2025-10-02 12:15:16.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:16 np0005466030 nova_compute[230518]: 2025-10-02 12:15:16.677 2 INFO nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:15:16 np0005466030 nova_compute[230518]: 2025-10-02 12:15:16.728 2 DEBUG nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:15:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:16.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:16 np0005466030 nova_compute[230518]: 2025-10-02 12:15:16.895 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407316.8950229, b8f8f97e-2823-451c-ab36-7f94ade8be46 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:15:16 np0005466030 nova_compute[230518]: 2025-10-02 12:15:16.895 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:15:16 np0005466030 nova_compute[230518]: 2025-10-02 12:15:16.958 2 DEBUG nova.policy [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '79b88925d1704f5c9b3d2114c1a9ae4f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd92e60d304e64805972937813fc99606', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:15:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:16.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:16 np0005466030 nova_compute[230518]: 2025-10-02 12:15:16.982 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:16 np0005466030 nova_compute[230518]: 2025-10-02 12:15:16.985 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.043 2 DEBUG nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.044 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.044 2 INFO nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Creating image(s)#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.067 2 DEBUG nova.storage.rbd_utils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] rbd image 01eee71c-078c-41f4-a1c1-4591cab7195e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.088 2 DEBUG nova.storage.rbd_utils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] rbd image 01eee71c-078c-41f4-a1c1-4591cab7195e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.110 2 DEBUG nova.storage.rbd_utils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] rbd image 01eee71c-078c-41f4-a1c1-4591cab7195e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.113 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.129 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.168 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.169 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.170 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.170 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.190 2 DEBUG nova.storage.rbd_utils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] rbd image 01eee71c-078c-41f4-a1c1-4591cab7195e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.194 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 01eee71c-078c-41f4-a1c1-4591cab7195e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.561 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 01eee71c-078c-41f4-a1c1-4591cab7195e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.632 2 DEBUG nova.storage.rbd_utils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] resizing rbd image 01eee71c-078c-41f4-a1c1-4591cab7195e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.759 2 DEBUG nova.objects.instance [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lazy-loading 'migration_context' on Instance uuid 01eee71c-078c-41f4-a1c1-4591cab7195e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.779 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.780 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Ensure instance console log exists: /var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.780 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.781 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:17 np0005466030 nova_compute[230518]: 2025-10-02 12:15:17.781 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:18 np0005466030 nova_compute[230518]: 2025-10-02 12:15:18.398 2 DEBUG nova.network.neutron [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Successfully created port: 0a7827d1-d2e0-4330-b738-ee929dc7af48 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:15:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:18Z|00090|binding|INFO|Claiming lport 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for this chassis.
Oct  2 08:15:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:18Z|00091|binding|INFO|647b79a6-6cf5-4d28-afd1-9e21f2a56e32: Claiming fa:16:3e:b9:be:58 10.100.0.12
Oct  2 08:15:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:18Z|00092|binding|INFO|Setting lport 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 up in Southbound
Oct  2 08:15:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.830 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:be:58 10.100.0.12'], port_security=['fa:16:3e:b9:be:58 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b8f8f97e-2823-451c-ab36-7f94ade8be46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4db2957ac1b546178a9f2c0f24807e5b', 'neutron:revision_number': '20', 'neutron:security_group_ids': '3bac25ef-a7f0-47fe-951f-dbdf1692a36b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3799c735-d38d-43c0-9348-b36c933d72da, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=647b79a6-6cf5-4d28-afd1-9e21f2a56e32) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:15:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.832 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 in datapath 5b610572-0903-4bfb-be0b-9848e0af3ae3 bound to our chassis#033[00m
Oct  2 08:15:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.834 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5b610572-0903-4bfb-be0b-9848e0af3ae3#033[00m
Oct  2 08:15:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.849 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c814f42b-71a5-4270-9bde-d07ab18c3d27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:18.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.877 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[71075d35-3b53-4778-a87d-6bcfaaff93cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.880 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[54d59eba-9833-4205-a582-d9c693e0d02b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.909 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[48469a6b-956c-4e27-889a-c6a91ad24af7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.927 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[532e262e-414c-41d9-9b87-e6343a625b8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b610572-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:0e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 27, 'tx_packets': 9, 'rx_bytes': 1630, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 27, 'tx_packets': 9, 'rx_bytes': 1630, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508657, 'reachable_time': 17815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239528, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.943 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[43e9868b-b3e5-4036-bc31-34265ea4bbc6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5b610572-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508669, 'tstamp': 508669}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239529, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5b610572-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508672, 'tstamp': 508672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239529, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.944 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b610572-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:18 np0005466030 nova_compute[230518]: 2025-10-02 12:15:18.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:18 np0005466030 nova_compute[230518]: 2025-10-02 12:15:18.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.947 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b610572-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.948 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:15:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.948 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5b610572-00, col_values=(('external_ids', {'iface-id': '02fa40d7-59fd-4885-996d-218aed489cb1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.948 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:15:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:18.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:19 np0005466030 nova_compute[230518]: 2025-10-02 12:15:19.386 2 DEBUG nova.network.neutron [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Successfully updated port: 0a7827d1-d2e0-4330-b738-ee929dc7af48 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:15:19 np0005466030 nova_compute[230518]: 2025-10-02 12:15:19.477 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquiring lock "refresh_cache-01eee71c-078c-41f4-a1c1-4591cab7195e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:15:19 np0005466030 nova_compute[230518]: 2025-10-02 12:15:19.477 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquired lock "refresh_cache-01eee71c-078c-41f4-a1c1-4591cab7195e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:15:19 np0005466030 nova_compute[230518]: 2025-10-02 12:15:19.477 2 DEBUG nova.network.neutron [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:15:19 np0005466030 nova_compute[230518]: 2025-10-02 12:15:19.569 2 DEBUG nova.compute.manager [req-c78b7900-7e86-46cc-95a5-2517fe5d3a9c req-ac066e7a-ccb8-43a3-80ec-1bf15dacfb9c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Received event network-changed-0a7827d1-d2e0-4330-b738-ee929dc7af48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:19 np0005466030 nova_compute[230518]: 2025-10-02 12:15:19.570 2 DEBUG nova.compute.manager [req-c78b7900-7e86-46cc-95a5-2517fe5d3a9c req-ac066e7a-ccb8-43a3-80ec-1bf15dacfb9c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Refreshing instance network info cache due to event network-changed-0a7827d1-d2e0-4330-b738-ee929dc7af48. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:15:19 np0005466030 nova_compute[230518]: 2025-10-02 12:15:19.570 2 DEBUG oslo_concurrency.lockutils [req-c78b7900-7e86-46cc-95a5-2517fe5d3a9c req-ac066e7a-ccb8-43a3-80ec-1bf15dacfb9c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-01eee71c-078c-41f4-a1c1-4591cab7195e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:15:19 np0005466030 nova_compute[230518]: 2025-10-02 12:15:19.596 2 INFO nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Post operation of migration started#033[00m
Oct  2 08:15:19 np0005466030 nova_compute[230518]: 2025-10-02 12:15:19.757 2 DEBUG nova.network.neutron [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:15:19 np0005466030 podman[239531]: 2025-10-02 12:15:19.812364528 +0000 UTC m=+0.055236008 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:15:19 np0005466030 podman[239530]: 2025-10-02 12:15:19.870401934 +0000 UTC m=+0.113989937 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  2 08:15:20 np0005466030 nova_compute[230518]: 2025-10-02 12:15:20.320 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:15:20 np0005466030 nova_compute[230518]: 2025-10-02 12:15:20.320 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquired lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:15:20 np0005466030 nova_compute[230518]: 2025-10-02 12:15:20.320 2 DEBUG nova.network.neutron [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:15:20 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:20Z|00093|binding|INFO|Releasing lport 02fa40d7-59fd-4885-996d-218aed489cb1 from this chassis (sb_readonly=0)
Oct  2 08:15:20 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:20Z|00094|binding|INFO|Releasing lport 2278bdaf-c37b-4127-83d4-ca11f07feaa5 from this chassis (sb_readonly=0)
Oct  2 08:15:20 np0005466030 nova_compute[230518]: 2025-10-02 12:15:20.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:20 np0005466030 systemd[1]: Stopping User Manager for UID 42436...
Oct  2 08:15:20 np0005466030 systemd[239188]: Activating special unit Exit the Session...
Oct  2 08:15:20 np0005466030 systemd[239188]: Stopped target Main User Target.
Oct  2 08:15:20 np0005466030 systemd[239188]: Stopped target Basic System.
Oct  2 08:15:20 np0005466030 systemd[239188]: Stopped target Paths.
Oct  2 08:15:20 np0005466030 systemd[239188]: Stopped target Sockets.
Oct  2 08:15:20 np0005466030 systemd[239188]: Stopped target Timers.
Oct  2 08:15:20 np0005466030 systemd[239188]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  2 08:15:20 np0005466030 systemd[239188]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 08:15:20 np0005466030 systemd[239188]: Closed D-Bus User Message Bus Socket.
Oct  2 08:15:20 np0005466030 systemd[239188]: Stopped Create User's Volatile Files and Directories.
Oct  2 08:15:20 np0005466030 systemd[239188]: Removed slice User Application Slice.
Oct  2 08:15:20 np0005466030 systemd[239188]: Reached target Shutdown.
Oct  2 08:15:20 np0005466030 systemd[239188]: Finished Exit the Session.
Oct  2 08:15:20 np0005466030 systemd[239188]: Reached target Exit the Session.
Oct  2 08:15:20 np0005466030 systemd[1]: user@42436.service: Deactivated successfully.
Oct  2 08:15:20 np0005466030 systemd[1]: Stopped User Manager for UID 42436.
Oct  2 08:15:20 np0005466030 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct  2 08:15:20 np0005466030 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct  2 08:15:20 np0005466030 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct  2 08:15:20 np0005466030 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct  2 08:15:20 np0005466030 systemd[1]: Removed slice User Slice of UID 42436.
Oct  2 08:15:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:20.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:20.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.114 2 DEBUG nova.network.neutron [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Updating instance_info_cache with network_info: [{"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.170 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Releasing lock "refresh_cache-01eee71c-078c-41f4-a1c1-4591cab7195e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.170 2 DEBUG nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Instance network_info: |[{"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.171 2 DEBUG oslo_concurrency.lockutils [req-c78b7900-7e86-46cc-95a5-2517fe5d3a9c req-ac066e7a-ccb8-43a3-80ec-1bf15dacfb9c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-01eee71c-078c-41f4-a1c1-4591cab7195e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.171 2 DEBUG nova.network.neutron [req-c78b7900-7e86-46cc-95a5-2517fe5d3a9c req-ac066e7a-ccb8-43a3-80ec-1bf15dacfb9c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Refreshing network info cache for port 0a7827d1-d2e0-4330-b738-ee929dc7af48 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.175 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Start _get_guest_xml network_info=[{"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.187 2 WARNING nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.192 2 DEBUG nova.virt.libvirt.host [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.193 2 DEBUG nova.virt.libvirt.host [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.200 2 DEBUG nova.virt.libvirt.host [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.200 2 DEBUG nova.virt.libvirt.host [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.202 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.202 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.202 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.203 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.203 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.203 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.203 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.203 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.203 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.204 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.204 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.204 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.207 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.476 2 DEBUG nova.network.neutron [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Updating instance_info_cache with network_info: [{"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.532 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Releasing lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.563 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.564 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.564 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.568 2 INFO nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct  2 08:15:21 np0005466030 virtqemud[230067]: Domain id=10 name='instance-00000014' uuid=b8f8f97e-2823-451c-ab36-7f94ade8be46 is tainted: custom-monitor
Oct  2 08:15:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:15:21 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2104195057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.663 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.686 2 DEBUG nova.storage.rbd_utils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] rbd image 01eee71c-078c-41f4-a1c1-4591cab7195e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.727 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.888 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407306.887273, f85aa55e-c534-4270-b8bb-d25f8026084c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:15:21 np0005466030 nova_compute[230518]: 2025-10-02 12:15:21.889 2 INFO nova.compute.manager [-] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.023 2 DEBUG nova.compute.manager [None req-1d4bc170-ba4c-481b-8ccf-8faa773353a2 - - - - - -] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:15:22 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1515999479' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.202 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.204 2 DEBUG nova.virt.libvirt.vif [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:15:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1959096416',display_name='tempest-ImagesOneServerTestJSON-server-1959096416',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1959096416',id=22,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d92e60d304e64805972937813fc99606',ramdisk_id='',reservation_id='r-nww065mo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-572210404',owner_user_name='tempest-ImagesOneServerTestJSON-572210404-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:15:16Z,user_data=None,user_id='79b88925d1704f5c9b3d2114c1a9ae4f',uuid=01eee71c-078c-41f4-a1c1-4591cab7195e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.204 2 DEBUG nova.network.os_vif_util [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Converting VIF {"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.205 2 DEBUG nova.network.os_vif_util [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6f:ab,bridge_name='br-int',has_traffic_filtering=True,id=0a7827d1-d2e0-4330-b738-ee929dc7af48,network=Network(377fcfd9-a6d0-4567-bd23-a9d9c96adbd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a7827d1-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.206 2 DEBUG nova.objects.instance [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lazy-loading 'pci_devices' on Instance uuid 01eee71c-078c-41f4-a1c1-4591cab7195e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.258 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:15:22 np0005466030 nova_compute[230518]:  <uuid>01eee71c-078c-41f4-a1c1-4591cab7195e</uuid>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:  <name>instance-00000016</name>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <nova:name>tempest-ImagesOneServerTestJSON-server-1959096416</nova:name>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:15:21</nova:creationTime>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:15:22 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:        <nova:user uuid="79b88925d1704f5c9b3d2114c1a9ae4f">tempest-ImagesOneServerTestJSON-572210404-project-member</nova:user>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:        <nova:project uuid="d92e60d304e64805972937813fc99606">tempest-ImagesOneServerTestJSON-572210404</nova:project>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:        <nova:port uuid="0a7827d1-d2e0-4330-b738-ee929dc7af48">
Oct  2 08:15:22 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <entry name="serial">01eee71c-078c-41f4-a1c1-4591cab7195e</entry>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <entry name="uuid">01eee71c-078c-41f4-a1c1-4591cab7195e</entry>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/01eee71c-078c-41f4-a1c1-4591cab7195e_disk">
Oct  2 08:15:22 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:15:22 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/01eee71c-078c-41f4-a1c1-4591cab7195e_disk.config">
Oct  2 08:15:22 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:15:22 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:cc:6f:ab"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <target dev="tap0a7827d1-d2"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e/console.log" append="off"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:15:22 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:15:22 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:15:22 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:15:22 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.260 2 DEBUG nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Preparing to wait for external event network-vif-plugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.260 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquiring lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.261 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.261 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.262 2 DEBUG nova.virt.libvirt.vif [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:15:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1959096416',display_name='tempest-ImagesOneServerTestJSON-server-1959096416',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1959096416',id=22,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d92e60d304e64805972937813fc99606',ramdisk_id='',reservation_id='r-nww065mo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-572210404',owner_user_name='tempest-ImagesOneServerTestJSON-572210404-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:15:16Z,user_data=None,user_id='79b88925d1704f5c9b3d2114c1a9ae4f',uuid=01eee71c-078c-41f4-a1c1-4591cab7195e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.262 2 DEBUG nova.network.os_vif_util [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Converting VIF {"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.263 2 DEBUG nova.network.os_vif_util [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6f:ab,bridge_name='br-int',has_traffic_filtering=True,id=0a7827d1-d2e0-4330-b738-ee929dc7af48,network=Network(377fcfd9-a6d0-4567-bd23-a9d9c96adbd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a7827d1-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.263 2 DEBUG os_vif [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6f:ab,bridge_name='br-int',has_traffic_filtering=True,id=0a7827d1-d2e0-4330-b738-ee929dc7af48,network=Network(377fcfd9-a6d0-4567-bd23-a9d9c96adbd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a7827d1-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.264 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.265 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.269 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a7827d1-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.270 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0a7827d1-d2, col_values=(('external_ids', {'iface-id': '0a7827d1-d2e0-4330-b738-ee929dc7af48', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:6f:ab', 'vm-uuid': '01eee71c-078c-41f4-a1c1-4591cab7195e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:22 np0005466030 NetworkManager[44960]: <info>  [1759407322.2732] manager: (tap0a7827d1-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.281 2 INFO os_vif [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6f:ab,bridge_name='br-int',has_traffic_filtering=True,id=0a7827d1-d2e0-4330-b738-ee929dc7af48,network=Network(377fcfd9-a6d0-4567-bd23-a9d9c96adbd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a7827d1-d2')#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.326 2 DEBUG nova.network.neutron [req-c78b7900-7e86-46cc-95a5-2517fe5d3a9c req-ac066e7a-ccb8-43a3-80ec-1bf15dacfb9c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Updated VIF entry in instance network info cache for port 0a7827d1-d2e0-4330-b738-ee929dc7af48. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.326 2 DEBUG nova.network.neutron [req-c78b7900-7e86-46cc-95a5-2517fe5d3a9c req-ac066e7a-ccb8-43a3-80ec-1bf15dacfb9c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Updating instance_info_cache with network_info: [{"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.405 2 DEBUG oslo_concurrency.lockutils [req-c78b7900-7e86-46cc-95a5-2517fe5d3a9c req-ac066e7a-ccb8-43a3-80ec-1bf15dacfb9c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-01eee71c-078c-41f4-a1c1-4591cab7195e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.409 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.410 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.410 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] No VIF found with MAC fa:16:3e:cc:6f:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.411 2 INFO nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Using config drive#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.437 2 DEBUG nova.storage.rbd_utils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] rbd image 01eee71c-078c-41f4-a1c1-4591cab7195e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.575 2 INFO nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.744 2 INFO nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Creating config drive at /var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e/disk.config#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.749 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7_r609qv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:22.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.875 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7_r609qv" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.901 2 DEBUG nova.storage.rbd_utils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] rbd image 01eee71c-078c-41f4-a1c1-4591cab7195e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:15:22 np0005466030 nova_compute[230518]: 2025-10-02 12:15:22.904 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e/disk.config 01eee71c-078c-41f4-a1c1-4591cab7195e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:22.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:23 np0005466030 nova_compute[230518]: 2025-10-02 12:15:23.221 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e/disk.config 01eee71c-078c-41f4-a1c1-4591cab7195e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:23 np0005466030 nova_compute[230518]: 2025-10-02 12:15:23.222 2 INFO nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Deleting local config drive /var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e/disk.config because it was imported into RBD.#033[00m
Oct  2 08:15:23 np0005466030 kernel: tap0a7827d1-d2: entered promiscuous mode
Oct  2 08:15:23 np0005466030 NetworkManager[44960]: <info>  [1759407323.2625] manager: (tap0a7827d1-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Oct  2 08:15:23 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:23Z|00095|binding|INFO|Claiming lport 0a7827d1-d2e0-4330-b738-ee929dc7af48 for this chassis.
Oct  2 08:15:23 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:23Z|00096|binding|INFO|0a7827d1-d2e0-4330-b738-ee929dc7af48: Claiming fa:16:3e:cc:6f:ab 10.100.0.13
Oct  2 08:15:23 np0005466030 nova_compute[230518]: 2025-10-02 12:15:23.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:23 np0005466030 systemd-udevd[239711]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:15:23 np0005466030 systemd-machined[188247]: New machine qemu-11-instance-00000016.
Oct  2 08:15:23 np0005466030 NetworkManager[44960]: <info>  [1759407323.3008] device (tap0a7827d1-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:15:23 np0005466030 NetworkManager[44960]: <info>  [1759407323.3016] device (tap0a7827d1-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:15:23 np0005466030 systemd[1]: Started Virtual Machine qemu-11-instance-00000016.
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.337 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:6f:ab 10.100.0.13'], port_security=['fa:16:3e:cc:6f:ab 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '01eee71c-078c-41f4-a1c1-4591cab7195e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd92e60d304e64805972937813fc99606', 'neutron:revision_number': '2', 'neutron:security_group_ids': '13422694-ff96-4d03-9ea0-adedb130ec76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6cf5d7d6-9d03-4d57-a5e5-97ce6dc98b2e, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=0a7827d1-d2e0-4330-b738-ee929dc7af48) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.338 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 0a7827d1-d2e0-4330-b738-ee929dc7af48 in datapath 377fcfd9-a6d0-4567-bd23-a9d9c96adbd5 bound to our chassis#033[00m
Oct  2 08:15:23 np0005466030 nova_compute[230518]: 2025-10-02 12:15:23.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.340 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 377fcfd9-a6d0-4567-bd23-a9d9c96adbd5#033[00m
Oct  2 08:15:23 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:23Z|00097|binding|INFO|Setting lport 0a7827d1-d2e0-4330-b738-ee929dc7af48 ovn-installed in OVS
Oct  2 08:15:23 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:23Z|00098|binding|INFO|Setting lport 0a7827d1-d2e0-4330-b738-ee929dc7af48 up in Southbound
Oct  2 08:15:23 np0005466030 nova_compute[230518]: 2025-10-02 12:15:23.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.350 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[43243dc9-a278-4652-9e71-92c144ee5f1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.350 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap377fcfd9-a1 in ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.352 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap377fcfd9-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.352 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ad762928-8d58-4630-a76c-29138b60888a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.353 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7376425c-53c4-41d6-891b-ebc0bf1d2902]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.364 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[5a2d64b8-51e5-49ba-b89d-d28876879eed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.377 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f2be35a2-081e-4415-91dc-262ec9b56108]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.403 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c459ab64-fbc8-43e9-afe6-c9a6029b5621]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:23 np0005466030 systemd-udevd[239714]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:15:23 np0005466030 NetworkManager[44960]: <info>  [1759407323.4086] manager: (tap377fcfd9-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/58)
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.408 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e88b9943-603f-4833-916a-9b2b480db174]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.435 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d399830c-d328-411d-ad3d-da8a4ef816dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.438 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7706ec-bc5a-4f32-b036-a65849859779]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:23 np0005466030 NetworkManager[44960]: <info>  [1759407323.4578] device (tap377fcfd9-a0): carrier: link connected
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.464 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6d57004b-72da-4ea6-a97a-cb2e0f917bed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.479 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1afa4bd1-3b4e-4e96-9739-59bc77ffbd72]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap377fcfd9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:25:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518701, 'reachable_time': 29948, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239745, 'error': None, 'target': 'ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.495 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8c68aec0-86be-4e87-8ae6-d6db8426bffa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb8:2505'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 518701, 'tstamp': 518701}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239746, 'error': None, 'target': 'ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.513 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[635e9745-f51d-4fd3-86a5-a4b507880c69]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap377fcfd9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:25:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518701, 'reachable_time': 29948, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239747, 'error': None, 'target': 'ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.546 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fa71cf74-964a-4113-895a-ff73f9124856]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:23 np0005466030 nova_compute[230518]: 2025-10-02 12:15:23.580 2 INFO nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct  2 08:15:23 np0005466030 nova_compute[230518]: 2025-10-02 12:15:23.586 2 DEBUG nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.610 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3a19f493-1591-4713-80a4-31dad238f938]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.612 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap377fcfd9-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.612 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.613 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap377fcfd9-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:23 np0005466030 NetworkManager[44960]: <info>  [1759407323.6155] manager: (tap377fcfd9-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Oct  2 08:15:23 np0005466030 kernel: tap377fcfd9-a0: entered promiscuous mode
Oct  2 08:15:23 np0005466030 nova_compute[230518]: 2025-10-02 12:15:23.616 2 DEBUG nova.objects.instance [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.619 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap377fcfd9-a0, col_values=(('external_ids', {'iface-id': '727141f8-bed3-42ac-abe8-be7b66cbedbb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:23 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:23Z|00099|binding|INFO|Releasing lport 727141f8-bed3-42ac-abe8-be7b66cbedbb from this chassis (sb_readonly=0)
Oct  2 08:15:23 np0005466030 nova_compute[230518]: 2025-10-02 12:15:23.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:23 np0005466030 nova_compute[230518]: 2025-10-02 12:15:23.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.636 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/377fcfd9-a6d0-4567-bd23-a9d9c96adbd5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/377fcfd9-a6d0-4567-bd23-a9d9c96adbd5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.637 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[95156b12-1172-4150-8046-3a4cf6db7b73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.638 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/377fcfd9-a6d0-4567-bd23-a9d9c96adbd5.pid.haproxy
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 377fcfd9-a6d0-4567-bd23-a9d9c96adbd5
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:15:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.640 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5', 'env', 'PROCESS_TAG=haproxy-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/377fcfd9-a6d0-4567-bd23-a9d9c96adbd5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:15:23 np0005466030 nova_compute[230518]: 2025-10-02 12:15:23.871 2 DEBUG nova.compute.manager [req-d2543263-8d98-4958-9c11-462a4f3f10eb req-c264361d-f1ae-4f62-b836-43078deefbf1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Received event network-vif-plugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:23 np0005466030 nova_compute[230518]: 2025-10-02 12:15:23.871 2 DEBUG oslo_concurrency.lockutils [req-d2543263-8d98-4958-9c11-462a4f3f10eb req-c264361d-f1ae-4f62-b836-43078deefbf1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:23 np0005466030 nova_compute[230518]: 2025-10-02 12:15:23.872 2 DEBUG oslo_concurrency.lockutils [req-d2543263-8d98-4958-9c11-462a4f3f10eb req-c264361d-f1ae-4f62-b836-43078deefbf1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:23 np0005466030 nova_compute[230518]: 2025-10-02 12:15:23.872 2 DEBUG oslo_concurrency.lockutils [req-d2543263-8d98-4958-9c11-462a4f3f10eb req-c264361d-f1ae-4f62-b836-43078deefbf1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:23 np0005466030 nova_compute[230518]: 2025-10-02 12:15:23.873 2 DEBUG nova.compute.manager [req-d2543263-8d98-4958-9c11-462a4f3f10eb req-c264361d-f1ae-4f62-b836-43078deefbf1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Processing event network-vif-plugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:15:24 np0005466030 podman[239815]: 2025-10-02 12:15:24.044142821 +0000 UTC m=+0.054057881 container create c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 08:15:24 np0005466030 systemd[1]: Started libpod-conmon-c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe.scope.
Oct  2 08:15:24 np0005466030 podman[239815]: 2025-10-02 12:15:24.013756515 +0000 UTC m=+0.023671615 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:15:24 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:15:24 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71439b31039a617ef0686867304ef27ca8200532d38131be38c55e1cda6f86ef/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:15:24 np0005466030 podman[239815]: 2025-10-02 12:15:24.14078435 +0000 UTC m=+0.150699420 container init c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 08:15:24 np0005466030 podman[239815]: 2025-10-02 12:15:24.146202341 +0000 UTC m=+0.156117411 container start c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:15:24 np0005466030 neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5[239836]: [NOTICE]   (239840) : New worker (239842) forked
Oct  2 08:15:24 np0005466030 neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5[239836]: [NOTICE]   (239840) : Loading success.
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.463 2 DEBUG nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.464 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407324.463521, 01eee71c-078c-41f4-a1c1-4591cab7195e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.465 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] VM Started (Lifecycle Event)#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.468 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.471 2 INFO nova.virt.libvirt.driver [-] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Instance spawned successfully.#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.472 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.590 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.593 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.621 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.622 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.622 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.623 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.623 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.624 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.747 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.748 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407324.4649415, 01eee71c-078c-41f4-a1c1-4591cab7195e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.748 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.779 2 INFO nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Took 7.74 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.780 2 DEBUG nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.800 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.802 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407324.4665978, 01eee71c-078c-41f4-a1c1-4591cab7195e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.803 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.862 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.864 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:15:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:15:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:24.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:15:24 np0005466030 nova_compute[230518]: 2025-10-02 12:15:24.974 2 INFO nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Took 9.26 seconds to build instance.#033[00m
Oct  2 08:15:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:24.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:25 np0005466030 nova_compute[230518]: 2025-10-02 12:15:25.031 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:25 np0005466030 nova_compute[230518]: 2025-10-02 12:15:25.537 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "refresh_cache-bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:15:25 np0005466030 nova_compute[230518]: 2025-10-02 12:15:25.537 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "refresh_cache-bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:15:25 np0005466030 nova_compute[230518]: 2025-10-02 12:15:25.538 2 DEBUG nova.network.neutron [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:15:25 np0005466030 nova_compute[230518]: 2025-10-02 12:15:25.839 2 DEBUG nova.network.neutron [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:15:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:25.915 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:25.916 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:25.918 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:26 np0005466030 nova_compute[230518]: 2025-10-02 12:15:26.225 2 DEBUG nova.network.neutron [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:15:26 np0005466030 nova_compute[230518]: 2025-10-02 12:15:26.451 2 DEBUG nova.compute.manager [req-61c41691-a737-4ea4-ae74-ae918f6d8c08 req-760d3eab-4615-4fb0-b683-6039533a41f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Received event network-vif-plugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:26 np0005466030 nova_compute[230518]: 2025-10-02 12:15:26.452 2 DEBUG oslo_concurrency.lockutils [req-61c41691-a737-4ea4-ae74-ae918f6d8c08 req-760d3eab-4615-4fb0-b683-6039533a41f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:26 np0005466030 nova_compute[230518]: 2025-10-02 12:15:26.452 2 DEBUG oslo_concurrency.lockutils [req-61c41691-a737-4ea4-ae74-ae918f6d8c08 req-760d3eab-4615-4fb0-b683-6039533a41f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:26 np0005466030 nova_compute[230518]: 2025-10-02 12:15:26.452 2 DEBUG oslo_concurrency.lockutils [req-61c41691-a737-4ea4-ae74-ae918f6d8c08 req-760d3eab-4615-4fb0-b683-6039533a41f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:26 np0005466030 nova_compute[230518]: 2025-10-02 12:15:26.452 2 DEBUG nova.compute.manager [req-61c41691-a737-4ea4-ae74-ae918f6d8c08 req-760d3eab-4615-4fb0-b683-6039533a41f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] No waiting events found dispatching network-vif-plugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:15:26 np0005466030 nova_compute[230518]: 2025-10-02 12:15:26.453 2 WARNING nova.compute.manager [req-61c41691-a737-4ea4-ae74-ae918f6d8c08 req-760d3eab-4615-4fb0-b683-6039533a41f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Received unexpected event network-vif-plugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:15:26 np0005466030 nova_compute[230518]: 2025-10-02 12:15:26.458 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "refresh_cache-bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:15:26 np0005466030 nova_compute[230518]: 2025-10-02 12:15:26.706 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Oct  2 08:15:26 np0005466030 nova_compute[230518]: 2025-10-02 12:15:26.707 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:15:26 np0005466030 nova_compute[230518]: 2025-10-02 12:15:26.707 2 INFO nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Creating image(s)#033[00m
Oct  2 08:15:26 np0005466030 nova_compute[230518]: 2025-10-02 12:15:26.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:26 np0005466030 nova_compute[230518]: 2025-10-02 12:15:26.755 2 DEBUG nova.storage.rbd_utils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] creating snapshot(nova-resize) on rbd image(bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:15:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:26.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:26 np0005466030 nova_compute[230518]: 2025-10-02 12:15:26.948 2 DEBUG nova.compute.manager [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:26.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.001 2 INFO nova.compute.manager [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] instance snapshotting#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.251 2 INFO nova.virt.libvirt.driver [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Beginning live snapshot process#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.277 2 DEBUG oslo_concurrency.lockutils [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.278 2 DEBUG oslo_concurrency.lockutils [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.278 2 DEBUG oslo_concurrency.lockutils [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.278 2 DEBUG oslo_concurrency.lockutils [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.279 2 DEBUG oslo_concurrency.lockutils [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.280 2 INFO nova.compute.manager [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Terminating instance#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.281 2 DEBUG nova.compute.manager [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:15:27 np0005466030 kernel: tap647b79a6-6c (unregistering): left promiscuous mode
Oct  2 08:15:27 np0005466030 NetworkManager[44960]: <info>  [1759407327.3317] device (tap647b79a6-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:15:27 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:27Z|00100|binding|INFO|Releasing lport 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 from this chassis (sb_readonly=0)
Oct  2 08:15:27 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:27Z|00101|binding|INFO|Setting lport 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 down in Southbound
Oct  2 08:15:27 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:27Z|00102|binding|INFO|Removing iface tap647b79a6-6c ovn-installed in OVS
Oct  2 08:15:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.392 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:be:58 10.100.0.12'], port_security=['fa:16:3e:b9:be:58 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b8f8f97e-2823-451c-ab36-7f94ade8be46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4db2957ac1b546178a9f2c0f24807e5b', 'neutron:revision_number': '22', 'neutron:security_group_ids': '3bac25ef-a7f0-47fe-951f-dbdf1692a36b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3799c735-d38d-43c0-9348-b36c933d72da, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=647b79a6-6cf5-4d28-afd1-9e21f2a56e32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:15:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.395 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 in datapath 5b610572-0903-4bfb-be0b-9848e0af3ae3 unbound from our chassis#033[00m
Oct  2 08:15:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.397 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5b610572-0903-4bfb-be0b-9848e0af3ae3#033[00m
Oct  2 08:15:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.416 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[131440ef-79a4-4151-80d5-418163cfa2a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:27 np0005466030 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000014.scope: Deactivated successfully.
Oct  2 08:15:27 np0005466030 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000014.scope: Consumed 2.222s CPU time.
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:27 np0005466030 systemd-machined[188247]: Machine qemu-10-instance-00000014 terminated.
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.441 2 DEBUG nova.virt.libvirt.imagebackend [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:15:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.446 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c62378e6-bb22-4d96-bf61-e8ef75c4c1a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.449 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0264e58a-8cd6-45be-8c3c-a748a30246a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.477 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[36b6c7dd-607f-4b62-bcd5-0014d39a3f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.494 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6d58de-fb92-4344-aa27-b9fe705f3fec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b610572-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:0e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 11, 'rx_bytes': 2260, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 11, 'rx_bytes': 2260, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508657, 'reachable_time': 17815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239931, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.511 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[91fdd428-1a8a-4977-baf3-24ba07dfcade]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5b610572-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508669, 'tstamp': 508669}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239934, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5b610572-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508672, 'tstamp': 508672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239934, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.517 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b610572-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.522 2 INFO nova.virt.libvirt.driver [-] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Instance destroyed successfully.#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.523 2 DEBUG nova.objects.instance [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lazy-loading 'resources' on Instance uuid b8f8f97e-2823-451c-ab36-7f94ade8be46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.529 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b610572-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.530 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:15:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.530 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5b610572-00, col_values=(('external_ids', {'iface-id': '02fa40d7-59fd-4885-996d-218aed489cb1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.531 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.539 2 DEBUG nova.virt.libvirt.vif [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:14:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-927671937',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-927671937',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4db2957ac1b546178a9f2c0f24807e5b',ramdisk_id='',reservation_id='r-v8e9y6s2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-211124371',owner_user
_name='tempest-LiveAutoBlockMigrationV225Test-211124371-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:15:23Z,user_data=None,user_id='d29391679bd0482aada18c987e4c11ca',uuid=b8f8f97e-2823-451c-ab36-7f94ade8be46,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.539 2 DEBUG nova.network.os_vif_util [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Converting VIF {"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.540 2 DEBUG nova.network.os_vif_util [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.541 2 DEBUG os_vif [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.543 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap647b79a6-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.547 2 INFO os_vif [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c')#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.656 2 DEBUG nova.compute.manager [req-07def723-d909-4b57-948a-b8f69637a6b1 req-332dfc20-24e7-404c-8a0a-be57e4a75214 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.657 2 DEBUG oslo_concurrency.lockutils [req-07def723-d909-4b57-948a-b8f69637a6b1 req-332dfc20-24e7-404c-8a0a-be57e4a75214 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.657 2 DEBUG oslo_concurrency.lockutils [req-07def723-d909-4b57-948a-b8f69637a6b1 req-332dfc20-24e7-404c-8a0a-be57e4a75214 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.657 2 DEBUG oslo_concurrency.lockutils [req-07def723-d909-4b57-948a-b8f69637a6b1 req-332dfc20-24e7-404c-8a0a-be57e4a75214 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.658 2 DEBUG nova.compute.manager [req-07def723-d909-4b57-948a-b8f69637a6b1 req-332dfc20-24e7-404c-8a0a-be57e4a75214 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] No waiting events found dispatching network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.658 2 DEBUG nova.compute.manager [req-07def723-d909-4b57-948a-b8f69637a6b1 req-332dfc20-24e7-404c-8a0a-be57e4a75214 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:15:27 np0005466030 nova_compute[230518]: 2025-10-02 12:15:27.683 2 DEBUG nova.storage.rbd_utils [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] creating snapshot(89ce9dda825b4bc98f90246d0c92a59d) on rbd image(01eee71c-078c-41f4-a1c1-4591cab7195e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:15:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e149 e149: 3 total, 3 up, 3 in
Oct  2 08:15:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.505 2 INFO nova.virt.libvirt.driver [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Deleting instance files /var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46_del#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.506 2 INFO nova.virt.libvirt.driver [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Deletion of /var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46_del complete#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.555 2 INFO nova.compute.manager [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Took 1.27 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.556 2 DEBUG oslo.service.loopingcall [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.557 2 DEBUG nova.compute.manager [-] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.557 2 DEBUG nova.network.neutron [-] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.722 2 DEBUG nova.objects.instance [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'trusted_certs' on Instance uuid bb6a3b63-8cda-41b6-ac43-6f9d310fad2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.832 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.832 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Ensure instance console log exists: /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.834 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.834 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.834 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.836 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.839 2 WARNING nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.844 2 DEBUG nova.virt.libvirt.host [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.845 2 DEBUG nova.virt.libvirt.host [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.847 2 DEBUG nova.virt.libvirt.host [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.848 2 DEBUG nova.virt.libvirt.host [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.849 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.849 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='475e3257-fad6-494a-9174-56c6af5e0ac9',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.850 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.850 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.850 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.851 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.851 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.851 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.852 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.852 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.852 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.852 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.853 2 DEBUG nova.objects.instance [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid bb6a3b63-8cda-41b6-ac43-6f9d310fad2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:28 np0005466030 nova_compute[230518]: 2025-10-02 12:15:28.867 2 DEBUG oslo_concurrency.processutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:28.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:28.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e150 e150: 3 total, 3 up, 3 in
Oct  2 08:15:29 np0005466030 nova_compute[230518]: 2025-10-02 12:15:29.217 2 DEBUG nova.storage.rbd_utils [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] cloning vms/01eee71c-078c-41f4-a1c1-4591cab7195e_disk@89ce9dda825b4bc98f90246d0c92a59d to images/9f2e97bd-159f-41e5-875d-f066be38a116 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:15:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:15:29 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2568182976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:15:29 np0005466030 nova_compute[230518]: 2025-10-02 12:15:29.308 2 DEBUG oslo_concurrency.processutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:29 np0005466030 nova_compute[230518]: 2025-10-02 12:15:29.350 2 DEBUG nova.storage.rbd_utils [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] flattening images/9f2e97bd-159f-41e5-875d-f066be38a116 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:15:29 np0005466030 nova_compute[230518]: 2025-10-02 12:15:29.402 2 DEBUG oslo_concurrency.processutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:29 np0005466030 nova_compute[230518]: 2025-10-02 12:15:29.678 2 DEBUG nova.storage.rbd_utils [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] removing snapshot(89ce9dda825b4bc98f90246d0c92a59d) on rbd image(01eee71c-078c-41f4-a1c1-4591cab7195e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:15:29 np0005466030 nova_compute[230518]: 2025-10-02 12:15:29.687 2 DEBUG nova.network.neutron [-] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:15:29 np0005466030 nova_compute[230518]: 2025-10-02 12:15:29.709 2 INFO nova.compute.manager [-] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Took 1.15 seconds to deallocate network for instance.#033[00m
Oct  2 08:15:29 np0005466030 nova_compute[230518]: 2025-10-02 12:15:29.801 2 DEBUG nova.compute.manager [req-a28d6466-df5f-47a2-83d6-216fd7a35754 req-d4b62304-5d11-4c4d-97a7-eccdca96361c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:29 np0005466030 nova_compute[230518]: 2025-10-02 12:15:29.802 2 DEBUG oslo_concurrency.lockutils [req-a28d6466-df5f-47a2-83d6-216fd7a35754 req-d4b62304-5d11-4c4d-97a7-eccdca96361c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:29 np0005466030 nova_compute[230518]: 2025-10-02 12:15:29.802 2 DEBUG oslo_concurrency.lockutils [req-a28d6466-df5f-47a2-83d6-216fd7a35754 req-d4b62304-5d11-4c4d-97a7-eccdca96361c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:29 np0005466030 nova_compute[230518]: 2025-10-02 12:15:29.802 2 DEBUG oslo_concurrency.lockutils [req-a28d6466-df5f-47a2-83d6-216fd7a35754 req-d4b62304-5d11-4c4d-97a7-eccdca96361c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:29 np0005466030 nova_compute[230518]: 2025-10-02 12:15:29.803 2 DEBUG nova.compute.manager [req-a28d6466-df5f-47a2-83d6-216fd7a35754 req-d4b62304-5d11-4c4d-97a7-eccdca96361c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] No waiting events found dispatching network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:15:29 np0005466030 nova_compute[230518]: 2025-10-02 12:15:29.803 2 WARNING nova.compute.manager [req-a28d6466-df5f-47a2-83d6-216fd7a35754 req-d4b62304-5d11-4c4d-97a7-eccdca96361c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received unexpected event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:15:29 np0005466030 nova_compute[230518]: 2025-10-02 12:15:29.803 2 DEBUG nova.compute.manager [req-a28d6466-df5f-47a2-83d6-216fd7a35754 req-d4b62304-5d11-4c4d-97a7-eccdca96361c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-deleted-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:29 np0005466030 podman[240150]: 2025-10-02 12:15:29.806954677 +0000 UTC m=+0.063832019 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:15:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:15:29 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/917251262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:15:29 np0005466030 nova_compute[230518]: 2025-10-02 12:15:29.870 2 DEBUG oslo_concurrency.processutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:29 np0005466030 nova_compute[230518]: 2025-10-02 12:15:29.872 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:15:29 np0005466030 nova_compute[230518]:  <uuid>bb6a3b63-8cda-41b6-ac43-6f9d310fad2a</uuid>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:  <name>instance-00000015</name>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:  <memory>196608</memory>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <nova:name>tempest-MigrationsAdminTest-server-399340879</nova:name>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:15:28</nova:creationTime>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.micro">
Oct  2 08:15:29 np0005466030 nova_compute[230518]:        <nova:memory>192</nova:memory>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:        <nova:user uuid="ac1b39d94ed94e2490ad953afb3c225f">tempest-MigrationsAdminTest-1653457839-project-member</nova:user>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:        <nova:project uuid="3d306048f2854052ba5317253b834aa7">tempest-MigrationsAdminTest-1653457839</nova:project>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <nova:ports/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <entry name="serial">bb6a3b63-8cda-41b6-ac43-6f9d310fad2a</entry>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <entry name="uuid">bb6a3b63-8cda-41b6-ac43-6f9d310fad2a</entry>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk">
Oct  2 08:15:29 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:15:29 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk.config">
Oct  2 08:15:29 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:15:29 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a/console.log" append="off"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:15:29 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:15:29 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:15:29 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:15:29 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:15:29 np0005466030 nova_compute[230518]: 2025-10-02 12:15:29.932 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:15:29 np0005466030 nova_compute[230518]: 2025-10-02 12:15:29.932 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:15:29 np0005466030 nova_compute[230518]: 2025-10-02 12:15:29.946 2 INFO nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Using config drive#033[00m
Oct  2 08:15:30 np0005466030 nova_compute[230518]: 2025-10-02 12:15:30.016 2 INFO nova.compute.manager [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Took 0.31 seconds to detach 1 volumes for instance.#033[00m
Oct  2 08:15:30 np0005466030 nova_compute[230518]: 2025-10-02 12:15:30.017 2 DEBUG nova.compute.manager [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Deleting volume: ff92c1da-c1e7-425c-b20d-f332daad4188 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Oct  2 08:15:30 np0005466030 systemd-machined[188247]: New machine qemu-12-instance-00000015.
Oct  2 08:15:30 np0005466030 nova_compute[230518]: 2025-10-02 12:15:30.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:30 np0005466030 nova_compute[230518]: 2025-10-02 12:15:30.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:15:30 np0005466030 systemd[1]: Started Virtual Machine qemu-12-instance-00000015.
Oct  2 08:15:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e151 e151: 3 total, 3 up, 3 in
Oct  2 08:15:30 np0005466030 nova_compute[230518]: 2025-10-02 12:15:30.228 2 DEBUG nova.storage.rbd_utils [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] creating snapshot(snap) on rbd image(9f2e97bd-159f-41e5-875d-f066be38a116) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:15:30 np0005466030 nova_compute[230518]: 2025-10-02 12:15:30.279 2 DEBUG oslo_concurrency.lockutils [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:30 np0005466030 nova_compute[230518]: 2025-10-02 12:15:30.280 2 DEBUG oslo_concurrency.lockutils [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:30 np0005466030 nova_compute[230518]: 2025-10-02 12:15:30.285 2 DEBUG oslo_concurrency.lockutils [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:30 np0005466030 nova_compute[230518]: 2025-10-02 12:15:30.320 2 INFO nova.scheduler.client.report [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Deleted allocations for instance b8f8f97e-2823-451c-ab36-7f94ade8be46#033[00m
Oct  2 08:15:30 np0005466030 nova_compute[230518]: 2025-10-02 12:15:30.375 2 DEBUG oslo_concurrency.lockutils [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:15:30 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3343551856' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:15:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:15:30 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3343551856' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:15:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:30.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:30 np0005466030 nova_compute[230518]: 2025-10-02 12:15:30.905 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407330.904805, bb6a3b63-8cda-41b6-ac43-6f9d310fad2a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:15:30 np0005466030 nova_compute[230518]: 2025-10-02 12:15:30.905 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:15:30 np0005466030 nova_compute[230518]: 2025-10-02 12:15:30.907 2 DEBUG nova.compute.manager [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:15:30 np0005466030 nova_compute[230518]: 2025-10-02 12:15:30.910 2 INFO nova.virt.libvirt.driver [-] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance running successfully.#033[00m
Oct  2 08:15:30 np0005466030 virtqemud[230067]: argument unsupported: QEMU guest agent is not configured
Oct  2 08:15:30 np0005466030 nova_compute[230518]: 2025-10-02 12:15:30.912 2 DEBUG nova.virt.libvirt.guest [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 08:15:30 np0005466030 nova_compute[230518]: 2025-10-02 12:15:30.913 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Oct  2 08:15:30 np0005466030 nova_compute[230518]: 2025-10-02 12:15:30.944 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:30 np0005466030 nova_compute[230518]: 2025-10-02 12:15:30.946 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:15:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:30.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.005 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.005 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407330.9067378, bb6a3b63-8cda-41b6-ac43-6f9d310fad2a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.006 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] VM Started (Lifecycle Event)#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.029 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.049 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:15:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e152 e152: 3 total, 3 up, 3 in
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.429 2 DEBUG oslo_concurrency.lockutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "2b86a484-6fc6-4efa-983f-fb93053b0874" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.429 2 DEBUG oslo_concurrency.lockutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.430 2 DEBUG oslo_concurrency.lockutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.430 2 DEBUG oslo_concurrency.lockutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.431 2 DEBUG oslo_concurrency.lockutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.432 2 INFO nova.compute.manager [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Terminating instance#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.433 2 DEBUG nova.compute.manager [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:15:31 np0005466030 kernel: tap8879d541-11 (unregistering): left promiscuous mode
Oct  2 08:15:31 np0005466030 NetworkManager[44960]: <info>  [1759407331.4875] device (tap8879d541-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:31 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:31Z|00103|binding|INFO|Releasing lport 8879d541-1199-497a-b096-b45e17e4df04 from this chassis (sb_readonly=0)
Oct  2 08:15:31 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:31Z|00104|binding|INFO|Setting lport 8879d541-1199-497a-b096-b45e17e4df04 down in Southbound
Oct  2 08:15:31 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:31Z|00105|binding|INFO|Releasing lport 96e672de-12ad-4022-be24-94113ee6de10 from this chassis (sb_readonly=0)
Oct  2 08:15:31 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:31Z|00106|binding|INFO|Setting lport 96e672de-12ad-4022-be24-94113ee6de10 down in Southbound
Oct  2 08:15:31 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:31Z|00107|binding|INFO|Removing iface tap8879d541-11 ovn-installed in OVS
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:31 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:31Z|00108|binding|INFO|Releasing lport 02fa40d7-59fd-4885-996d-218aed489cb1 from this chassis (sb_readonly=0)
Oct  2 08:15:31 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:31Z|00109|binding|INFO|Releasing lport 2278bdaf-c37b-4127-83d4-ca11f07feaa5 from this chassis (sb_readonly=0)
Oct  2 08:15:31 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:31Z|00110|binding|INFO|Releasing lport 727141f8-bed3-42ac-abe8-be7b66cbedbb from this chassis (sb_readonly=0)
Oct  2 08:15:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.523 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:8f:1e 10.100.0.4'], port_security=['fa:16:3e:d1:8f:1e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1633959326', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2b86a484-6fc6-4efa-983f-fb93053b0874', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1633959326', 'neutron:project_id': '4db2957ac1b546178a9f2c0f24807e5b', 'neutron:revision_number': '11', 'neutron:security_group_ids': '3bac25ef-a7f0-47fe-951f-dbdf1692a36b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3799c735-d38d-43c0-9348-b36c933d72da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8879d541-1199-497a-b096-b45e17e4df04) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:15:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.524 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:a4:98 19.80.0.36'], port_security=['fa:16:3e:bb:a4:98 19.80.0.36'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['8879d541-1199-497a-b096-b45e17e4df04'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1987210166', 'neutron:cidrs': '19.80.0.36/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bdc26f36-19a2-41f9-8f78-61503fbb20a7', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1987210166', 'neutron:project_id': '4db2957ac1b546178a9f2c0f24807e5b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '3bac25ef-a7f0-47fe-951f-dbdf1692a36b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=ba88d201-1b94-4e72-bbe3-032bdf9cfc2d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=96e672de-12ad-4022-be24-94113ee6de10) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:15:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.526 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8879d541-1199-497a-b096-b45e17e4df04 in datapath 5b610572-0903-4bfb-be0b-9848e0af3ae3 unbound from our chassis#033[00m
Oct  2 08:15:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.527 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5b610572-0903-4bfb-be0b-9848e0af3ae3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:15:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.529 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ee073c81-60b9-440c-9f09-9c6e3eea85c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.530 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3 namespace which is not needed anymore#033[00m
Oct  2 08:15:31 np0005466030 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct  2 08:15:31 np0005466030 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Consumed 7.006s CPU time.
Oct  2 08:15:31 np0005466030 systemd-machined[188247]: Machine qemu-6-instance-0000000d terminated.
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:31 np0005466030 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[237797]: [NOTICE]   (237801) : haproxy version is 2.8.14-c23fe91
Oct  2 08:15:31 np0005466030 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[237797]: [NOTICE]   (237801) : path to executable is /usr/sbin/haproxy
Oct  2 08:15:31 np0005466030 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[237797]: [WARNING]  (237801) : Exiting Master process...
Oct  2 08:15:31 np0005466030 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[237797]: [WARNING]  (237801) : Exiting Master process...
Oct  2 08:15:31 np0005466030 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[237797]: [ALERT]    (237801) : Current worker (237803) exited with code 143 (Terminated)
Oct  2 08:15:31 np0005466030 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[237797]: [WARNING]  (237801) : All workers exited. Exiting... (0)
Oct  2 08:15:31 np0005466030 systemd[1]: libpod-8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf.scope: Deactivated successfully.
Oct  2 08:15:31 np0005466030 conmon[237797]: conmon 8953e0191abc6f8c7cfb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf.scope/container/memory.events
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.664 2 INFO nova.virt.libvirt.driver [-] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Instance destroyed successfully.#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.664 2 DEBUG nova.objects.instance [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lazy-loading 'resources' on Instance uuid 2b86a484-6fc6-4efa-983f-fb93053b0874 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:31 np0005466030 podman[240288]: 2025-10-02 12:15:31.668146323 +0000 UTC m=+0.055150975 container died 8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.686 2 DEBUG nova.virt.libvirt.vif [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:13:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-522976997',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-522976997',id=13,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:25Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4db2957ac1b546178a9f2c0f24807e5b',ramdisk_id='',reservation_id='r-es3dgd0n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',ima
ge_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-211124371',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-211124371-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:13:49Z,user_data=None,user_id='d29391679bd0482aada18c987e4c11ca',uuid=2b86a484-6fc6-4efa-983f-fb93053b0874,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8879d541-1199-497a-b096-b45e17e4df04", "address": "fa:16:3e:d1:8f:1e", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8879d541-11", "ovs_interfaceid": "8879d541-1199-497a-b096-b45e17e4df04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.687 2 DEBUG nova.network.os_vif_util [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Converting VIF {"id": "8879d541-1199-497a-b096-b45e17e4df04", "address": "fa:16:3e:d1:8f:1e", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8879d541-11", "ovs_interfaceid": "8879d541-1199-497a-b096-b45e17e4df04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.688 2 DEBUG nova.network.os_vif_util [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d1:8f:1e,bridge_name='br-int',has_traffic_filtering=True,id=8879d541-1199-497a-b096-b45e17e4df04,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8879d541-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.688 2 DEBUG os_vif [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:8f:1e,bridge_name='br-int',has_traffic_filtering=True,id=8879d541-1199-497a-b096-b45e17e4df04,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8879d541-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.690 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8879d541-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.697 2 INFO os_vif [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:8f:1e,bridge_name='br-int',has_traffic_filtering=True,id=8879d541-1199-497a-b096-b45e17e4df04,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8879d541-11')#033[00m
Oct  2 08:15:31 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf-userdata-shm.mount: Deactivated successfully.
Oct  2 08:15:31 np0005466030 systemd[1]: var-lib-containers-storage-overlay-7b3b2f9ef863e45e32afb62235a57ad96286386eb759fa04a2bc6eb89ccc840d-merged.mount: Deactivated successfully.
Oct  2 08:15:31 np0005466030 podman[240288]: 2025-10-02 12:15:31.717104933 +0000 UTC m=+0.104109585 container cleanup 8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:31 np0005466030 systemd[1]: libpod-conmon-8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf.scope: Deactivated successfully.
Oct  2 08:15:31 np0005466030 podman[240344]: 2025-10-02 12:15:31.784503813 +0000 UTC m=+0.043574222 container remove 8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:15:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.793 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[db2c1e25-1d81-4961-b267-4b5699621d30]: (4, ('Thu Oct  2 12:15:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3 (8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf)\n8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf\nThu Oct  2 12:15:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3 (8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf)\n8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.795 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8ea8b261-f1b4-4f00-88b2-20f91c484421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.797 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b610572-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:31 np0005466030 kernel: tap5b610572-00: left promiscuous mode
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.804 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe4b492-f81a-4adc-9e04-ca92ac6fb041]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.829 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe27ba3-d685-47fc-8884-e481c1ba19ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.830 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c14f3e5c-f480-49c5-8579-259ed8f7de62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.848 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[112a14ee-38b3-4b31-af34-ca4910ace156]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508651, 'reachable_time': 41789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240362, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.850 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:15:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.851 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[360c4824-c969-44b2-be3a-cfb721fcd40b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.851 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 96e672de-12ad-4022-be24-94113ee6de10 in datapath bdc26f36-19a2-41f9-8f78-61503fbb20a7 unbound from our chassis#033[00m
Oct  2 08:15:31 np0005466030 systemd[1]: run-netns-ovnmeta\x2d5b610572\x2d0903\x2d4bfb\x2dbe0b\x2d9848e0af3ae3.mount: Deactivated successfully.
Oct  2 08:15:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.853 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bdc26f36-19a2-41f9-8f78-61503fbb20a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:15:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.854 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[727f8a43-ff2b-402c-82fd-0772770b05a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.855 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7 namespace which is not needed anymore#033[00m
Oct  2 08:15:31 np0005466030 podman[240360]: 2025-10-02 12:15:31.902799893 +0000 UTC m=+0.062134465 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.946 2 DEBUG nova.compute.manager [req-44905104-e53b-490f-a923-53987004d2ed req-a0681546-e667-480d-8909-88af1e5e1bf2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Received event network-vif-unplugged-8879d541-1199-497a-b096-b45e17e4df04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.946 2 DEBUG oslo_concurrency.lockutils [req-44905104-e53b-490f-a923-53987004d2ed req-a0681546-e667-480d-8909-88af1e5e1bf2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.951 2 DEBUG oslo_concurrency.lockutils [req-44905104-e53b-490f-a923-53987004d2ed req-a0681546-e667-480d-8909-88af1e5e1bf2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.951 2 DEBUG oslo_concurrency.lockutils [req-44905104-e53b-490f-a923-53987004d2ed req-a0681546-e667-480d-8909-88af1e5e1bf2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.952 2 DEBUG nova.compute.manager [req-44905104-e53b-490f-a923-53987004d2ed req-a0681546-e667-480d-8909-88af1e5e1bf2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] No waiting events found dispatching network-vif-unplugged-8879d541-1199-497a-b096-b45e17e4df04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:15:31 np0005466030 nova_compute[230518]: 2025-10-02 12:15:31.952 2 DEBUG nova.compute.manager [req-44905104-e53b-490f-a923-53987004d2ed req-a0681546-e667-480d-8909-88af1e5e1bf2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Received event network-vif-unplugged-8879d541-1199-497a-b096-b45e17e4df04 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:15:31 np0005466030 neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7[237872]: [NOTICE]   (237876) : haproxy version is 2.8.14-c23fe91
Oct  2 08:15:31 np0005466030 neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7[237872]: [NOTICE]   (237876) : path to executable is /usr/sbin/haproxy
Oct  2 08:15:31 np0005466030 neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7[237872]: [WARNING]  (237876) : Exiting Master process...
Oct  2 08:15:31 np0005466030 neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7[237872]: [ALERT]    (237876) : Current worker (237878) exited with code 143 (Terminated)
Oct  2 08:15:31 np0005466030 neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7[237872]: [WARNING]  (237876) : All workers exited. Exiting... (0)
Oct  2 08:15:31 np0005466030 systemd[1]: libpod-8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f.scope: Deactivated successfully.
Oct  2 08:15:31 np0005466030 conmon[237872]: conmon 8c2bd10725ec50506764 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f.scope/container/memory.events
Oct  2 08:15:31 np0005466030 podman[240395]: 2025-10-02 12:15:31.984448791 +0000 UTC m=+0.045819762 container died 8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:15:32 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f-userdata-shm.mount: Deactivated successfully.
Oct  2 08:15:32 np0005466030 systemd[1]: var-lib-containers-storage-overlay-542123bb31b236d9dd2f1f394035551e9fafc6bc648a18f686f01cdda5789143-merged.mount: Deactivated successfully.
Oct  2 08:15:32 np0005466030 podman[240395]: 2025-10-02 12:15:32.026518044 +0000 UTC m=+0.087889005 container cleanup 8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:15:32 np0005466030 systemd[1]: libpod-conmon-8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f.scope: Deactivated successfully.
Oct  2 08:15:32 np0005466030 nova_compute[230518]: 2025-10-02 12:15:32.077 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:32 np0005466030 nova_compute[230518]: 2025-10-02 12:15:32.078 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:15:32 np0005466030 podman[240425]: 2025-10-02 12:15:32.100069958 +0000 UTC m=+0.044852322 container remove 8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 08:15:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:32.106 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1eecb30e-ce16-4aca-8e1a-d2d5f5f325c7]: (4, ('Thu Oct  2 12:15:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7 (8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f)\n8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f\nThu Oct  2 12:15:32 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7 (8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f)\n8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:32.108 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9687ef26-0a4d-4aaa-bfaf-0e567c6626b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:32.108 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbdc26f36-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:32 np0005466030 kernel: tapbdc26f36-10: left promiscuous mode
Oct  2 08:15:32 np0005466030 nova_compute[230518]: 2025-10-02 12:15:32.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:32 np0005466030 nova_compute[230518]: 2025-10-02 12:15:32.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:32.128 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[19adf81b-9fb1-453d-9a7d-e7d7c9a3b9f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:32.150 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c676e5d8-e517-4165-9329-24c6cccb42b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:32.152 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c376e6-9956-4d40-8d0b-cf78cc0d3988]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:32.167 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[05bfd548-7042-4b86-853d-26bb80ae7be4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508736, 'reachable_time': 37394, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240440, 'error': None, 'target': 'ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:32.170 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:15:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:32.171 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[3d14238b-f66c-49cb-b217-bebac10a8ee6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:15:32 np0005466030 nova_compute[230518]: 2025-10-02 12:15:32.269 2 INFO nova.virt.libvirt.driver [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Deleting instance files /var/lib/nova/instances/2b86a484-6fc6-4efa-983f-fb93053b0874_del#033[00m
Oct  2 08:15:32 np0005466030 nova_compute[230518]: 2025-10-02 12:15:32.271 2 INFO nova.virt.libvirt.driver [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Deletion of /var/lib/nova/instances/2b86a484-6fc6-4efa-983f-fb93053b0874_del complete#033[00m
Oct  2 08:15:32 np0005466030 nova_compute[230518]: 2025-10-02 12:15:32.320 2 INFO nova.compute.manager [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:15:32 np0005466030 nova_compute[230518]: 2025-10-02 12:15:32.321 2 DEBUG oslo.service.loopingcall [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:15:32 np0005466030 nova_compute[230518]: 2025-10-02 12:15:32.321 2 DEBUG nova.compute.manager [-] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:15:32 np0005466030 nova_compute[230518]: 2025-10-02 12:15:32.322 2 DEBUG nova.network.neutron [-] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:15:32 np0005466030 systemd[1]: run-netns-ovnmeta\x2dbdc26f36\x2d19a2\x2d41f9\x2d8f78\x2d61503fbb20a7.mount: Deactivated successfully.
Oct  2 08:15:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:15:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:32.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:15:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:15:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:32.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:15:33 np0005466030 nova_compute[230518]: 2025-10-02 12:15:33.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e153 e153: 3 total, 3 up, 3 in
Oct  2 08:15:33 np0005466030 nova_compute[230518]: 2025-10-02 12:15:33.278 2 INFO nova.virt.libvirt.driver [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Snapshot image upload complete#033[00m
Oct  2 08:15:33 np0005466030 nova_compute[230518]: 2025-10-02 12:15:33.278 2 INFO nova.compute.manager [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Took 6.28 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 08:15:34 np0005466030 nova_compute[230518]: 2025-10-02 12:15:34.064 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:34 np0005466030 nova_compute[230518]: 2025-10-02 12:15:34.072 2 DEBUG nova.compute.manager [req-121db445-6c1b-45f0-8c8e-6aead8d280f7 req-9cb163ed-e3bc-49bf-86c7-cc42be7949b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Received event network-vif-plugged-8879d541-1199-497a-b096-b45e17e4df04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:15:34 np0005466030 nova_compute[230518]: 2025-10-02 12:15:34.072 2 DEBUG oslo_concurrency.lockutils [req-121db445-6c1b-45f0-8c8e-6aead8d280f7 req-9cb163ed-e3bc-49bf-86c7-cc42be7949b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:34 np0005466030 nova_compute[230518]: 2025-10-02 12:15:34.073 2 DEBUG oslo_concurrency.lockutils [req-121db445-6c1b-45f0-8c8e-6aead8d280f7 req-9cb163ed-e3bc-49bf-86c7-cc42be7949b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:34 np0005466030 nova_compute[230518]: 2025-10-02 12:15:34.073 2 DEBUG oslo_concurrency.lockutils [req-121db445-6c1b-45f0-8c8e-6aead8d280f7 req-9cb163ed-e3bc-49bf-86c7-cc42be7949b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:34 np0005466030 nova_compute[230518]: 2025-10-02 12:15:34.074 2 DEBUG nova.compute.manager [req-121db445-6c1b-45f0-8c8e-6aead8d280f7 req-9cb163ed-e3bc-49bf-86c7-cc42be7949b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] No waiting events found dispatching network-vif-plugged-8879d541-1199-497a-b096-b45e17e4df04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:15:34 np0005466030 nova_compute[230518]: 2025-10-02 12:15:34.074 2 WARNING nova.compute.manager [req-121db445-6c1b-45f0-8c8e-6aead8d280f7 req-9cb163ed-e3bc-49bf-86c7-cc42be7949b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Received unexpected event network-vif-plugged-8879d541-1199-497a-b096-b45e17e4df04 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:15:34 np0005466030 nova_compute[230518]: 2025-10-02 12:15:34.505 2 DEBUG nova.network.neutron [-] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:15:34 np0005466030 nova_compute[230518]: 2025-10-02 12:15:34.526 2 INFO nova.compute.manager [-] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Took 2.20 seconds to deallocate network for instance.#033[00m
Oct  2 08:15:34 np0005466030 nova_compute[230518]: 2025-10-02 12:15:34.571 2 DEBUG oslo_concurrency.lockutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:34 np0005466030 nova_compute[230518]: 2025-10-02 12:15:34.571 2 DEBUG oslo_concurrency.lockutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:34 np0005466030 nova_compute[230518]: 2025-10-02 12:15:34.779 2 DEBUG oslo_concurrency.processutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:34.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:34.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:35 np0005466030 nova_compute[230518]: 2025-10-02 12:15:35.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:15:35 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3853434744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:15:35 np0005466030 nova_compute[230518]: 2025-10-02 12:15:35.233 2 DEBUG oslo_concurrency.processutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:35 np0005466030 nova_compute[230518]: 2025-10-02 12:15:35.238 2 DEBUG nova.compute.provider_tree [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:15:35 np0005466030 nova_compute[230518]: 2025-10-02 12:15:35.252 2 DEBUG nova.scheduler.client.report [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:15:35 np0005466030 nova_compute[230518]: 2025-10-02 12:15:35.277 2 DEBUG oslo_concurrency.lockutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:35 np0005466030 nova_compute[230518]: 2025-10-02 12:15:35.314 2 INFO nova.scheduler.client.report [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Deleted allocations for instance 2b86a484-6fc6-4efa-983f-fb93053b0874#033[00m
Oct  2 08:15:35 np0005466030 nova_compute[230518]: 2025-10-02 12:15:35.419 2 DEBUG oslo_concurrency.lockutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e154 e154: 3 total, 3 up, 3 in
Oct  2 08:15:36 np0005466030 nova_compute[230518]: 2025-10-02 12:15:36.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:36 np0005466030 nova_compute[230518]: 2025-10-02 12:15:36.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:36 np0005466030 nova_compute[230518]: 2025-10-02 12:15:36.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:36 np0005466030 nova_compute[230518]: 2025-10-02 12:15:36.742 2 DEBUG nova.compute.manager [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:36 np0005466030 nova_compute[230518]: 2025-10-02 12:15:36.798 2 INFO nova.compute.manager [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] instance snapshotting#033[00m
Oct  2 08:15:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:36.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:36.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:37 np0005466030 nova_compute[230518]: 2025-10-02 12:15:37.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:37 np0005466030 nova_compute[230518]: 2025-10-02 12:15:37.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:37 np0005466030 nova_compute[230518]: 2025-10-02 12:15:37.195 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:37 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:37Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cc:6f:ab 10.100.0.13
Oct  2 08:15:37 np0005466030 ovn_controller[129257]: 2025-10-02T12:15:37Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cc:6f:ab 10.100.0.13
Oct  2 08:15:37 np0005466030 nova_compute[230518]: 2025-10-02 12:15:37.322 2 INFO nova.virt.libvirt.driver [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Beginning live snapshot process#033[00m
Oct  2 08:15:37 np0005466030 nova_compute[230518]: 2025-10-02 12:15:37.468 2 DEBUG nova.virt.libvirt.imagebackend [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:15:37 np0005466030 nova_compute[230518]: 2025-10-02 12:15:37.710 2 DEBUG nova.storage.rbd_utils [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] creating snapshot(330bd4c8e7db498babe577123d235fe2) on rbd image(01eee71c-078c-41f4-a1c1-4591cab7195e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:15:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.095 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.096 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e155 e155: 3 total, 3 up, 3 in
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.124 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.124 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.125 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.125 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.125 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:15:38 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3304084597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.615 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.766 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.767 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.770 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.770 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.773 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.773 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:15:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:38.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.928 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.930 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4277MB free_disk=20.866817474365234GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.930 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:38 np0005466030 nova_compute[230518]: 2025-10-02 12:15:38.931 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:38.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:39 np0005466030 nova_compute[230518]: 2025-10-02 12:15:39.004 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance fcfe251b-73c3-4310-b646-3c6c0a8c7e6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:15:39 np0005466030 nova_compute[230518]: 2025-10-02 12:15:39.004 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance bb6a3b63-8cda-41b6-ac43-6f9d310fad2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:15:39 np0005466030 nova_compute[230518]: 2025-10-02 12:15:39.004 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 01eee71c-078c-41f4-a1c1-4591cab7195e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:15:39 np0005466030 nova_compute[230518]: 2025-10-02 12:15:39.005 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:15:39 np0005466030 nova_compute[230518]: 2025-10-02 12:15:39.005 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:15:39 np0005466030 nova_compute[230518]: 2025-10-02 12:15:39.054 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e156 e156: 3 total, 3 up, 3 in
Oct  2 08:15:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:15:39 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2642791850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:15:39 np0005466030 nova_compute[230518]: 2025-10-02 12:15:39.501 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:39 np0005466030 nova_compute[230518]: 2025-10-02 12:15:39.507 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:15:39 np0005466030 nova_compute[230518]: 2025-10-02 12:15:39.532 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:15:39 np0005466030 nova_compute[230518]: 2025-10-02 12:15:39.554 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:15:39 np0005466030 nova_compute[230518]: 2025-10-02 12:15:39.555 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:39 np0005466030 nova_compute[230518]: 2025-10-02 12:15:39.555 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:39 np0005466030 nova_compute[230518]: 2025-10-02 12:15:39.556 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:15:39 np0005466030 nova_compute[230518]: 2025-10-02 12:15:39.570 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:15:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:15:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 11K writes, 46K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 3265 syncs, 3.60 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5392 writes, 20K keys, 5392 commit groups, 1.0 writes per commit group, ingest: 23.58 MB, 0.04 MB/s#012Interval WAL: 5392 writes, 2136 syncs, 2.52 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 08:15:40 np0005466030 nova_compute[230518]: 2025-10-02 12:15:40.227 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "80f9c3a4-aadc-4519-a451-8ce36d37b598" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:40 np0005466030 nova_compute[230518]: 2025-10-02 12:15:40.228 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:40 np0005466030 nova_compute[230518]: 2025-10-02 12:15:40.256 2 DEBUG nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:15:40 np0005466030 nova_compute[230518]: 2025-10-02 12:15:40.313 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:40 np0005466030 nova_compute[230518]: 2025-10-02 12:15:40.314 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:40 np0005466030 nova_compute[230518]: 2025-10-02 12:15:40.319 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:15:40 np0005466030 nova_compute[230518]: 2025-10-02 12:15:40.319 2 INFO nova.compute.claims [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:15:40 np0005466030 nova_compute[230518]: 2025-10-02 12:15:40.460 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:40 np0005466030 nova_compute[230518]: 2025-10-02 12:15:40.527 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:15:40 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4150019923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:15:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:40.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:40 np0005466030 nova_compute[230518]: 2025-10-02 12:15:40.899 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:40 np0005466030 nova_compute[230518]: 2025-10-02 12:15:40.904 2 DEBUG nova.compute.provider_tree [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:15:40 np0005466030 nova_compute[230518]: 2025-10-02 12:15:40.937 2 DEBUG nova.scheduler.client.report [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:15:40 np0005466030 nova_compute[230518]: 2025-10-02 12:15:40.982 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:40 np0005466030 nova_compute[230518]: 2025-10-02 12:15:40.983 2 DEBUG nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:15:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:41.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.035 2 DEBUG nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.035 2 DEBUG nova.network.neutron [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.054 2 INFO nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.070 2 DEBUG nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.154 2 DEBUG nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.156 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.157 2 INFO nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Creating image(s)#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.187 2 DEBUG nova.storage.rbd_utils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.215 2 DEBUG nova.storage.rbd_utils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.243 2 DEBUG nova.storage.rbd_utils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.246 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.307 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.308 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.309 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.310 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.335 2 DEBUG nova.storage.rbd_utils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.339 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.692 2 DEBUG nova.network.neutron [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.693 2 DEBUG nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.824 2 DEBUG nova.storage.rbd_utils [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] cloning vms/01eee71c-078c-41f4-a1c1-4591cab7195e_disk@330bd4c8e7db498babe577123d235fe2 to images/28a7bd8e-0ddc-4cda-9f64-7c1162716074 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:15:41 np0005466030 nova_compute[230518]: 2025-10-02 12:15:41.965 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.023 2 DEBUG nova.storage.rbd_utils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] resizing rbd image 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.108 2 DEBUG nova.objects.instance [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'migration_context' on Instance uuid 80f9c3a4-aadc-4519-a451-8ce36d37b598 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.121 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.121 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Ensure instance console log exists: /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.122 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.122 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.123 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.125 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.130 2 WARNING nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.134 2 DEBUG nova.virt.libvirt.host [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.134 2 DEBUG nova.virt.libvirt.host [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.137 2 DEBUG nova.virt.libvirt.host [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.137 2 DEBUG nova.virt.libvirt.host [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.139 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.139 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:15:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bba9bb99-43dc-47a6-9261-f8d87f6d4f9b',id=28,is_public=True,memory_mb=128,name='tempest-test_resize_flavor_-525834944',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.139 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.140 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.140 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.140 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.140 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.141 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.141 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.141 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.142 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.142 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.145 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.521 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407327.5200894, b8f8f97e-2823-451c-ab36-7f94ade8be46 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.522 2 INFO nova.compute.manager [-] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:15:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:15:42 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1971211061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.552 2 DEBUG nova.compute.manager [None req-f43bd273-936c-4b3c-9f0e-39a0d34460df - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.573 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.596 2 DEBUG nova.storage.rbd_utils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:15:42 np0005466030 nova_compute[230518]: 2025-10-02 12:15:42.600 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999983s ======
Oct  2 08:15:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:42.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999983s
Oct  2 08:15:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:15:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:43.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:15:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:15:43 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/623290968' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:15:43 np0005466030 nova_compute[230518]: 2025-10-02 12:15:43.034 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:43 np0005466030 nova_compute[230518]: 2025-10-02 12:15:43.036 2 DEBUG nova.objects.instance [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 80f9c3a4-aadc-4519-a451-8ce36d37b598 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:15:43 np0005466030 nova_compute[230518]: 2025-10-02 12:15:43.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:43.042 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:15:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:43.043 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:15:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:15:43.043 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:15:43 np0005466030 nova_compute[230518]: 2025-10-02 12:15:43.058 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:15:43 np0005466030 nova_compute[230518]:  <uuid>80f9c3a4-aadc-4519-a451-8ce36d37b598</uuid>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:  <name>instance-00000018</name>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <nova:name>tempest-MigrationsAdminTest-server-201463142</nova:name>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:15:42</nova:creationTime>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <nova:flavor name="tempest-test_resize_flavor_-525834944">
Oct  2 08:15:43 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:        <nova:user uuid="ac1b39d94ed94e2490ad953afb3c225f">tempest-MigrationsAdminTest-1653457839-project-member</nova:user>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:        <nova:project uuid="3d306048f2854052ba5317253b834aa7">tempest-MigrationsAdminTest-1653457839</nova:project>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <nova:ports/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <entry name="serial">80f9c3a4-aadc-4519-a451-8ce36d37b598</entry>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <entry name="uuid">80f9c3a4-aadc-4519-a451-8ce36d37b598</entry>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/80f9c3a4-aadc-4519-a451-8ce36d37b598_disk">
Oct  2 08:15:43 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:15:43 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/80f9c3a4-aadc-4519-a451-8ce36d37b598_disk.config">
Oct  2 08:15:43 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:15:43 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/console.log" append="off"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:15:43 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:15:43 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:15:43 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:15:43 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:15:43 np0005466030 nova_compute[230518]: 2025-10-02 12:15:43.132 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:15:43 np0005466030 nova_compute[230518]: 2025-10-02 12:15:43.133 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:15:43 np0005466030 nova_compute[230518]: 2025-10-02 12:15:43.134 2 INFO nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Using config drive#033[00m
Oct  2 08:15:43 np0005466030 nova_compute[230518]: 2025-10-02 12:15:43.169 2 DEBUG nova.storage.rbd_utils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:15:43 np0005466030 nova_compute[230518]: 2025-10-02 12:15:43.587 2 INFO nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Creating config drive at /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/disk.config#033[00m
Oct  2 08:15:43 np0005466030 nova_compute[230518]: 2025-10-02 12:15:43.592 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdy3d4_jn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:43 np0005466030 nova_compute[230518]: 2025-10-02 12:15:43.719 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdy3d4_jn" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:43 np0005466030 nova_compute[230518]: 2025-10-02 12:15:43.753 2 DEBUG nova.storage.rbd_utils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:15:43 np0005466030 nova_compute[230518]: 2025-10-02 12:15:43.758 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/disk.config 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:43 np0005466030 nova_compute[230518]: 2025-10-02 12:15:43.947 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/disk.config 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:43 np0005466030 nova_compute[230518]: 2025-10-02 12:15:43.948 2 INFO nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Deleting local config drive /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/disk.config because it was imported into RBD.#033[00m
Oct  2 08:15:44 np0005466030 systemd-machined[188247]: New machine qemu-13-instance-00000018.
Oct  2 08:15:44 np0005466030 systemd[1]: Started Virtual Machine qemu-13-instance-00000018.
Oct  2 08:15:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e157 e157: 3 total, 3 up, 3 in
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.700 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.727 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Triggering sync for uuid fcfe251b-73c3-4310-b646-3c6c0a8c7e6e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.727 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Triggering sync for uuid bb6a3b63-8cda-41b6-ac43-6f9d310fad2a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.727 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Triggering sync for uuid 01eee71c-078c-41f4-a1c1-4591cab7195e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.727 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Triggering sync for uuid 80f9c3a4-aadc-4519-a451-8ce36d37b598 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.729 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.729 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "01eee71c-078c-41f4-a1c1-4591cab7195e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.729 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.730 2 INFO nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] During sync_power_state the instance has a pending task (image_uploading). Skip.#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.730 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.730 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "80f9c3a4-aadc-4519-a451-8ce36d37b598" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.794 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.795 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.895 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407344.8951168, 80f9c3a4-aadc-4519-a451-8ce36d37b598 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.896 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.898 2 DEBUG nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.898 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.901 2 INFO nova.virt.libvirt.driver [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance spawned successfully.#033[00m
Oct  2 08:15:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.901 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:15:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:44.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.921 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.921 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.922 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.922 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.923 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.923 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.926 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.928 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.987 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.987 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407344.8957467, 80f9c3a4-aadc-4519-a451-8ce36d37b598 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:15:44 np0005466030 nova_compute[230518]: 2025-10-02 12:15:44.988 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] VM Started (Lifecycle Event)#033[00m
Oct  2 08:15:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:45.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:45 np0005466030 nova_compute[230518]: 2025-10-02 12:15:45.013 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:45 np0005466030 nova_compute[230518]: 2025-10-02 12:15:45.016 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:15:45 np0005466030 nova_compute[230518]: 2025-10-02 12:15:45.019 2 INFO nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Took 3.86 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:15:45 np0005466030 nova_compute[230518]: 2025-10-02 12:15:45.019 2 DEBUG nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:45 np0005466030 nova_compute[230518]: 2025-10-02 12:15:45.052 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:15:45 np0005466030 nova_compute[230518]: 2025-10-02 12:15:45.101 2 INFO nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Took 4.80 seconds to build instance.#033[00m
Oct  2 08:15:45 np0005466030 nova_compute[230518]: 2025-10-02 12:15:45.117 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:45 np0005466030 nova_compute[230518]: 2025-10-02 12:15:45.117 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.387s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:15:45 np0005466030 nova_compute[230518]: 2025-10-02 12:15:45.118 2 INFO nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:15:45 np0005466030 nova_compute[230518]: 2025-10-02 12:15:45.118 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:15:46 np0005466030 nova_compute[230518]: 2025-10-02 12:15:46.627 2 DEBUG nova.storage.rbd_utils [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] flattening images/28a7bd8e-0ddc-4cda-9f64-7c1162716074 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:15:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:46.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:15:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:47.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:15:47 np0005466030 nova_compute[230518]: 2025-10-02 12:15:47.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:47 np0005466030 nova_compute[230518]: 2025-10-02 12:15:47.175 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407331.6597059, 2b86a484-6fc6-4efa-983f-fb93053b0874 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:15:47 np0005466030 nova_compute[230518]: 2025-10-02 12:15:47.175 2 INFO nova.compute.manager [-] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:15:47 np0005466030 nova_compute[230518]: 2025-10-02 12:15:47.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:15:47 np0005466030 nova_compute[230518]: 2025-10-02 12:15:47.200 2 DEBUG nova.compute.manager [None req-5f076fb4-99dd-4052-a2e7-8a143953df25 - - - - - -] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:15:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:48.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:49.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e158 e158: 3 total, 3 up, 3 in
Oct  2 08:15:50 np0005466030 nova_compute[230518]: 2025-10-02 12:15:50.311 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:15:50 np0005466030 nova_compute[230518]: 2025-10-02 12:15:50.312 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:15:50 np0005466030 nova_compute[230518]: 2025-10-02 12:15:50.312 2 DEBUG nova.network.neutron [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:15:50 np0005466030 nova_compute[230518]: 2025-10-02 12:15:50.334 2 DEBUG nova.storage.rbd_utils [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] removing snapshot(330bd4c8e7db498babe577123d235fe2) on rbd image(01eee71c-078c-41f4-a1c1-4591cab7195e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:15:50 np0005466030 podman[240997]: 2025-10-02 12:15:50.8224703 +0000 UTC m=+0.065289524 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:15:50 np0005466030 podman[240996]: 2025-10-02 12:15:50.851939557 +0000 UTC m=+0.099316375 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct  2 08:15:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:50.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:51.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:51 np0005466030 nova_compute[230518]: 2025-10-02 12:15:51.162 2 DEBUG nova.network.neutron [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:15:51 np0005466030 nova_compute[230518]: 2025-10-02 12:15:51.580 2 DEBUG nova.network.neutron [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:15:51 np0005466030 nova_compute[230518]: 2025-10-02 12:15:51.596 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:15:51 np0005466030 nova_compute[230518]: 2025-10-02 12:15:51.718 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Oct  2 08:15:51 np0005466030 nova_compute[230518]: 2025-10-02 12:15:51.719 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Creating file /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/69657b36550b4a41a2ba62ead760d127.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Oct  2 08:15:51 np0005466030 nova_compute[230518]: 2025-10-02 12:15:51.719 2 DEBUG oslo_concurrency.processutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/69657b36550b4a41a2ba62ead760d127.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:51 np0005466030 nova_compute[230518]: 2025-10-02 12:15:51.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e159 e159: 3 total, 3 up, 3 in
Oct  2 08:15:52 np0005466030 nova_compute[230518]: 2025-10-02 12:15:52.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:52 np0005466030 nova_compute[230518]: 2025-10-02 12:15:52.348 2 DEBUG oslo_concurrency.processutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/69657b36550b4a41a2ba62ead760d127.tmp" returned: 1 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:52 np0005466030 nova_compute[230518]: 2025-10-02 12:15:52.348 2 DEBUG oslo_concurrency.processutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/69657b36550b4a41a2ba62ead760d127.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Oct  2 08:15:52 np0005466030 nova_compute[230518]: 2025-10-02 12:15:52.349 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Creating directory /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Oct  2 08:15:52 np0005466030 nova_compute[230518]: 2025-10-02 12:15:52.349 2 DEBUG oslo_concurrency.processutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:15:52 np0005466030 nova_compute[230518]: 2025-10-02 12:15:52.613 2 DEBUG oslo_concurrency.processutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:15:52 np0005466030 nova_compute[230518]: 2025-10-02 12:15:52.617 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:15:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:52.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:15:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:53.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:15:53 np0005466030 nova_compute[230518]: 2025-10-02 12:15:53.156 2 DEBUG nova.storage.rbd_utils [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] creating snapshot(snap) on rbd image(28a7bd8e-0ddc-4cda-9f64-7c1162716074) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:15:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:15:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:54.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:15:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:55.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e160 e160: 3 total, 3 up, 3 in
Oct  2 08:15:56 np0005466030 nova_compute[230518]: 2025-10-02 12:15:56.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:56.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:15:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:57.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:15:57 np0005466030 nova_compute[230518]: 2025-10-02 12:15:57.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:15:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:15:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e161 e161: 3 total, 3 up, 3 in
Oct  2 08:15:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:58.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:15:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:15:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:15:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:59.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:00 np0005466030 nova_compute[230518]: 2025-10-02 12:16:00.438 2 INFO nova.virt.libvirt.driver [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Snapshot image upload complete#033[00m
Oct  2 08:16:00 np0005466030 nova_compute[230518]: 2025-10-02 12:16:00.439 2 INFO nova.compute.manager [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Took 23.64 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 08:16:00 np0005466030 podman[241059]: 2025-10-02 12:16:00.825754283 +0000 UTC m=+0.076862049 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct  2 08:16:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:00.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:01.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:01 np0005466030 nova_compute[230518]: 2025-10-02 12:16:01.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:02 np0005466030 nova_compute[230518]: 2025-10-02 12:16:02.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:02 np0005466030 nova_compute[230518]: 2025-10-02 12:16:02.656 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct  2 08:16:02 np0005466030 podman[241079]: 2025-10-02 12:16:02.808177332 +0000 UTC m=+0.053230635 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:16:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:02.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:03.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:04.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:05.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:05 np0005466030 nova_compute[230518]: 2025-10-02 12:16:05.346 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "ce696fa7-391a-4679-a805-f85d85077164" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:05 np0005466030 nova_compute[230518]: 2025-10-02 12:16:05.346 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "ce696fa7-391a-4679-a805-f85d85077164" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:05 np0005466030 nova_compute[230518]: 2025-10-02 12:16:05.369 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:16:05 np0005466030 nova_compute[230518]: 2025-10-02 12:16:05.455 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:05 np0005466030 nova_compute[230518]: 2025-10-02 12:16:05.456 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:05 np0005466030 nova_compute[230518]: 2025-10-02 12:16:05.463 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:16:05 np0005466030 nova_compute[230518]: 2025-10-02 12:16:05.464 2 INFO nova.compute.claims [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:16:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e162 e162: 3 total, 3 up, 3 in
Oct  2 08:16:05 np0005466030 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000018.scope: Deactivated successfully.
Oct  2 08:16:05 np0005466030 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000018.scope: Consumed 15.175s CPU time.
Oct  2 08:16:05 np0005466030 systemd-machined[188247]: Machine qemu-13-instance-00000018 terminated.
Oct  2 08:16:05 np0005466030 nova_compute[230518]: 2025-10-02 12:16:05.748 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:16:05 np0005466030 nova_compute[230518]: 2025-10-02 12:16:05.779 2 INFO nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance shutdown successfully after 13 seconds.
Oct  2 08:16:05 np0005466030 nova_compute[230518]: 2025-10-02 12:16:05.786 2 INFO nova.virt.libvirt.driver [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance destroyed successfully.
Oct  2 08:16:05 np0005466030 nova_compute[230518]: 2025-10-02 12:16:05.790 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:16:05 np0005466030 nova_compute[230518]: 2025-10-02 12:16:05.790 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:16:05 np0005466030 nova_compute[230518]: 2025-10-02 12:16:05.883 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "80f9c3a4-aadc-4519-a451-8ce36d37b598-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:05 np0005466030 nova_compute[230518]: 2025-10-02 12:16:05.884 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:05 np0005466030 nova_compute[230518]: 2025-10-02 12:16:05.885 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:16:06 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/70985564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.191 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.196 2 DEBUG nova.compute.provider_tree [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.249 2 DEBUG nova.scheduler.client.report [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.292 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.303 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "936eccfb-dac9-41d4-9d6f-5c3a7769f1e9" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.304 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "936eccfb-dac9-41d4-9d6f-5c3a7769f1e9" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.317 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] No node specified, defaulting to compute-1.ctlplane.example.com _get_nodename /usr/lib/python3.9/site-packages/nova/compute/manager.py:10505
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.384 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "936eccfb-dac9-41d4-9d6f-5c3a7769f1e9" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.384 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.465 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.465 2 DEBUG nova.network.neutron [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.491 2 INFO nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.511 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.679 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.680 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.680 2 INFO nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Creating image(s)
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.706 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image ce696fa7-391a-4679-a805-f85d85077164_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.739 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image ce696fa7-391a-4679-a805-f85d85077164_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.770 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image ce696fa7-391a-4679-a805-f85d85077164_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.773 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.802 2 DEBUG nova.network.neutron [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.803 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.870 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.870 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.871 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.871 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.913 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image ce696fa7-391a-4679-a805-f85d85077164_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:16:06 np0005466030 nova_compute[230518]: 2025-10-02 12:16:06.917 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 ce696fa7-391a-4679-a805-f85d85077164_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:16:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:06.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:07.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:07 np0005466030 nova_compute[230518]: 2025-10-02 12:16:07.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e163 e163: 3 total, 3 up, 3 in
Oct  2 08:16:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:16:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:08.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:16:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:09.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:09 np0005466030 nova_compute[230518]: 2025-10-02 12:16:09.480 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 ce696fa7-391a-4679-a805-f85d85077164_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:16:09 np0005466030 nova_compute[230518]: 2025-10-02 12:16:09.561 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] resizing rbd image ce696fa7-391a-4679-a805-f85d85077164_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.044 2 DEBUG nova.objects.instance [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lazy-loading 'migration_context' on Instance uuid ce696fa7-391a-4679-a805-f85d85077164 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.069 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.070 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Ensure instance console log exists: /var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.070 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.071 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.071 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.072 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.077 2 WARNING nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.082 2 DEBUG nova.virt.libvirt.host [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.083 2 DEBUG nova.virt.libvirt.host [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.086 2 DEBUG nova.virt.libvirt.host [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.087 2 DEBUG nova.virt.libvirt.host [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.088 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.088 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.089 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.089 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.089 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.090 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.090 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.090 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.090 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.091 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.091 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.091 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.094 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:16:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5075 writes, 26K keys, 5075 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s#012Cumulative WAL: 5075 writes, 5075 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1573 writes, 7655 keys, 1573 commit groups, 1.0 writes per commit group, ingest: 16.04 MB, 0.03 MB/s#012Interval WAL: 1573 writes, 1573 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     77.9      0.40              0.09        14    0.028       0      0       0.0       0.0#012  L6      1/0    8.64 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    142.0    117.9      0.92              0.28        13    0.071     61K   6843       0.0       0.0#012 Sum      1/0    8.64 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5     99.4    105.9      1.32              0.37        27    0.049     61K   6843       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   5.2    106.8    109.4      0.47              0.12        10    0.047     25K   2535       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    142.0    117.9      0.92              0.28        13    0.071     61K   6843       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     78.3      0.39              0.09        13    0.030       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.030, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.14 GB write, 0.08 MB/s write, 0.13 GB read, 0.07 MB/s read, 1.3 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 12.26 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000203 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(711,11.76 MB,3.86995%) FilterBlock(27,176.05 KB,0.0565529%) IndexBlock(27,329.98 KB,0.106003%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 08:16:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:16:10 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1338855465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.553 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.581 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image ce696fa7-391a-4679-a805-f85d85077164_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:16:10 np0005466030 nova_compute[230518]: 2025-10-02 12:16:10.587 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:10.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:11.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:16:11 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3250954769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.081 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.083 2 DEBUG nova.objects.instance [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lazy-loading 'pci_devices' on Instance uuid ce696fa7-391a-4679-a805-f85d85077164 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.108 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:16:11 np0005466030 nova_compute[230518]:  <uuid>ce696fa7-391a-4679-a805-f85d85077164</uuid>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:  <name>instance-0000001c</name>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServersOnMultiNodesTest-server-1094595277-2</nova:name>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:16:10</nova:creationTime>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:16:11 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:        <nova:user uuid="27279919e67c49e1a04b6eec249ecc87">tempest-ServersOnMultiNodesTest-348944321-project-member</nova:user>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:        <nova:project uuid="a5ac6058475f4875b46ae8f3c4ff33e8">tempest-ServersOnMultiNodesTest-348944321</nova:project>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <nova:ports/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <entry name="serial">ce696fa7-391a-4679-a805-f85d85077164</entry>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <entry name="uuid">ce696fa7-391a-4679-a805-f85d85077164</entry>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/ce696fa7-391a-4679-a805-f85d85077164_disk">
Oct  2 08:16:11 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:16:11 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/ce696fa7-391a-4679-a805-f85d85077164_disk.config">
Oct  2 08:16:11 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:16:11 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164/console.log" append="off"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:16:11 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:16:11 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:16:11 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:16:11 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.172 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.172 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.173 2 INFO nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Using config drive#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.208 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image ce696fa7-391a-4679-a805-f85d85077164_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.390 2 INFO nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Creating config drive at /var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164/disk.config#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.394 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp__tic_yv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.529 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp__tic_yv" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.564 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image ce696fa7-391a-4679-a805-f85d85077164_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.567 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164/disk.config ce696fa7-391a-4679-a805-f85d85077164_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.646 2 DEBUG oslo_concurrency.lockutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquiring lock "01eee71c-078c-41f4-a1c1-4591cab7195e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.647 2 DEBUG oslo_concurrency.lockutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.648 2 DEBUG oslo_concurrency.lockutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquiring lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.648 2 DEBUG oslo_concurrency.lockutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.649 2 DEBUG oslo_concurrency.lockutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.650 2 INFO nova.compute.manager [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Terminating instance#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.651 2 DEBUG nova.compute.manager [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.718 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164/disk.config ce696fa7-391a-4679-a805-f85d85077164_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.719 2 INFO nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Deleting local config drive /var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164/disk.config because it was imported into RBD.#033[00m
Oct  2 08:16:11 np0005466030 kernel: tap0a7827d1-d2 (unregistering): left promiscuous mode
Oct  2 08:16:11 np0005466030 NetworkManager[44960]: <info>  [1759407371.7402] device (tap0a7827d1-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:16:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:16:11Z|00111|binding|INFO|Releasing lport 0a7827d1-d2e0-4330-b738-ee929dc7af48 from this chassis (sb_readonly=0)
Oct  2 08:16:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:16:11Z|00112|binding|INFO|Setting lport 0a7827d1-d2e0-4330-b738-ee929dc7af48 down in Southbound
Oct  2 08:16:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:16:11Z|00113|binding|INFO|Removing iface tap0a7827d1-d2 ovn-installed in OVS
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:11.772 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:6f:ab 10.100.0.13'], port_security=['fa:16:3e:cc:6f:ab 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '01eee71c-078c-41f4-a1c1-4591cab7195e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd92e60d304e64805972937813fc99606', 'neutron:revision_number': '4', 'neutron:security_group_ids': '13422694-ff96-4d03-9ea0-adedb130ec76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6cf5d7d6-9d03-4d57-a5e5-97ce6dc98b2e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=0a7827d1-d2e0-4330-b738-ee929dc7af48) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:16:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:11.773 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 0a7827d1-d2e0-4330-b738-ee929dc7af48 in datapath 377fcfd9-a6d0-4567-bd23-a9d9c96adbd5 unbound from our chassis#033[00m
Oct  2 08:16:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:11.775 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 377fcfd9-a6d0-4567-bd23-a9d9c96adbd5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:16:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:11.776 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[99b272dd-ab27-44ae-9dc5-02cf87c1581b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:11.776 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5 namespace which is not needed anymore#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:11 np0005466030 systemd-machined[188247]: New machine qemu-14-instance-0000001c.
Oct  2 08:16:11 np0005466030 systemd[1]: Started Virtual Machine qemu-14-instance-0000001c.
Oct  2 08:16:11 np0005466030 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct  2 08:16:11 np0005466030 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000016.scope: Consumed 14.356s CPU time.
Oct  2 08:16:11 np0005466030 systemd-machined[188247]: Machine qemu-11-instance-00000016 terminated.
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.882 2 INFO nova.virt.libvirt.driver [-] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Instance destroyed successfully.#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.884 2 DEBUG nova.objects.instance [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lazy-loading 'resources' on Instance uuid 01eee71c-078c-41f4-a1c1-4591cab7195e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:16:11 np0005466030 neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5[239836]: [NOTICE]   (239840) : haproxy version is 2.8.14-c23fe91
Oct  2 08:16:11 np0005466030 neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5[239836]: [NOTICE]   (239840) : path to executable is /usr/sbin/haproxy
Oct  2 08:16:11 np0005466030 neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5[239836]: [WARNING]  (239840) : Exiting Master process...
Oct  2 08:16:11 np0005466030 neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5[239836]: [ALERT]    (239840) : Current worker (239842) exited with code 143 (Terminated)
Oct  2 08:16:11 np0005466030 neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5[239836]: [WARNING]  (239840) : All workers exited. Exiting... (0)
Oct  2 08:16:11 np0005466030 systemd[1]: libpod-c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe.scope: Deactivated successfully.
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.907 2 DEBUG nova.virt.libvirt.vif [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:15:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1959096416',display_name='tempest-ImagesOneServerTestJSON-server-1959096416',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1959096416',id=22,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:15:24Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d92e60d304e64805972937813fc99606',ramdisk_id='',reservation_id='r-nww065mo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-572210404',owner_user_name='tempest-ImagesOneServerTestJSON-572210404-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:16:00Z,user_data=None,user_id='79b88925d1704f5c9b3d2114c1a9ae4f',uuid=01eee71c-078c-41f4-a1c1-4591cab7195e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.907 2 DEBUG nova.network.os_vif_util [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Converting VIF {"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.908 2 DEBUG nova.network.os_vif_util [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6f:ab,bridge_name='br-int',has_traffic_filtering=True,id=0a7827d1-d2e0-4330-b738-ee929dc7af48,network=Network(377fcfd9-a6d0-4567-bd23-a9d9c96adbd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a7827d1-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.909 2 DEBUG os_vif [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6f:ab,bridge_name='br-int',has_traffic_filtering=True,id=0a7827d1-d2e0-4330-b738-ee929dc7af48,network=Network(377fcfd9-a6d0-4567-bd23-a9d9c96adbd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a7827d1-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.911 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a7827d1-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:11 np0005466030 podman[241450]: 2025-10-02 12:16:11.913213853 +0000 UTC m=+0.048608840 container died c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:16:11 np0005466030 nova_compute[230518]: 2025-10-02 12:16:11.916 2 INFO os_vif [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6f:ab,bridge_name='br-int',has_traffic_filtering=True,id=0a7827d1-d2e0-4330-b738-ee929dc7af48,network=Network(377fcfd9-a6d0-4567-bd23-a9d9c96adbd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a7827d1-d2')#033[00m
Oct  2 08:16:11 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe-userdata-shm.mount: Deactivated successfully.
Oct  2 08:16:11 np0005466030 systemd[1]: var-lib-containers-storage-overlay-71439b31039a617ef0686867304ef27ca8200532d38131be38c55e1cda6f86ef-merged.mount: Deactivated successfully.
Oct  2 08:16:11 np0005466030 podman[241450]: 2025-10-02 12:16:11.958456616 +0000 UTC m=+0.093851603 container cleanup c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:16:11 np0005466030 systemd[1]: libpod-conmon-c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe.scope: Deactivated successfully.
Oct  2 08:16:12 np0005466030 podman[241505]: 2025-10-02 12:16:12.017593666 +0000 UTC m=+0.039442611 container remove c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:16:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:12.023 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a3fe1998-8eda-48e7-acd3-7168c9b562fb]: (4, ('Thu Oct  2 12:16:11 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5 (c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe)\nc62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe\nThu Oct  2 12:16:11 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5 (c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe)\nc62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:12.024 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f9574594-66ba-4b22-8d1e-382f3246e381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:12.025 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap377fcfd9-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:12 np0005466030 kernel: tap377fcfd9-a0: left promiscuous mode
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:12.053 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[104488f1-530e-435e-9592-dc20ef5f3e33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:12.079 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[039395ca-4c05-4cc5-9168-bb90629acfe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:12.082 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b743b5-97a2-4be1-a5b2-d719e4c8cbcc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:12.097 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[58c4ba06-d412-463f-b3ee-a8a8bf3deb0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518695, 'reachable_time': 40754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241560, 'error': None, 'target': 'ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:12 np0005466030 systemd[1]: run-netns-ovnmeta\x2d377fcfd9\x2da6d0\x2d4567\x2dbd23\x2da9d9c96adbd5.mount: Deactivated successfully.
Oct  2 08:16:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:12.102 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:16:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:12.102 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[45a774b5-cb35-4c72-bbe7-5eddaa252ad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.571 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407372.5715215, ce696fa7-391a-4679-a805-f85d85077164 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.572 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ce696fa7-391a-4679-a805-f85d85077164] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.575 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.575 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.578 2 INFO nova.virt.libvirt.driver [-] [instance: ce696fa7-391a-4679-a805-f85d85077164] Instance spawned successfully.#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.578 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.624 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ce696fa7-391a-4679-a805-f85d85077164] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.628 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.628 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.629 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.630 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.630 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.631 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.635 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ce696fa7-391a-4679-a805-f85d85077164] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.684 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ce696fa7-391a-4679-a805-f85d85077164] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.684 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407372.5725222, ce696fa7-391a-4679-a805-f85d85077164 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.684 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ce696fa7-391a-4679-a805-f85d85077164] VM Started (Lifecycle Event)#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.722 2 INFO nova.virt.libvirt.driver [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Deleting instance files /var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e_del#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.723 2 INFO nova.virt.libvirt.driver [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Deletion of /var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e_del complete#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.729 2 INFO nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Took 6.05 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.729 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.731 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ce696fa7-391a-4679-a805-f85d85077164] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.736 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ce696fa7-391a-4679-a805-f85d85077164] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.777 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ce696fa7-391a-4679-a805-f85d85077164] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.786 2 DEBUG nova.compute.manager [req-6a13efbd-4562-4196-bf84-86ddbba50fc8 req-a5968c78-1471-4fdc-87f5-ae17ffa47f18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Received event network-vif-unplugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.787 2 DEBUG oslo_concurrency.lockutils [req-6a13efbd-4562-4196-bf84-86ddbba50fc8 req-a5968c78-1471-4fdc-87f5-ae17ffa47f18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.787 2 DEBUG oslo_concurrency.lockutils [req-6a13efbd-4562-4196-bf84-86ddbba50fc8 req-a5968c78-1471-4fdc-87f5-ae17ffa47f18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.787 2 DEBUG oslo_concurrency.lockutils [req-6a13efbd-4562-4196-bf84-86ddbba50fc8 req-a5968c78-1471-4fdc-87f5-ae17ffa47f18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.787 2 DEBUG nova.compute.manager [req-6a13efbd-4562-4196-bf84-86ddbba50fc8 req-a5968c78-1471-4fdc-87f5-ae17ffa47f18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] No waiting events found dispatching network-vif-unplugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.787 2 DEBUG nova.compute.manager [req-6a13efbd-4562-4196-bf84-86ddbba50fc8 req-a5968c78-1471-4fdc-87f5-ae17ffa47f18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Received event network-vif-unplugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.809 2 INFO nova.compute.manager [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Took 1.16 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.809 2 DEBUG oslo.service.loopingcall [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.809 2 DEBUG nova.compute.manager [-] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.810 2 DEBUG nova.network.neutron [-] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.812 2 INFO nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Took 7.39 seconds to build instance.#033[00m
Oct  2 08:16:12 np0005466030 nova_compute[230518]: 2025-10-02 12:16:12.850 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "ce696fa7-391a-4679-a805-f85d85077164" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:12.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:13.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e164 e164: 3 total, 3 up, 3 in
Oct  2 08:16:13 np0005466030 nova_compute[230518]: 2025-10-02 12:16:13.826 2 DEBUG nova.network.neutron [-] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:13 np0005466030 nova_compute[230518]: 2025-10-02 12:16:13.864 2 DEBUG nova.compute.manager [req-91159b48-a14e-42ac-ba68-ef7ea96f4628 req-4cd22576-5394-4d7f-aca3-8410274ac3a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Received event network-vif-deleted-0a7827d1-d2e0-4330-b738-ee929dc7af48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:13 np0005466030 nova_compute[230518]: 2025-10-02 12:16:13.864 2 INFO nova.compute.manager [req-91159b48-a14e-42ac-ba68-ef7ea96f4628 req-4cd22576-5394-4d7f-aca3-8410274ac3a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Neutron deleted interface 0a7827d1-d2e0-4330-b738-ee929dc7af48; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:16:13 np0005466030 nova_compute[230518]: 2025-10-02 12:16:13.865 2 DEBUG nova.network.neutron [req-91159b48-a14e-42ac-ba68-ef7ea96f4628 req-4cd22576-5394-4d7f-aca3-8410274ac3a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:13 np0005466030 nova_compute[230518]: 2025-10-02 12:16:13.953 2 INFO nova.compute.manager [-] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Took 1.14 seconds to deallocate network for instance.#033[00m
Oct  2 08:16:13 np0005466030 nova_compute[230518]: 2025-10-02 12:16:13.962 2 DEBUG nova.compute.manager [req-91159b48-a14e-42ac-ba68-ef7ea96f4628 req-4cd22576-5394-4d7f-aca3-8410274ac3a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Detach interface failed, port_id=0a7827d1-d2e0-4330-b738-ee929dc7af48, reason: Instance 01eee71c-078c-41f4-a1c1-4591cab7195e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:16:14 np0005466030 nova_compute[230518]: 2025-10-02 12:16:14.106 2 DEBUG oslo_concurrency.lockutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:14 np0005466030 nova_compute[230518]: 2025-10-02 12:16:14.107 2 DEBUG oslo_concurrency.lockutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:14 np0005466030 nova_compute[230518]: 2025-10-02 12:16:14.244 2 DEBUG oslo_concurrency.processutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:16:14 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/540730844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:16:14 np0005466030 nova_compute[230518]: 2025-10-02 12:16:14.681 2 DEBUG oslo_concurrency.processutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:14 np0005466030 nova_compute[230518]: 2025-10-02 12:16:14.686 2 DEBUG nova.compute.provider_tree [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:16:14 np0005466030 nova_compute[230518]: 2025-10-02 12:16:14.719 2 DEBUG nova.scheduler.client.report [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:16:14 np0005466030 nova_compute[230518]: 2025-10-02 12:16:14.744 2 INFO nova.compute.manager [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Swapping old allocation on dict_keys(['730da6ce-9754-46f0-88e3-0019d056443f']) held by migration e3ae1389-09cd-481c-9d83-8b061ca8b765 for instance#033[00m
Oct  2 08:16:14 np0005466030 nova_compute[230518]: 2025-10-02 12:16:14.754 2 DEBUG oslo_concurrency.lockutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:14 np0005466030 nova_compute[230518]: 2025-10-02 12:16:14.810 2 DEBUG nova.scheduler.client.report [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Overwriting current allocation {'allocations': {'8733289a-aa77-4139-9e88-bac686174c8d': {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}, 'generation': 15}}, 'project_id': '3d306048f2854052ba5317253b834aa7', 'user_id': 'ac1b39d94ed94e2490ad953afb3c225f', 'consumer_generation': 1} on consumer 80f9c3a4-aadc-4519-a451-8ce36d37b598 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018#033[00m
Oct  2 08:16:14 np0005466030 nova_compute[230518]: 2025-10-02 12:16:14.813 2 INFO nova.scheduler.client.report [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Deleted allocations for instance 01eee71c-078c-41f4-a1c1-4591cab7195e#033[00m
Oct  2 08:16:14 np0005466030 nova_compute[230518]: 2025-10-02 12:16:14.898 2 DEBUG nova.compute.manager [req-86177c48-c7d6-40ac-82ed-7590d22a7056 req-62bc6f8e-eb02-436f-a7ab-46e868529c86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Received event network-vif-plugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:16:14 np0005466030 nova_compute[230518]: 2025-10-02 12:16:14.899 2 DEBUG oslo_concurrency.lockutils [req-86177c48-c7d6-40ac-82ed-7590d22a7056 req-62bc6f8e-eb02-436f-a7ab-46e868529c86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:14 np0005466030 nova_compute[230518]: 2025-10-02 12:16:14.899 2 DEBUG oslo_concurrency.lockutils [req-86177c48-c7d6-40ac-82ed-7590d22a7056 req-62bc6f8e-eb02-436f-a7ab-46e868529c86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:14 np0005466030 nova_compute[230518]: 2025-10-02 12:16:14.900 2 DEBUG oslo_concurrency.lockutils [req-86177c48-c7d6-40ac-82ed-7590d22a7056 req-62bc6f8e-eb02-436f-a7ab-46e868529c86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:14 np0005466030 nova_compute[230518]: 2025-10-02 12:16:14.900 2 DEBUG nova.compute.manager [req-86177c48-c7d6-40ac-82ed-7590d22a7056 req-62bc6f8e-eb02-436f-a7ab-46e868529c86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] No waiting events found dispatching network-vif-plugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:16:14 np0005466030 nova_compute[230518]: 2025-10-02 12:16:14.900 2 WARNING nova.compute.manager [req-86177c48-c7d6-40ac-82ed-7590d22a7056 req-62bc6f8e-eb02-436f-a7ab-46e868529c86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Received unexpected event network-vif-plugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:16:14 np0005466030 nova_compute[230518]: 2025-10-02 12:16:14.938 2 DEBUG oslo_concurrency.lockutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:16:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:14.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:16:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:15.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:15 np0005466030 nova_compute[230518]: 2025-10-02 12:16:15.058 2 DEBUG oslo_concurrency.lockutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:16:15 np0005466030 nova_compute[230518]: 2025-10-02 12:16:15.058 2 DEBUG oslo_concurrency.lockutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:16:15 np0005466030 nova_compute[230518]: 2025-10-02 12:16:15.058 2 DEBUG nova.network.neutron [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:16:15 np0005466030 nova_compute[230518]: 2025-10-02 12:16:15.174 2 DEBUG nova.network.neutron [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:16:15 np0005466030 nova_compute[230518]: 2025-10-02 12:16:15.406 2 DEBUG nova.network.neutron [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:15 np0005466030 nova_compute[230518]: 2025-10-02 12:16:15.441 2 DEBUG oslo_concurrency.lockutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:16:15 np0005466030 nova_compute[230518]: 2025-10-02 12:16:15.442 2 DEBUG nova.virt.libvirt.driver [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843#033[00m
Oct  2 08:16:15 np0005466030 nova_compute[230518]: 2025-10-02 12:16:15.471 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "ce696fa7-391a-4679-a805-f85d85077164" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:15 np0005466030 nova_compute[230518]: 2025-10-02 12:16:15.472 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "ce696fa7-391a-4679-a805-f85d85077164" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:15 np0005466030 nova_compute[230518]: 2025-10-02 12:16:15.473 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "ce696fa7-391a-4679-a805-f85d85077164-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:15 np0005466030 nova_compute[230518]: 2025-10-02 12:16:15.473 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "ce696fa7-391a-4679-a805-f85d85077164-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:15 np0005466030 nova_compute[230518]: 2025-10-02 12:16:15.474 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "ce696fa7-391a-4679-a805-f85d85077164-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:15 np0005466030 nova_compute[230518]: 2025-10-02 12:16:15.475 2 INFO nova.compute.manager [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Terminating instance#033[00m
Oct  2 08:16:15 np0005466030 nova_compute[230518]: 2025-10-02 12:16:15.477 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "refresh_cache-ce696fa7-391a-4679-a805-f85d85077164" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:16:15 np0005466030 nova_compute[230518]: 2025-10-02 12:16:15.477 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquired lock "refresh_cache-ce696fa7-391a-4679-a805-f85d85077164" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:16:15 np0005466030 nova_compute[230518]: 2025-10-02 12:16:15.478 2 DEBUG nova.network.neutron [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:16:15 np0005466030 nova_compute[230518]: 2025-10-02 12:16:15.525 2 DEBUG nova.storage.rbd_utils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rolling back rbd image(80f9c3a4-aadc-4519-a451-8ce36d37b598_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505#033[00m
Oct  2 08:16:15 np0005466030 nova_compute[230518]: 2025-10-02 12:16:15.705 2 DEBUG nova.network.neutron [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:16:16 np0005466030 nova_compute[230518]: 2025-10-02 12:16:16.032 2 DEBUG nova.network.neutron [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:16 np0005466030 nova_compute[230518]: 2025-10-02 12:16:16.054 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Releasing lock "refresh_cache-ce696fa7-391a-4679-a805-f85d85077164" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:16:16 np0005466030 nova_compute[230518]: 2025-10-02 12:16:16.054 2 DEBUG nova.compute.manager [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:16:16 np0005466030 nova_compute[230518]: 2025-10-02 12:16:16.116 2 DEBUG nova.storage.rbd_utils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] removing snapshot(nova-resize) on rbd image(80f9c3a4-aadc-4519-a451-8ce36d37b598_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:16:16 np0005466030 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Oct  2 08:16:16 np0005466030 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001c.scope: Consumed 4.274s CPU time.
Oct  2 08:16:16 np0005466030 systemd-machined[188247]: Machine qemu-14-instance-0000001c terminated.
Oct  2 08:16:16 np0005466030 nova_compute[230518]: 2025-10-02 12:16:16.476 2 INFO nova.virt.libvirt.driver [-] [instance: ce696fa7-391a-4679-a805-f85d85077164] Instance destroyed successfully.#033[00m
Oct  2 08:16:16 np0005466030 nova_compute[230518]: 2025-10-02 12:16:16.477 2 DEBUG nova.objects.instance [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lazy-loading 'resources' on Instance uuid ce696fa7-391a-4679-a805-f85d85077164 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:16:16 np0005466030 nova_compute[230518]: 2025-10-02 12:16:16.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:16 np0005466030 nova_compute[230518]: 2025-10-02 12:16:16.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:16.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:17.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:17 np0005466030 podman[242050]: 2025-10-02 12:16:17.147787956 +0000 UTC m=+0.040352361 container create 73acb9f64856652f209b1b6f42105f8eaf9692f085987286c1d045f77b93ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cori, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  2 08:16:17 np0005466030 systemd[1]: Started libpod-conmon-73acb9f64856652f209b1b6f42105f8eaf9692f085987286c1d045f77b93ef62.scope.
Oct  2 08:16:17 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:16:17 np0005466030 podman[242050]: 2025-10-02 12:16:17.2175514 +0000 UTC m=+0.110115805 container init 73acb9f64856652f209b1b6f42105f8eaf9692f085987286c1d045f77b93ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 08:16:17 np0005466030 podman[242050]: 2025-10-02 12:16:17.225035165 +0000 UTC m=+0.117599570 container start 73acb9f64856652f209b1b6f42105f8eaf9692f085987286c1d045f77b93ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  2 08:16:17 np0005466030 podman[242050]: 2025-10-02 12:16:17.131248595 +0000 UTC m=+0.023813020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 08:16:17 np0005466030 podman[242050]: 2025-10-02 12:16:17.227703359 +0000 UTC m=+0.120267794 container attach 73acb9f64856652f209b1b6f42105f8eaf9692f085987286c1d045f77b93ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 08:16:17 np0005466030 beautiful_cori[242066]: 167 167
Oct  2 08:16:17 np0005466030 systemd[1]: libpod-73acb9f64856652f209b1b6f42105f8eaf9692f085987286c1d045f77b93ef62.scope: Deactivated successfully.
Oct  2 08:16:17 np0005466030 podman[242050]: 2025-10-02 12:16:17.230956031 +0000 UTC m=+0.123520436 container died 73acb9f64856652f209b1b6f42105f8eaf9692f085987286c1d045f77b93ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 08:16:17 np0005466030 systemd[1]: var-lib-containers-storage-overlay-5b6e36be807881c84d3f7aba8ab500306c5c7ec5674520b22a7b8214f9fcc011-merged.mount: Deactivated successfully.
Oct  2 08:16:17 np0005466030 podman[242050]: 2025-10-02 12:16:17.274689237 +0000 UTC m=+0.167253642 container remove 73acb9f64856652f209b1b6f42105f8eaf9692f085987286c1d045f77b93ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cori, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 08:16:17 np0005466030 systemd[1]: libpod-conmon-73acb9f64856652f209b1b6f42105f8eaf9692f085987286c1d045f77b93ef62.scope: Deactivated successfully.
Oct  2 08:16:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e165 e165: 3 total, 3 up, 3 in
Oct  2 08:16:17 np0005466030 podman[242090]: 2025-10-02 12:16:17.422132325 +0000 UTC m=+0.038372699 container create 606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jackson, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 08:16:17 np0005466030 systemd[1]: Started libpod-conmon-606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a.scope.
Oct  2 08:16:17 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:16:17 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7756c3c29ceb3494cea1585079bc7578dba351ecabe8f913ae58132d0d3e3b39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 08:16:17 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7756c3c29ceb3494cea1585079bc7578dba351ecabe8f913ae58132d0d3e3b39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 08:16:17 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7756c3c29ceb3494cea1585079bc7578dba351ecabe8f913ae58132d0d3e3b39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 08:16:17 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7756c3c29ceb3494cea1585079bc7578dba351ecabe8f913ae58132d0d3e3b39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 08:16:17 np0005466030 podman[242090]: 2025-10-02 12:16:17.405867052 +0000 UTC m=+0.022107446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 08:16:17 np0005466030 podman[242090]: 2025-10-02 12:16:17.50658566 +0000 UTC m=+0.122826054 container init 606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:16:17 np0005466030 podman[242090]: 2025-10-02 12:16:17.512418234 +0000 UTC m=+0.128658608 container start 606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 08:16:17 np0005466030 podman[242090]: 2025-10-02 12:16:17.516263534 +0000 UTC m=+0.132503918 container attach 606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jackson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.690 2 DEBUG nova.virt.libvirt.driver [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.695 2 WARNING nova.virt.libvirt.driver [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.699 2 DEBUG nova.virt.libvirt.host [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.700 2 DEBUG nova.virt.libvirt.host [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.703 2 DEBUG nova.virt.libvirt.host [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.703 2 DEBUG nova.virt.libvirt.host [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.704 2 DEBUG nova.virt.libvirt.driver [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.704 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:15:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bba9bb99-43dc-47a6-9261-f8d87f6d4f9b',id=28,is_public=True,memory_mb=128,name='tempest-test_resize_flavor_-525834944',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.705 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.705 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.705 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.705 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.705 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.706 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.706 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.706 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.706 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.706 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.707 2 DEBUG nova.objects.instance [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 80f9c3a4-aadc-4519-a451-8ce36d37b598 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:16:17 np0005466030 nova_compute[230518]: 2025-10-02 12:16:17.733 2 DEBUG oslo_concurrency.processutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:16:18 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/164602721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:16:18 np0005466030 nova_compute[230518]: 2025-10-02 12:16:18.148 2 DEBUG oslo_concurrency.processutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:18 np0005466030 nova_compute[230518]: 2025-10-02 12:16:18.183 2 DEBUG oslo_concurrency.processutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:16:18 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1953203193' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]: [
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:    {
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:        "available": false,
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:        "ceph_device": false,
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:        "lsm_data": {},
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:        "lvs": [],
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:        "path": "/dev/sr0",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:        "rejected_reasons": [
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "Insufficient space (<5GB)",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "Has a FileSystem"
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:        ],
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:        "sys_api": {
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "actuators": null,
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "device_nodes": "sr0",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "devname": "sr0",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "human_readable_size": "482.00 KB",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "id_bus": "ata",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "model": "QEMU DVD-ROM",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "nr_requests": "2",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "parent": "/dev/sr0",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "partitions": {},
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "path": "/dev/sr0",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "removable": "1",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "rev": "2.5+",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "ro": "0",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "rotational": "0",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "sas_address": "",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "sas_device_handle": "",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "scheduler_mode": "mq-deadline",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "sectors": 0,
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "sectorsize": "2048",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "size": 493568.0,
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "support_discard": "2048",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "type": "disk",
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:            "vendor": "QEMU"
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:        }
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]:    }
Oct  2 08:16:18 np0005466030 condescending_jackson[242106]: ]
Oct  2 08:16:18 np0005466030 systemd[1]: libpod-606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a.scope: Deactivated successfully.
Oct  2 08:16:18 np0005466030 systemd[1]: libpod-606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a.scope: Consumed 1.164s CPU time.
Oct  2 08:16:18 np0005466030 podman[242090]: 2025-10-02 12:16:18.70557035 +0000 UTC m=+1.321810744 container died 606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 08:16:18 np0005466030 systemd[1]: var-lib-containers-storage-overlay-7756c3c29ceb3494cea1585079bc7578dba351ecabe8f913ae58132d0d3e3b39-merged.mount: Deactivated successfully.
Oct  2 08:16:18 np0005466030 nova_compute[230518]: 2025-10-02 12:16:18.733 2 DEBUG oslo_concurrency.processutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:18 np0005466030 nova_compute[230518]: 2025-10-02 12:16:18.736 2 DEBUG nova.virt.libvirt.driver [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:16:18 np0005466030 nova_compute[230518]:  <uuid>80f9c3a4-aadc-4519-a451-8ce36d37b598</uuid>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:  <name>instance-00000018</name>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <nova:name>tempest-MigrationsAdminTest-server-201463142</nova:name>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:16:17</nova:creationTime>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <nova:flavor name="tempest-test_resize_flavor_-525834944">
Oct  2 08:16:18 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:        <nova:user uuid="ac1b39d94ed94e2490ad953afb3c225f">tempest-MigrationsAdminTest-1653457839-project-member</nova:user>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:        <nova:project uuid="3d306048f2854052ba5317253b834aa7">tempest-MigrationsAdminTest-1653457839</nova:project>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <nova:ports/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <entry name="serial">80f9c3a4-aadc-4519-a451-8ce36d37b598</entry>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <entry name="uuid">80f9c3a4-aadc-4519-a451-8ce36d37b598</entry>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/80f9c3a4-aadc-4519-a451-8ce36d37b598_disk">
Oct  2 08:16:18 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:16:18 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/80f9c3a4-aadc-4519-a451-8ce36d37b598_disk.config">
Oct  2 08:16:18 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:16:18 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/console.log" append="off"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <input type="keyboard" bus="usb"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:16:18 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:16:18 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:16:18 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:16:18 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:16:18 np0005466030 podman[242090]: 2025-10-02 12:16:18.758345209 +0000 UTC m=+1.374585583 container remove 606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jackson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 08:16:18 np0005466030 systemd[1]: libpod-conmon-606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a.scope: Deactivated successfully.
Oct  2 08:16:18 np0005466030 systemd-machined[188247]: New machine qemu-15-instance-00000018.
Oct  2 08:16:18 np0005466030 systemd[1]: Started Virtual Machine qemu-15-instance-00000018.
Oct  2 08:16:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:18.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:19.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:19 np0005466030 nova_compute[230518]: 2025-10-02 12:16:19.648 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 80f9c3a4-aadc-4519-a451-8ce36d37b598 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:16:19 np0005466030 nova_compute[230518]: 2025-10-02 12:16:19.648 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407379.647571, 80f9c3a4-aadc-4519-a451-8ce36d37b598 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:16:19 np0005466030 nova_compute[230518]: 2025-10-02 12:16:19.648 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:16:19 np0005466030 nova_compute[230518]: 2025-10-02 12:16:19.650 2 DEBUG nova.compute.manager [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:16:19 np0005466030 nova_compute[230518]: 2025-10-02 12:16:19.653 2 INFO nova.virt.libvirt.driver [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance running successfully.#033[00m
Oct  2 08:16:19 np0005466030 nova_compute[230518]: 2025-10-02 12:16:19.654 2 DEBUG nova.virt.libvirt.driver [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887#033[00m
Oct  2 08:16:19 np0005466030 nova_compute[230518]: 2025-10-02 12:16:19.675 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:16:19 np0005466030 nova_compute[230518]: 2025-10-02 12:16:19.678 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:16:19 np0005466030 nova_compute[230518]: 2025-10-02 12:16:19.693 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] During sync_power_state the instance has a pending task (resize_reverting). Skip.#033[00m
Oct  2 08:16:19 np0005466030 nova_compute[230518]: 2025-10-02 12:16:19.694 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407379.6485198, 80f9c3a4-aadc-4519-a451-8ce36d37b598 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:16:19 np0005466030 nova_compute[230518]: 2025-10-02 12:16:19.694 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] VM Started (Lifecycle Event)#033[00m
Oct  2 08:16:19 np0005466030 nova_compute[230518]: 2025-10-02 12:16:19.725 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:16:19 np0005466030 nova_compute[230518]: 2025-10-02 12:16:19.728 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:16:19 np0005466030 nova_compute[230518]: 2025-10-02 12:16:19.751 2 INFO nova.compute.manager [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Updating instance to original state: 'active'#033[00m
Oct  2 08:16:19 np0005466030 nova_compute[230518]: 2025-10-02 12:16:19.755 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] During sync_power_state the instance has a pending task (resize_reverting). Skip.#033[00m
Oct  2 08:16:20 np0005466030 nova_compute[230518]: 2025-10-02 12:16:20.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:20.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:16:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:21.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:16:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:21 np0005466030 nova_compute[230518]: 2025-10-02 12:16:21.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:21 np0005466030 podman[243543]: 2025-10-02 12:16:21.816094169 +0000 UTC m=+0.058902144 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:16:21 np0005466030 podman[243542]: 2025-10-02 12:16:21.852010538 +0000 UTC m=+0.094674279 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 08:16:21 np0005466030 nova_compute[230518]: 2025-10-02 12:16:21.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:22.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:22 np0005466030 nova_compute[230518]: 2025-10-02 12:16:22.969 2 INFO nova.virt.libvirt.driver [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Deleting instance files /var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164_del#033[00m
Oct  2 08:16:22 np0005466030 nova_compute[230518]: 2025-10-02 12:16:22.969 2 INFO nova.virt.libvirt.driver [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Deletion of /var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164_del complete#033[00m
Oct  2 08:16:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:23.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:23 np0005466030 nova_compute[230518]: 2025-10-02 12:16:23.225 2 INFO nova.compute.manager [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Took 7.17 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:16:23 np0005466030 nova_compute[230518]: 2025-10-02 12:16:23.225 2 DEBUG oslo.service.loopingcall [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:16:23 np0005466030 nova_compute[230518]: 2025-10-02 12:16:23.225 2 DEBUG nova.compute.manager [-] [instance: ce696fa7-391a-4679-a805-f85d85077164] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:16:23 np0005466030 nova_compute[230518]: 2025-10-02 12:16:23.225 2 DEBUG nova.network.neutron [-] [instance: ce696fa7-391a-4679-a805-f85d85077164] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:16:23 np0005466030 nova_compute[230518]: 2025-10-02 12:16:23.684 2 DEBUG nova.network.neutron [-] [instance: ce696fa7-391a-4679-a805-f85d85077164] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:16:23 np0005466030 nova_compute[230518]: 2025-10-02 12:16:23.810 2 DEBUG nova.network.neutron [-] [instance: ce696fa7-391a-4679-a805-f85d85077164] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e166 e166: 3 total, 3 up, 3 in
Oct  2 08:16:23 np0005466030 nova_compute[230518]: 2025-10-02 12:16:23.949 2 INFO nova.compute.manager [-] [instance: ce696fa7-391a-4679-a805-f85d85077164] Took 0.72 seconds to deallocate network for instance.#033[00m
Oct  2 08:16:24 np0005466030 nova_compute[230518]: 2025-10-02 12:16:24.049 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:24 np0005466030 nova_compute[230518]: 2025-10-02 12:16:24.050 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:24 np0005466030 nova_compute[230518]: 2025-10-02 12:16:24.147 2 DEBUG oslo_concurrency.processutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:16:24 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3110913110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:16:24 np0005466030 nova_compute[230518]: 2025-10-02 12:16:24.632 2 DEBUG oslo_concurrency.processutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:16:24 np0005466030 nova_compute[230518]: 2025-10-02 12:16:24.638 2 DEBUG nova.compute.provider_tree [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:16:24 np0005466030 nova_compute[230518]: 2025-10-02 12:16:24.667 2 DEBUG nova.scheduler.client.report [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:16:24 np0005466030 nova_compute[230518]: 2025-10-02 12:16:24.688 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:24 np0005466030 nova_compute[230518]: 2025-10-02 12:16:24.725 2 INFO nova.scheduler.client.report [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Deleted allocations for instance ce696fa7-391a-4679-a805-f85d85077164
Oct  2 08:16:24 np0005466030 nova_compute[230518]: 2025-10-02 12:16:24.799 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "ce696fa7-391a-4679-a805-f85d85077164" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct  2 08:16:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:24.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct  2 08:16:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:16:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:25.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:16:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:25.915 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:25.916 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:25.916 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:16:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:16:26 np0005466030 nova_compute[230518]: 2025-10-02 12:16:26.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:26 np0005466030 nova_compute[230518]: 2025-10-02 12:16:26.880 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407371.8786888, 01eee71c-078c-41f4-a1c1-4591cab7195e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:16:26 np0005466030 nova_compute[230518]: 2025-10-02 12:16:26.881 2 INFO nova.compute.manager [-] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] VM Stopped (Lifecycle Event)
Oct  2 08:16:26 np0005466030 nova_compute[230518]: 2025-10-02 12:16:26.913 2 DEBUG nova.compute.manager [None req-d88c6c71-d658-4175-82ea-4a82e087e44d - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:16:26 np0005466030 nova_compute[230518]: 2025-10-02 12:16:26.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:16:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:26.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:16:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:27.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:28.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:16:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:29.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:16:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:29.998 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:16:30 np0005466030 nova_compute[230518]: 2025-10-02 12:16:29.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:30 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:29.999 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:16:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:30.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:31.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:31 np0005466030 nova_compute[230518]: 2025-10-02 12:16:31.474 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407376.4730577, ce696fa7-391a-4679-a805-f85d85077164 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:16:31 np0005466030 nova_compute[230518]: 2025-10-02 12:16:31.475 2 INFO nova.compute.manager [-] [instance: ce696fa7-391a-4679-a805-f85d85077164] VM Stopped (Lifecycle Event)
Oct  2 08:16:31 np0005466030 nova_compute[230518]: 2025-10-02 12:16:31.528 2 DEBUG nova.compute.manager [None req-65beeeb1-44de-4937-9134-6bd759f84758 - - - - - -] [instance: ce696fa7-391a-4679-a805-f85d85077164] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:16:31 np0005466030 nova_compute[230518]: 2025-10-02 12:16:31.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:31 np0005466030 podman[243608]: 2025-10-02 12:16:31.824892215 +0000 UTC m=+0.061403762 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  2 08:16:31 np0005466030 nova_compute[230518]: 2025-10-02 12:16:31.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:32.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:33.001 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:16:33 np0005466030 nova_compute[230518]: 2025-10-02 12:16:33.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:16:33 np0005466030 nova_compute[230518]: 2025-10-02 12:16:33.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 08:16:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:33.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:33 np0005466030 podman[243628]: 2025-10-02 12:16:33.800071497 +0000 UTC m=+0.054837646 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct  2 08:16:34 np0005466030 nova_compute[230518]: 2025-10-02 12:16:34.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:16:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:16:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:34.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:16:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:35.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:36 np0005466030 nova_compute[230518]: 2025-10-02 12:16:36.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:36 np0005466030 nova_compute[230518]: 2025-10-02 12:16:36.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:16:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:16:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:36.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:16:37 np0005466030 nova_compute[230518]: 2025-10-02 12:16:37.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:16:37 np0005466030 nova_compute[230518]: 2025-10-02 12:16:37.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:16:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:37.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:38 np0005466030 nova_compute[230518]: 2025-10-02 12:16:38.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:16:38 np0005466030 nova_compute[230518]: 2025-10-02 12:16:38.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:16:38 np0005466030 nova_compute[230518]: 2025-10-02 12:16:38.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 08:16:38 np0005466030 nova_compute[230518]: 2025-10-02 12:16:38.317 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:16:38 np0005466030 nova_compute[230518]: 2025-10-02 12:16:38.317 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:16:38 np0005466030 nova_compute[230518]: 2025-10-02 12:16:38.318 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  2 08:16:38 np0005466030 nova_compute[230518]: 2025-10-02 12:16:38.318 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fcfe251b-73c3-4310-b646-3c6c0a8c7e6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:16:38 np0005466030 nova_compute[230518]: 2025-10-02 12:16:38.594 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:16:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:38.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:16:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:39.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:16:39 np0005466030 nova_compute[230518]: 2025-10-02 12:16:39.082 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:16:39 np0005466030 nova_compute[230518]: 2025-10-02 12:16:39.233 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:16:39 np0005466030 nova_compute[230518]: 2025-10-02 12:16:39.233 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  2 08:16:39 np0005466030 nova_compute[230518]: 2025-10-02 12:16:39.234 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:16:39 np0005466030 nova_compute[230518]: 2025-10-02 12:16:39.234 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:16:39 np0005466030 nova_compute[230518]: 2025-10-02 12:16:39.235 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:16:39 np0005466030 nova_compute[230518]: 2025-10-02 12:16:39.365 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:39 np0005466030 nova_compute[230518]: 2025-10-02 12:16:39.366 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:39 np0005466030 nova_compute[230518]: 2025-10-02 12:16:39.366 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:16:39 np0005466030 nova_compute[230518]: 2025-10-02 12:16:39.366 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  2 08:16:39 np0005466030 nova_compute[230518]: 2025-10-02 12:16:39.367 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:16:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:16:39 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/762298545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:16:39 np0005466030 nova_compute[230518]: 2025-10-02 12:16:39.797 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:16:39 np0005466030 nova_compute[230518]: 2025-10-02 12:16:39.985 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:16:39 np0005466030 nova_compute[230518]: 2025-10-02 12:16:39.985 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:16:39 np0005466030 nova_compute[230518]: 2025-10-02 12:16:39.988 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:16:39 np0005466030 nova_compute[230518]: 2025-10-02 12:16:39.988 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:16:39 np0005466030 nova_compute[230518]: 2025-10-02 12:16:39.991 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:16:39 np0005466030 nova_compute[230518]: 2025-10-02 12:16:39.991 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:16:40 np0005466030 nova_compute[230518]: 2025-10-02 12:16:40.148 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:16:40 np0005466030 nova_compute[230518]: 2025-10-02 12:16:40.149 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4225MB free_disk=20.76431655883789GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  2 08:16:40 np0005466030 nova_compute[230518]: 2025-10-02 12:16:40.149 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:16:40 np0005466030 nova_compute[230518]: 2025-10-02 12:16:40.149 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:16:40 np0005466030 nova_compute[230518]: 2025-10-02 12:16:40.307 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance fcfe251b-73c3-4310-b646-3c6c0a8c7e6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 08:16:40 np0005466030 nova_compute[230518]: 2025-10-02 12:16:40.307 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance bb6a3b63-8cda-41b6-ac43-6f9d310fad2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 08:16:40 np0005466030 nova_compute[230518]: 2025-10-02 12:16:40.307 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 80f9c3a4-aadc-4519-a451-8ce36d37b598 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 08:16:40 np0005466030 nova_compute[230518]: 2025-10-02 12:16:40.319 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Creating tmpfile /var/lib/nova/instances/tmpgk1qnbid to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Oct  2 08:16:40 np0005466030 nova_compute[230518]: 2025-10-02 12:16:40.319 2 DEBUG nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpgk1qnbid',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Oct  2 08:16:40 np0005466030 nova_compute[230518]: 2025-10-02 12:16:40.347 2 WARNING nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance a114d722-ceac-442e-8b38-c2892fda526b has been moved to another host compute-0.ctlplane.example.com(compute-0.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}.#033[00m
Oct  2 08:16:40 np0005466030 nova_compute[230518]: 2025-10-02 12:16:40.347 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:16:40 np0005466030 nova_compute[230518]: 2025-10-02 12:16:40.347 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:16:40 np0005466030 nova_compute[230518]: 2025-10-02 12:16:40.500 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:16:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:16:40 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1031066231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:16:40 np0005466030 nova_compute[230518]: 2025-10-02 12:16:40.936 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:16:40 np0005466030 nova_compute[230518]: 2025-10-02 12:16:40.943 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:16:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:40.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:41 np0005466030 nova_compute[230518]: 2025-10-02 12:16:41.004 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:16:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:41.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:41 np0005466030 nova_compute[230518]: 2025-10-02 12:16:41.123 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:16:41 np0005466030 nova_compute[230518]: 2025-10-02 12:16:41.123 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:41 np0005466030 nova_compute[230518]: 2025-10-02 12:16:41.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:41 np0005466030 nova_compute[230518]: 2025-10-02 12:16:41.936 2 DEBUG nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpgk1qnbid',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='a114d722-ceac-442e-8b38-c2892fda526b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Oct  2 08:16:41 np0005466030 nova_compute[230518]: 2025-10-02 12:16:41.941 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:16:41 np0005466030 nova_compute[230518]: 2025-10-02 12:16:41.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:42 np0005466030 nova_compute[230518]: 2025-10-02 12:16:42.043 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:16:42 np0005466030 nova_compute[230518]: 2025-10-02 12:16:42.043 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquired lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:16:42 np0005466030 nova_compute[230518]: 2025-10-02 12:16:42.043 2 DEBUG nova.network.neutron [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:16:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:16:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:42.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:43.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.396 2 DEBUG nova.network.neutron [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Updating instance_info_cache with network_info: [{"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.467 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Releasing lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.470 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpgk1qnbid',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='a114d722-ceac-442e-8b38-c2892fda526b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.471 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Creating instance directory: /var/lib/nova/instances/a114d722-ceac-442e-8b38-c2892fda526b pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.472 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Ensure instance console log exists: /var/lib/nova/instances/a114d722-ceac-442e-8b38-c2892fda526b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.472 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.474 2 DEBUG nova.virt.libvirt.vif [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-2023518062',display_name='tempest-LiveMigrationTest-server-2023518062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-2023518062',id=29,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:16:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-f60c0zik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_m
in_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1880928942',owner_user_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:16:36Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=a114d722-ceac-442e-8b38-c2892fda526b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.475 2 DEBUG nova.network.os_vif_util [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converting VIF {"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.476 2 DEBUG nova.network.os_vif_util [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.476 2 DEBUG os_vif [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.483 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap965edc3f-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.484 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap965edc3f-df, col_values=(('external_ids', {'iface-id': '965edc3f-df96-430d-8b4b-4f3dbb19e9de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:7b:22', 'vm-uuid': 'a114d722-ceac-442e-8b38-c2892fda526b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:43 np0005466030 NetworkManager[44960]: <info>  [1759407403.4875] manager: (tap965edc3f-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.501 2 INFO os_vif [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df')#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.502 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct  2 08:16:43 np0005466030 nova_compute[230518]: 2025-10-02 12:16:43.503 2 DEBUG nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpgk1qnbid',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='a114d722-ceac-442e-8b38-c2892fda526b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct  2 08:16:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:16:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:44.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:16:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:16:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:16:45 np0005466030 nova_compute[230518]: 2025-10-02 12:16:45.423 2 DEBUG nova.network.neutron [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Port 965edc3f-df96-430d-8b4b-4f3dbb19e9de updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct  2 08:16:45 np0005466030 nova_compute[230518]: 2025-10-02 12:16:45.425 2 DEBUG nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpgk1qnbid',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='a114d722-ceac-442e-8b38-c2892fda526b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct  2 08:16:45 np0005466030 kernel: tap965edc3f-df: entered promiscuous mode
Oct  2 08:16:45 np0005466030 NetworkManager[44960]: <info>  [1759407405.7580] manager: (tap965edc3f-df): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Oct  2 08:16:45 np0005466030 nova_compute[230518]: 2025-10-02 12:16:45.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:45 np0005466030 ovn_controller[129257]: 2025-10-02T12:16:45Z|00114|binding|INFO|Claiming lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de for this additional chassis.
Oct  2 08:16:45 np0005466030 ovn_controller[129257]: 2025-10-02T12:16:45Z|00115|binding|INFO|965edc3f-df96-430d-8b4b-4f3dbb19e9de: Claiming fa:16:3e:bf:7b:22 10.100.0.10
Oct  2 08:16:45 np0005466030 ovn_controller[129257]: 2025-10-02T12:16:45Z|00116|binding|INFO|Claiming lport 92466114-86f5-4a18-ad64-93c2127fe0d3 for this additional chassis.
Oct  2 08:16:45 np0005466030 ovn_controller[129257]: 2025-10-02T12:16:45Z|00117|binding|INFO|92466114-86f5-4a18-ad64-93c2127fe0d3: Claiming fa:16:3e:bd:c1:f4 19.80.0.104
Oct  2 08:16:45 np0005466030 nova_compute[230518]: 2025-10-02 12:16:45.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:45 np0005466030 systemd-udevd[243757]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:16:45 np0005466030 systemd-machined[188247]: New machine qemu-16-instance-0000001d.
Oct  2 08:16:45 np0005466030 NetworkManager[44960]: <info>  [1759407405.8074] device (tap965edc3f-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:16:45 np0005466030 NetworkManager[44960]: <info>  [1759407405.8085] device (tap965edc3f-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:16:45 np0005466030 systemd[1]: Started Virtual Machine qemu-16-instance-0000001d.
Oct  2 08:16:45 np0005466030 nova_compute[230518]: 2025-10-02 12:16:45.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:45 np0005466030 ovn_controller[129257]: 2025-10-02T12:16:45Z|00118|binding|INFO|Setting lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de ovn-installed in OVS
Oct  2 08:16:45 np0005466030 nova_compute[230518]: 2025-10-02 12:16:45.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:46 np0005466030 nova_compute[230518]: 2025-10-02 12:16:46.780 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407406.7805657, a114d722-ceac-442e-8b38-c2892fda526b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:16:46 np0005466030 nova_compute[230518]: 2025-10-02 12:16:46.781 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] VM Started (Lifecycle Event)#033[00m
Oct  2 08:16:46 np0005466030 nova_compute[230518]: 2025-10-02 12:16:46.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:46 np0005466030 nova_compute[230518]: 2025-10-02 12:16:46.844 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:16:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:47.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:47.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:47 np0005466030 nova_compute[230518]: 2025-10-02 12:16:47.426 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407407.4256682, a114d722-ceac-442e-8b38-c2892fda526b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:16:47 np0005466030 nova_compute[230518]: 2025-10-02 12:16:47.427 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:16:47 np0005466030 nova_compute[230518]: 2025-10-02 12:16:47.472 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:16:47 np0005466030 nova_compute[230518]: 2025-10-02 12:16:47.475 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:16:47 np0005466030 nova_compute[230518]: 2025-10-02 12:16:47.520 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct  2 08:16:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:48 np0005466030 nova_compute[230518]: 2025-10-02 12:16:48.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:16:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:49.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:16:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:16:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:49.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:16:49 np0005466030 ovn_controller[129257]: 2025-10-02T12:16:49Z|00119|binding|INFO|Claiming lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de for this chassis.
Oct  2 08:16:49 np0005466030 ovn_controller[129257]: 2025-10-02T12:16:49Z|00120|binding|INFO|965edc3f-df96-430d-8b4b-4f3dbb19e9de: Claiming fa:16:3e:bf:7b:22 10.100.0.10
Oct  2 08:16:49 np0005466030 ovn_controller[129257]: 2025-10-02T12:16:49Z|00121|binding|INFO|Claiming lport 92466114-86f5-4a18-ad64-93c2127fe0d3 for this chassis.
Oct  2 08:16:49 np0005466030 ovn_controller[129257]: 2025-10-02T12:16:49Z|00122|binding|INFO|92466114-86f5-4a18-ad64-93c2127fe0d3: Claiming fa:16:3e:bd:c1:f4 19.80.0.104
Oct  2 08:16:49 np0005466030 ovn_controller[129257]: 2025-10-02T12:16:49Z|00123|binding|INFO|Setting lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de up in Southbound
Oct  2 08:16:49 np0005466030 ovn_controller[129257]: 2025-10-02T12:16:49Z|00124|binding|INFO|Setting lport 92466114-86f5-4a18-ad64-93c2127fe0d3 up in Southbound
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.499 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:7b:22 10.100.0.10'], port_security=['fa:16:3e:bf:7b:22 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1502630260', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a114d722-ceac-442e-8b38-c2892fda526b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1502630260', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '11', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34cc3fdc-62f5-47cf-be4b-547a25938be9, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=965edc3f-df96-430d-8b4b-4f3dbb19e9de) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.501 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:c1:f4 19.80.0.104'], port_security=['fa:16:3e:bd:c1:f4 19.80.0.104'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['965edc3f-df96-430d-8b4b-4f3dbb19e9de'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-342109323', 'neutron:cidrs': '19.80.0.104/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-342109323', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=0bcf5be3-3921-4228-85d5-12bbaf2eb666, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=92466114-86f5-4a18-ad64-93c2127fe0d3) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.503 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 965edc3f-df96-430d-8b4b-4f3dbb19e9de in datapath 5989958f-ccbb-4db4-8dcb-18563aa2418e bound to our chassis#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.504 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5989958f-ccbb-4db4-8dcb-18563aa2418e#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.519 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[edb67efb-b434-42c4-9ddc-31bda1f951b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.520 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5989958f-c1 in ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.522 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5989958f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.522 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3dff152b-d22c-430f-9dbc-b4b87d0e6e12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.523 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[334fe021-f8b8-4696-9fc2-1550c4d01e57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.535 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2d534a-abd9-4cfe-98d6-69d39a60a2a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.549 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e197cf1f-bab2-49a4-8240-77ed9841c04a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.578 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[63bb50c6-2ba5-4308-97f9-18f4bd0c4080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:49 np0005466030 NetworkManager[44960]: <info>  [1759407409.5849] manager: (tap5989958f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/62)
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.584 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[195909ab-924a-4bd3-867e-980de2631990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:49 np0005466030 systemd-udevd[243816]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.617 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0c3e0798-f389-4e46-b49b-a1178418d34d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.620 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6237ffcf-c01b-406c-8eeb-7e3002fea4be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:49 np0005466030 NetworkManager[44960]: <info>  [1759407409.6456] device (tap5989958f-c0): carrier: link connected
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.651 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[70f0cd3e-9b9a-4972-8c13-34a634a108ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.667 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0a289250-e8cf-4cf2-9332-12447dfc0335]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5989958f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:d2:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527320, 'reachable_time': 20483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243835, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.682 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0468def5-bb0f-45bb-8a6e-03deaf142601]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:d212'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527320, 'tstamp': 527320}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243836, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.696 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3f521c02-d0a3-4465-9785-1f72c794df60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5989958f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:d2:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527320, 'reachable_time': 20483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 243837, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.728 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e57e5815-fc3f-49d6-832c-6b371ae529b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.783 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5b372f-c909-4c7f-8fce-02d3211277fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.786 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5989958f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.786 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.786 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5989958f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:49 np0005466030 nova_compute[230518]: 2025-10-02 12:16:49.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:49 np0005466030 NetworkManager[44960]: <info>  [1759407409.7889] manager: (tap5989958f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Oct  2 08:16:49 np0005466030 kernel: tap5989958f-c0: entered promiscuous mode
Oct  2 08:16:49 np0005466030 nova_compute[230518]: 2025-10-02 12:16:49.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.791 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5989958f-c0, col_values=(('external_ids', {'iface-id': 'c7d8e124-cc34-42e6-82ac-6fdf057166bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:49 np0005466030 nova_compute[230518]: 2025-10-02 12:16:49.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:49 np0005466030 ovn_controller[129257]: 2025-10-02T12:16:49Z|00125|binding|INFO|Releasing lport c7d8e124-cc34-42e6-82ac-6fdf057166bf from this chassis (sb_readonly=0)
Oct  2 08:16:49 np0005466030 nova_compute[230518]: 2025-10-02 12:16:49.796 2 INFO nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Post operation of migration started#033[00m
Oct  2 08:16:49 np0005466030 nova_compute[230518]: 2025-10-02 12:16:49.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:49 np0005466030 nova_compute[230518]: 2025-10-02 12:16:49.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.811 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5989958f-ccbb-4db4-8dcb-18563aa2418e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5989958f-ccbb-4db4-8dcb-18563aa2418e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.812 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a62b4fd3-e638-49a0-b38c-552c1a698ab0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.813 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-5989958f-ccbb-4db4-8dcb-18563aa2418e
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/5989958f-ccbb-4db4-8dcb-18563aa2418e.pid.haproxy
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 5989958f-ccbb-4db4-8dcb-18563aa2418e
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:16:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.814 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'env', 'PROCESS_TAG=haproxy-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5989958f-ccbb-4db4-8dcb-18563aa2418e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:16:50 np0005466030 podman[243869]: 2025-10-02 12:16:50.205415992 +0000 UTC m=+0.065479471 container create 145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 08:16:50 np0005466030 podman[243869]: 2025-10-02 12:16:50.168337695 +0000 UTC m=+0.028401224 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:16:50 np0005466030 systemd[1]: Started libpod-conmon-145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6.scope.
Oct  2 08:16:50 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:16:50 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8333f9bed9a3ff266e399a2d551e934a36972a2ab5ffe8727ce87e882998249b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:16:50 np0005466030 podman[243869]: 2025-10-02 12:16:50.303506346 +0000 UTC m=+0.163569875 container init 145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 08:16:50 np0005466030 podman[243869]: 2025-10-02 12:16:50.310380693 +0000 UTC m=+0.170444162 container start 145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:16:50 np0005466030 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[243884]: [NOTICE]   (243888) : New worker (243890) forked
Oct  2 08:16:50 np0005466030 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[243884]: [NOTICE]   (243888) : Loading success.
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.382 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 92466114-86f5-4a18-ad64-93c2127fe0d3 in datapath dc4336bf-639d-45a4-88f2-32f0af1b9dbe unbound from our chassis#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.384 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dc4336bf-639d-45a4-88f2-32f0af1b9dbe#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.395 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc031e3-0ffd-46d3-aca0-b91a1cf52a10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.396 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdc4336bf-61 in ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.398 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdc4336bf-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.398 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f8fc4403-1902-4743-ac88-c3c507fb8e9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.399 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7571130f-3302-4783-9154-dd9d59956ebc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.410 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[80bc64f1-6e60-4de0-8880-af4af469a91d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:50 np0005466030 nova_compute[230518]: 2025-10-02 12:16:50.420 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:16:50 np0005466030 nova_compute[230518]: 2025-10-02 12:16:50.420 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquired lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:16:50 np0005466030 nova_compute[230518]: 2025-10-02 12:16:50.420 2 DEBUG nova.network.neutron [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.437 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[46b998b3-9d78-4121-b872-3605ec9d2288]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.471 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[39b2c249-2616-43be-893b-aafc3c8730a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.478 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[68ebd278-c0b9-4cae-bc66-976eeaa01cff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:50 np0005466030 NetworkManager[44960]: <info>  [1759407410.4796] manager: (tapdc4336bf-60): new Veth device (/org/freedesktop/NetworkManager/Devices/64)
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.517 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d686ec85-2007-4ba9-8e91-f1103059d386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.519 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[646c58b7-c7b7-4cf8-9656-876422ec65a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:50 np0005466030 NetworkManager[44960]: <info>  [1759407410.5436] device (tapdc4336bf-60): carrier: link connected
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.549 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[373e0b12-35b7-4bf9-b402-0407ac99d9a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.564 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[88a947e5-aed6-42b5-b6d8-a7071dc7b19e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdc4336bf-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:70:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527410, 'reachable_time': 31983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243909, 'error': None, 'target': 'ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.579 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8c6628-4575-4e53-86a6-cf9b24369daa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:708b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527410, 'tstamp': 527410}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243910, 'error': None, 'target': 'ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.594 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[aec9f9ce-5354-42ea-8f02-dd676a631d79]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdc4336bf-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:70:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527410, 'reachable_time': 31983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 243911, 'error': None, 'target': 'ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.623 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[78fecf45-54ad-450b-9bf9-dbe554eb9fef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.671 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ab90f616-3462-47dd-ae65-6d444ce6badb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.673 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc4336bf-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.673 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.674 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc4336bf-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:50 np0005466030 nova_compute[230518]: 2025-10-02 12:16:50.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:50 np0005466030 kernel: tapdc4336bf-60: entered promiscuous mode
Oct  2 08:16:50 np0005466030 NetworkManager[44960]: <info>  [1759407410.6772] manager: (tapdc4336bf-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Oct  2 08:16:50 np0005466030 nova_compute[230518]: 2025-10-02 12:16:50.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.681 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdc4336bf-60, col_values=(('external_ids', {'iface-id': 'c67f345b-5542-4cd7-a60b-7617c8d1414e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:16:50 np0005466030 nova_compute[230518]: 2025-10-02 12:16:50.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:50 np0005466030 ovn_controller[129257]: 2025-10-02T12:16:50Z|00126|binding|INFO|Releasing lport c67f345b-5542-4cd7-a60b-7617c8d1414e from this chassis (sb_readonly=0)
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.684 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dc4336bf-639d-45a4-88f2-32f0af1b9dbe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dc4336bf-639d-45a4-88f2-32f0af1b9dbe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.685 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[83d6d6ff-1d62-4d18-b3af-b0984aff2d12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.686 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-dc4336bf-639d-45a4-88f2-32f0af1b9dbe
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/dc4336bf-639d-45a4-88f2-32f0af1b9dbe.pid.haproxy
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID dc4336bf-639d-45a4-88f2-32f0af1b9dbe
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:16:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.687 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'env', 'PROCESS_TAG=haproxy-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dc4336bf-639d-45a4-88f2-32f0af1b9dbe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:16:50 np0005466030 nova_compute[230518]: 2025-10-02 12:16:50.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:51.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:51 np0005466030 podman[243943]: 2025-10-02 12:16:51.0536642 +0000 UTC m=+0.048741254 container create 21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  2 08:16:51 np0005466030 systemd[1]: Started libpod-conmon-21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7.scope.
Oct  2 08:16:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:51.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:51 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:16:51 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ccfcc3d3923f9105c6b44d482785ea52e49c37aea974d395f37f06c9024bca9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:16:51 np0005466030 podman[243943]: 2025-10-02 12:16:51.026747713 +0000 UTC m=+0.021824767 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:16:51 np0005466030 podman[243943]: 2025-10-02 12:16:51.125242231 +0000 UTC m=+0.120319295 container init 21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:16:51 np0005466030 podman[243943]: 2025-10-02 12:16:51.130160986 +0000 UTC m=+0.125238040 container start 21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:16:51 np0005466030 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[243958]: [NOTICE]   (243962) : New worker (243964) forked
Oct  2 08:16:51 np0005466030 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[243958]: [NOTICE]   (243962) : Loading success.
Oct  2 08:16:51 np0005466030 nova_compute[230518]: 2025-10-02 12:16:51.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:52 np0005466030 nova_compute[230518]: 2025-10-02 12:16:52.329 2 DEBUG nova.network.neutron [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Updating instance_info_cache with network_info: [{"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:16:52 np0005466030 nova_compute[230518]: 2025-10-02 12:16:52.351 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Releasing lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:16:52 np0005466030 nova_compute[230518]: 2025-10-02 12:16:52.378 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:16:52 np0005466030 nova_compute[230518]: 2025-10-02 12:16:52.378 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:16:52 np0005466030 nova_compute[230518]: 2025-10-02 12:16:52.378 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:16:52 np0005466030 nova_compute[230518]: 2025-10-02 12:16:52.382 2 INFO nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct  2 08:16:52 np0005466030 virtqemud[230067]: Domain id=16 name='instance-0000001d' uuid=a114d722-ceac-442e-8b38-c2892fda526b is tainted: custom-monitor
Oct  2 08:16:52 np0005466030 ovn_controller[129257]: 2025-10-02T12:16:52Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:7b:22 10.100.0.10
Oct  2 08:16:52 np0005466030 ovn_controller[129257]: 2025-10-02T12:16:52Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:7b:22 10.100.0.10
Oct  2 08:16:52 np0005466030 podman[243974]: 2025-10-02 12:16:52.802195274 +0000 UTC m=+0.052209223 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:16:52 np0005466030 podman[243973]: 2025-10-02 12:16:52.83419587 +0000 UTC m=+0.084558160 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:16:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:53.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:53.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:53 np0005466030 nova_compute[230518]: 2025-10-02 12:16:53.389 2 INFO nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct  2 08:16:53 np0005466030 nova_compute[230518]: 2025-10-02 12:16:53.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:54 np0005466030 nova_compute[230518]: 2025-10-02 12:16:54.395 2 INFO nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct  2 08:16:54 np0005466030 nova_compute[230518]: 2025-10-02 12:16:54.399 2 DEBUG nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:16:54 np0005466030 nova_compute[230518]: 2025-10-02 12:16:54.420 2 DEBUG nova.objects.instance [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:16:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:55.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:55.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:56 np0005466030 nova_compute[230518]: 2025-10-02 12:16:56.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:16:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:57.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:16:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:57.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:16:58 np0005466030 nova_compute[230518]: 2025-10-02 12:16:58.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:16:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:59.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:16:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:16:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:59.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:16:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e167 e167: 3 total, 3 up, 3 in
Oct  2 08:17:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:01.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:01.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:01 np0005466030 nova_compute[230518]: 2025-10-02 12:17:01.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:02 np0005466030 podman[244016]: 2025-10-02 12:17:02.811105004 +0000 UTC m=+0.062156816 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:17:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:03.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:03.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:03 np0005466030 nova_compute[230518]: 2025-10-02 12:17:03.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:04 np0005466030 podman[244036]: 2025-10-02 12:17:04.795847976 +0000 UTC m=+0.054435244 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct  2 08:17:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:05.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:05.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:06 np0005466030 nova_compute[230518]: 2025-10-02 12:17:06.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:07.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:17:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:07.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:17:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:08 np0005466030 nova_compute[230518]: 2025-10-02 12:17:08.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:09.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:09.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e168 e168: 3 total, 3 up, 3 in
Oct  2 08:17:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:17:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:11.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:17:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:11.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:11 np0005466030 nova_compute[230518]: 2025-10-02 12:17:11.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:13.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:17:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:13.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:17:13 np0005466030 nova_compute[230518]: 2025-10-02 12:17:13.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:14 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct  2 08:17:14 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct  2 08:17:14 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct  2 08:17:14 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct  2 08:17:14 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Oct  2 08:17:14 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct  2 08:17:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:15.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:15.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:15 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Oct  2 08:17:15 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:15.777 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:17:15 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:15.778 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:17:15 np0005466030 nova_compute[230518]: 2025-10-02 12:17:15.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:16 np0005466030 nova_compute[230518]: 2025-10-02 12:17:16.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct  2 08:17:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:17.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct  2 08:17:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:17:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:17.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:17:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:18 np0005466030 nova_compute[230518]: 2025-10-02 12:17:18.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 e169: 3 total, 3 up, 3 in
Oct  2 08:17:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:19.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:19.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:19 np0005466030 nova_compute[230518]: 2025-10-02 12:17:19.212 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "80f9c3a4-aadc-4519-a451-8ce36d37b598" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:19 np0005466030 nova_compute[230518]: 2025-10-02 12:17:19.212 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:19 np0005466030 nova_compute[230518]: 2025-10-02 12:17:19.212 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "80f9c3a4-aadc-4519-a451-8ce36d37b598-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:19 np0005466030 nova_compute[230518]: 2025-10-02 12:17:19.213 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:19 np0005466030 nova_compute[230518]: 2025-10-02 12:17:19.213 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:17:19 np0005466030 nova_compute[230518]: 2025-10-02 12:17:19.214 2 INFO nova.compute.manager [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Terminating instance
Oct  2 08:17:19 np0005466030 nova_compute[230518]: 2025-10-02 12:17:19.214 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:17:19 np0005466030 nova_compute[230518]: 2025-10-02 12:17:19.214 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:17:19 np0005466030 nova_compute[230518]: 2025-10-02 12:17:19.215 2 DEBUG nova.network.neutron [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:17:19 np0005466030 nova_compute[230518]: 2025-10-02 12:17:19.496 2 DEBUG nova.network.neutron [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:17:19 np0005466030 nova_compute[230518]: 2025-10-02 12:17:19.765 2 DEBUG nova.network.neutron [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:17:19 np0005466030 nova_compute[230518]: 2025-10-02 12:17:19.787 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:17:19 np0005466030 nova_compute[230518]: 2025-10-02 12:17:19.788 2 DEBUG nova.compute.manager [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 08:17:19 np0005466030 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000018.scope: Deactivated successfully.
Oct  2 08:17:19 np0005466030 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000018.scope: Consumed 14.504s CPU time.
Oct  2 08:17:19 np0005466030 systemd-machined[188247]: Machine qemu-15-instance-00000018 terminated.
Oct  2 08:17:20 np0005466030 nova_compute[230518]: 2025-10-02 12:17:20.007 2 INFO nova.virt.libvirt.driver [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance destroyed successfully.
Oct  2 08:17:20 np0005466030 nova_compute[230518]: 2025-10-02 12:17:20.008 2 DEBUG nova.objects.instance [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'resources' on Instance uuid 80f9c3a4-aadc-4519-a451-8ce36d37b598 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:17:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:21.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:21.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:21 np0005466030 nova_compute[230518]: 2025-10-02 12:17:21.128 2 INFO nova.virt.libvirt.driver [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Deleting instance files /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598_del
Oct  2 08:17:21 np0005466030 nova_compute[230518]: 2025-10-02 12:17:21.129 2 INFO nova.virt.libvirt.driver [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Deletion of /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598_del complete
Oct  2 08:17:21 np0005466030 nova_compute[230518]: 2025-10-02 12:17:21.177 2 INFO nova.compute.manager [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Took 1.39 seconds to destroy the instance on the hypervisor.
Oct  2 08:17:21 np0005466030 nova_compute[230518]: 2025-10-02 12:17:21.177 2 DEBUG oslo.service.loopingcall [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:17:21 np0005466030 nova_compute[230518]: 2025-10-02 12:17:21.177 2 DEBUG nova.compute.manager [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:17:21 np0005466030 nova_compute[230518]: 2025-10-02 12:17:21.177 2 DEBUG nova.network.neutron [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:17:21 np0005466030 nova_compute[230518]: 2025-10-02 12:17:21.298 2 DEBUG nova.network.neutron [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:17:21 np0005466030 nova_compute[230518]: 2025-10-02 12:17:21.318 2 DEBUG nova.network.neutron [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:17:21 np0005466030 nova_compute[230518]: 2025-10-02 12:17:21.334 2 INFO nova.compute.manager [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Took 0.16 seconds to deallocate network for instance.
Oct  2 08:17:21 np0005466030 nova_compute[230518]: 2025-10-02 12:17:21.394 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:17:21 np0005466030 nova_compute[230518]: 2025-10-02 12:17:21.395 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:17:21 np0005466030 nova_compute[230518]: 2025-10-02 12:17:21.494 2 DEBUG oslo_concurrency.processutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:17:21 np0005466030 nova_compute[230518]: 2025-10-02 12:17:21.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:17:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:17:21 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3296994374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:17:21 np0005466030 nova_compute[230518]: 2025-10-02 12:17:21.941 2 DEBUG oslo_concurrency.processutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:17:21 np0005466030 nova_compute[230518]: 2025-10-02 12:17:21.946 2 DEBUG nova.compute.provider_tree [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:17:21 np0005466030 nova_compute[230518]: 2025-10-02 12:17:21.971 2 DEBUG nova.scheduler.client.report [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:17:22 np0005466030 nova_compute[230518]: 2025-10-02 12:17:22.010 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:17:22 np0005466030 nova_compute[230518]: 2025-10-02 12:17:22.065 2 INFO nova.scheduler.client.report [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Deleted allocations for instance 80f9c3a4-aadc-4519-a451-8ce36d37b598
Oct  2 08:17:22 np0005466030 nova_compute[230518]: 2025-10-02 12:17:22.145 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:17:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:23.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:17:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:23.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:17:23 np0005466030 nova_compute[230518]: 2025-10-02 12:17:23.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:17:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:23.780 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:17:23 np0005466030 podman[244103]: 2025-10-02 12:17:23.81816071 +0000 UTC m=+0.059957557 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:17:23 np0005466030 podman[244102]: 2025-10-02 12:17:23.845059445 +0000 UTC m=+0.088950008 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:17:25 np0005466030 nova_compute[230518]: 2025-10-02 12:17:25.040 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:17:25 np0005466030 nova_compute[230518]: 2025-10-02 12:17:25.041 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:17:25 np0005466030 nova_compute[230518]: 2025-10-02 12:17:25.041 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:17:25 np0005466030 nova_compute[230518]: 2025-10-02 12:17:25.041 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:17:25 np0005466030 nova_compute[230518]: 2025-10-02 12:17:25.042 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:17:25 np0005466030 nova_compute[230518]: 2025-10-02 12:17:25.043 2 INFO nova.compute.manager [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Terminating instance
Oct  2 08:17:25 np0005466030 nova_compute[230518]: 2025-10-02 12:17:25.044 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "refresh_cache-bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:17:25 np0005466030 nova_compute[230518]: 2025-10-02 12:17:25.044 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "refresh_cache-bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:17:25 np0005466030 nova_compute[230518]: 2025-10-02 12:17:25.045 2 DEBUG nova.network.neutron [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:17:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:25.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:25.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:25 np0005466030 nova_compute[230518]: 2025-10-02 12:17:25.234 2 DEBUG nova.network.neutron [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:17:25 np0005466030 nova_compute[230518]: 2025-10-02 12:17:25.659 2 DEBUG nova.network.neutron [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:17:25 np0005466030 nova_compute[230518]: 2025-10-02 12:17:25.724 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "refresh_cache-bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:17:25 np0005466030 nova_compute[230518]: 2025-10-02 12:17:25.725 2 DEBUG nova.compute.manager [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 08:17:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:25.916 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:17:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:25.916 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:17:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:25.917 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:17:26 np0005466030 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000015.scope: Deactivated successfully.
Oct  2 08:17:26 np0005466030 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000015.scope: Consumed 16.521s CPU time.
Oct  2 08:17:26 np0005466030 systemd-machined[188247]: Machine qemu-12-instance-00000015 terminated.
Oct  2 08:17:26 np0005466030 nova_compute[230518]: 2025-10-02 12:17:26.143 2 INFO nova.virt.libvirt.driver [-] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance destroyed successfully.
Oct  2 08:17:26 np0005466030 nova_compute[230518]: 2025-10-02 12:17:26.143 2 DEBUG nova.objects.instance [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'resources' on Instance uuid bb6a3b63-8cda-41b6-ac43-6f9d310fad2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:17:26 np0005466030 nova_compute[230518]: 2025-10-02 12:17:26.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:17:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:27.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:17:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:27.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:17:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:28 np0005466030 nova_compute[230518]: 2025-10-02 12:17:28.316 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Creating tmpfile /var/lib/nova/instances/tmpyzl_yqg1 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Oct  2 08:17:28 np0005466030 nova_compute[230518]: 2025-10-02 12:17:28.317 2 DEBUG nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpyzl_yqg1',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Oct  2 08:17:28 np0005466030 nova_compute[230518]: 2025-10-02 12:17:28.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:17:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:29.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:29.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:29 np0005466030 nova_compute[230518]: 2025-10-02 12:17:29.745 2 DEBUG nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpyzl_yqg1',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ecee1ec0-1a8d-4d67-b996-205a942120ae',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Oct  2 08:17:29 np0005466030 nova_compute[230518]: 2025-10-02 12:17:29.889 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:17:29 np0005466030 nova_compute[230518]: 2025-10-02 12:17:29.889 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquired lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:17:29 np0005466030 nova_compute[230518]: 2025-10-02 12:17:29.890 2 DEBUG nova.network.neutron [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:17:30 np0005466030 nova_compute[230518]: 2025-10-02 12:17:30.642 2 INFO nova.virt.libvirt.driver [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Deleting instance files /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_del
Oct  2 08:17:30 np0005466030 nova_compute[230518]: 2025-10-02 12:17:30.642 2 INFO nova.virt.libvirt.driver [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Deletion of /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_del complete
Oct  2 08:17:30 np0005466030 nova_compute[230518]: 2025-10-02 12:17:30.802 2 INFO nova.compute.manager [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Took 5.08 seconds to destroy the instance on the hypervisor.
Oct  2 08:17:30 np0005466030 nova_compute[230518]: 2025-10-02 12:17:30.803 2 DEBUG oslo.service.loopingcall [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:17:30 np0005466030 nova_compute[230518]: 2025-10-02 12:17:30.804 2 DEBUG nova.compute.manager [-] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:17:30 np0005466030 nova_compute[230518]: 2025-10-02 12:17:30.804 2 DEBUG nova.network.neutron [-] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:17:30 np0005466030 nova_compute[230518]: 2025-10-02 12:17:30.942 2 DEBUG nova.network.neutron [-] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.007 2 DEBUG nova.network.neutron [-] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.072 2 INFO nova.compute.manager [-] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Took 0.27 seconds to deallocate network for instance.
Oct  2 08:17:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:31.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.100 2 DEBUG nova.network.neutron [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating instance_info_cache with network_info: [{"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:31.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.150 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Releasing lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.151 2 DEBUG os_brick.utils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.152 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.158 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.159 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.162 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.163 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[a82ea0ad-5f84-4e8b-9444-3a5814d1f74d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.164 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.170 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.170 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[a044d085-39ae-4e01-9682-a0a8b1eef269]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.171 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.180 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.181 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[94ba7346-3515-46fc-a25e-79af8f0aba15]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.182 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[1452b0e4-223c-472c-acc1-19bbd3243669]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.183 2 DEBUG oslo_concurrency.processutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.206 2 DEBUG oslo_concurrency.processutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.211 2 DEBUG os_brick.initiator.connectors.lightos [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.212 2 DEBUG os_brick.initiator.connectors.lightos [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.212 2 DEBUG os_brick.initiator.connectors.lightos [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.212 2 DEBUG os_brick.utils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] <== get_connector_properties: return (60ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.332 2 DEBUG oslo_concurrency.processutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:17:31 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3277705734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.752 2 DEBUG oslo_concurrency.processutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.759 2 DEBUG nova.compute.provider_tree [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.794 2 DEBUG nova.scheduler.client.report [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.827 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.879 2 INFO nova.scheduler.client.report [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Deleted allocations for instance bb6a3b63-8cda-41b6-ac43-6f9d310fad2a#033[00m
Oct  2 08:17:31 np0005466030 nova_compute[230518]: 2025-10-02 12:17:31.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:32 np0005466030 nova_compute[230518]: 2025-10-02 12:17:32.008 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:33.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:17:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:33.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.229 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpyzl_yqg1',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ecee1ec0-1a8d-4d67-b996-205a942120ae',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={20a19061-0239-43b4-b9d7-980e7acde072='6a6d4750-7860-4bbf-bba3-50eb20d823fd'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.230 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Creating instance directory: /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.231 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Ensure instance console log exists: /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.231 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.234 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.236 2 DEBUG nova.virt.libvirt.vif [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:17:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-343208228',display_name='tempest-LiveMigrationTest-server-343208228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-343208228',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-urrasvl8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1880928942',owner_user_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:17:24Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=ecee1ec0-1a8d-4d67-b996-205a942120ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.236 2 DEBUG nova.network.os_vif_util [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converting VIF {"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.237 2 DEBUG nova.network.os_vif_util [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.238 2 DEBUG os_vif [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.247 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7539c03e-c9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.247 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7539c03e-c9, col_values=(('external_ids', {'iface-id': '7539c03e-c932-4473-8d75-729cbed6008a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:5e:ba', 'vm-uuid': 'ecee1ec0-1a8d-4d67-b996-205a942120ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:33 np0005466030 NetworkManager[44960]: <info>  [1759407453.2504] manager: (tap7539c03e-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.258 2 INFO os_vif [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9')#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.261 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.261 2 DEBUG nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpyzl_yqg1',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ecee1ec0-1a8d-4d67-b996-205a942120ae',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={20a19061-0239-43b4-b9d7-980e7acde072='6a6d4750-7860-4bbf-bba3-50eb20d823fd'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.419 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.420 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.420 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "fcfe251b-73c3-4310-b646-3c6c0a8c7e6e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.420 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "fcfe251b-73c3-4310-b646-3c6c0a8c7e6e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.421 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "fcfe251b-73c3-4310-b646-3c6c0a8c7e6e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.422 2 INFO nova.compute.manager [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Terminating instance#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.422 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "refresh_cache-fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.423 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "refresh_cache-fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.423 2 DEBUG nova.network.neutron [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.602 2 DEBUG nova.network.neutron [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:17:33 np0005466030 podman[244198]: 2025-10-02 12:17:33.804208495 +0000 UTC m=+0.055006861 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.865 2 DEBUG nova.network.neutron [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.903 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "refresh_cache-fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:33 np0005466030 nova_compute[230518]: 2025-10-02 12:17:33.903 2 DEBUG nova.compute.manager [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:17:34 np0005466030 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct  2 08:17:34 np0005466030 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000012.scope: Consumed 19.691s CPU time.
Oct  2 08:17:34 np0005466030 systemd-machined[188247]: Machine qemu-9-instance-00000012 terminated.
Oct  2 08:17:34 np0005466030 nova_compute[230518]: 2025-10-02 12:17:34.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:34 np0005466030 nova_compute[230518]: 2025-10-02 12:17:34.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:34 np0005466030 nova_compute[230518]: 2025-10-02 12:17:34.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.052592) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454052632, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2491, "num_deletes": 265, "total_data_size": 5694580, "memory_usage": 5789488, "flush_reason": "Manual Compaction"}
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454077190, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3692782, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25376, "largest_seqno": 27862, "table_properties": {"data_size": 3682594, "index_size": 6426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 21956, "raw_average_key_size": 20, "raw_value_size": 3661730, "raw_average_value_size": 3438, "num_data_blocks": 280, "num_entries": 1065, "num_filter_entries": 1065, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407277, "oldest_key_time": 1759407277, "file_creation_time": 1759407454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 24641 microseconds, and 9456 cpu microseconds.
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.077233) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3692782 bytes OK
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.077251) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.079338) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.079383) EVENT_LOG_v1 {"time_micros": 1759407454079375, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.079405) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 5683262, prev total WAL file size 5683262, number of live WAL files 2.
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.080654) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3606KB)], [51(8842KB)]
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454080704, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 12747693, "oldest_snapshot_seqno": -1}
Oct  2 08:17:34 np0005466030 nova_compute[230518]: 2025-10-02 12:17:34.123 2 INFO nova.virt.libvirt.driver [-] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Instance destroyed successfully.#033[00m
Oct  2 08:17:34 np0005466030 nova_compute[230518]: 2025-10-02 12:17:34.124 2 DEBUG nova.objects.instance [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'resources' on Instance uuid fcfe251b-73c3-4310-b646-3c6c0a8c7e6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5400 keys, 12630939 bytes, temperature: kUnknown
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454142641, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 12630939, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12590091, "index_size": 26274, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 135692, "raw_average_key_size": 25, "raw_value_size": 12488125, "raw_average_value_size": 2312, "num_data_blocks": 1084, "num_entries": 5400, "num_filter_entries": 5400, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759407454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.142915) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 12630939 bytes
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.144190) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 205.4 rd, 203.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.6 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(6.9) write-amplify(3.4) OK, records in: 5945, records dropped: 545 output_compression: NoCompression
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.144207) EVENT_LOG_v1 {"time_micros": 1759407454144200, "job": 30, "event": "compaction_finished", "compaction_time_micros": 62068, "compaction_time_cpu_micros": 31608, "output_level": 6, "num_output_files": 1, "total_output_size": 12630939, "num_input_records": 5945, "num_output_records": 5400, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454144903, "job": 30, "event": "table_file_deletion", "file_number": 53}
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454146443, "job": 30, "event": "table_file_deletion", "file_number": 51}
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.080590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.146469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.146473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.146475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.146476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:34 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.146478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:34 np0005466030 nova_compute[230518]: 2025-10-02 12:17:34.318 2 DEBUG nova.network.neutron [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Port 7539c03e-c932-4473-8d75-729cbed6008a updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Oct  2 08:17:34 np0005466030 nova_compute[230518]: 2025-10-02 12:17:34.682 2 DEBUG nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpyzl_yqg1',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ecee1ec0-1a8d-4d67-b996-205a942120ae',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={20a19061-0239-43b4-b9d7-980e7acde072='6a6d4750-7860-4bbf-bba3-50eb20d823fd'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Oct  2 08:17:34 np0005466030 kernel: tap7539c03e-c9: entered promiscuous mode
Oct  2 08:17:34 np0005466030 ovn_controller[129257]: 2025-10-02T12:17:34Z|00127|binding|INFO|Claiming lport 7539c03e-c932-4473-8d75-729cbed6008a for this additional chassis.
Oct  2 08:17:34 np0005466030 ovn_controller[129257]: 2025-10-02T12:17:34Z|00128|binding|INFO|7539c03e-c932-4473-8d75-729cbed6008a: Claiming fa:16:3e:0e:5e:ba 10.100.0.9
Oct  2 08:17:34 np0005466030 NetworkManager[44960]: <info>  [1759407454.8993] manager: (tap7539c03e-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Oct  2 08:17:34 np0005466030 nova_compute[230518]: 2025-10-02 12:17:34.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:34 np0005466030 systemd-udevd[244220]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:17:34 np0005466030 NetworkManager[44960]: <info>  [1759407454.9141] device (tap7539c03e-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:17:34 np0005466030 NetworkManager[44960]: <info>  [1759407454.9149] device (tap7539c03e-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:17:34 np0005466030 ovn_controller[129257]: 2025-10-02T12:17:34Z|00129|binding|INFO|Setting lport 7539c03e-c932-4473-8d75-729cbed6008a ovn-installed in OVS
Oct  2 08:17:34 np0005466030 nova_compute[230518]: 2025-10-02 12:17:34.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:34 np0005466030 nova_compute[230518]: 2025-10-02 12:17:34.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:34 np0005466030 systemd-machined[188247]: New machine qemu-17-instance-0000001f.
Oct  2 08:17:34 np0005466030 systemd[1]: Started Virtual Machine qemu-17-instance-0000001f.
Oct  2 08:17:34 np0005466030 podman[244242]: 2025-10-02 12:17:34.947962699 +0000 UTC m=+0.075593228 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  2 08:17:35 np0005466030 nova_compute[230518]: 2025-10-02 12:17:35.007 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407440.0053596, 80f9c3a4-aadc-4519-a451-8ce36d37b598 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:35 np0005466030 nova_compute[230518]: 2025-10-02 12:17:35.007 2 INFO nova.compute.manager [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:17:35 np0005466030 nova_compute[230518]: 2025-10-02 12:17:35.026 2 DEBUG nova.compute.manager [None req-bb86c0aa-08d7-4464-a05b-b3a66c7090a3 - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:35.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:35.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.132 2 INFO nova.virt.libvirt.driver [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Deleting instance files /var/lib/nova/instances/fcfe251b-73c3-4310-b646-3c6c0a8c7e6e_del#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.134 2 INFO nova.virt.libvirt.driver [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Deletion of /var/lib/nova/instances/fcfe251b-73c3-4310-b646-3c6c0a8c7e6e_del complete#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.182 2 INFO nova.compute.manager [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Took 2.28 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.183 2 DEBUG oslo.service.loopingcall [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.183 2 DEBUG nova.compute.manager [-] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.183 2 DEBUG nova.network.neutron [-] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.348 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407456.348626, ecee1ec0-1a8d-4d67-b996-205a942120ae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.349 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] VM Started (Lifecycle Event)#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.367 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.424 2 DEBUG nova.network.neutron [-] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.438 2 DEBUG nova.network.neutron [-] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.449 2 INFO nova.compute.manager [-] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Took 0.27 seconds to deallocate network for instance.#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.491 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.491 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.577 2 DEBUG oslo_concurrency.processutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.773 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407456.7732575, ecee1ec0-1a8d-4d67-b996-205a942120ae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.774 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.829 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.834 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.863 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct  2 08:17:36 np0005466030 nova_compute[230518]: 2025-10-02 12:17:36.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:37 np0005466030 nova_compute[230518]: 2025-10-02 12:17:37.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:17:37 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2269807714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:17:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:37.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:37 np0005466030 nova_compute[230518]: 2025-10-02 12:17:37.087 2 DEBUG oslo_concurrency.processutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:37 np0005466030 nova_compute[230518]: 2025-10-02 12:17:37.092 2 DEBUG nova.compute.provider_tree [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:17:37 np0005466030 nova_compute[230518]: 2025-10-02 12:17:37.115 2 DEBUG nova.scheduler.client.report [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:17:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:37.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:37 np0005466030 nova_compute[230518]: 2025-10-02 12:17:37.149 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:37 np0005466030 nova_compute[230518]: 2025-10-02 12:17:37.204 2 INFO nova.scheduler.client.report [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Deleted allocations for instance fcfe251b-73c3-4310-b646-3c6c0a8c7e6e#033[00m
Oct  2 08:17:37 np0005466030 nova_compute[230518]: 2025-10-02 12:17:37.300 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:37 np0005466030 ovn_controller[129257]: 2025-10-02T12:17:37Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0e:5e:ba 10.100.0.9
Oct  2 08:17:37 np0005466030 ovn_controller[129257]: 2025-10-02T12:17:37Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0e:5e:ba 10.100.0.9
Oct  2 08:17:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.081 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.118 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.119 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.119 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.120 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.120 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.344017) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458344048, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 299, "num_deletes": 251, "total_data_size": 124753, "memory_usage": 130960, "flush_reason": "Manual Compaction"}
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458347839, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 81647, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27867, "largest_seqno": 28161, "table_properties": {"data_size": 79720, "index_size": 155, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5039, "raw_average_key_size": 18, "raw_value_size": 75924, "raw_average_value_size": 277, "num_data_blocks": 7, "num_entries": 274, "num_filter_entries": 274, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407454, "oldest_key_time": 1759407454, "file_creation_time": 1759407458, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 3865 microseconds, and 737 cpu microseconds.
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.347883) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 81647 bytes OK
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.347895) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.349070) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.349081) EVENT_LOG_v1 {"time_micros": 1759407458349078, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.349092) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 122553, prev total WAL file size 122553, number of live WAL files 2.
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.349363) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(79KB)], [54(12MB)]
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458349396, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 12712586, "oldest_snapshot_seqno": -1}
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 5165 keys, 10806373 bytes, temperature: kUnknown
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458401177, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 10806373, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10768667, "index_size": 23708, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 131512, "raw_average_key_size": 25, "raw_value_size": 10672338, "raw_average_value_size": 2066, "num_data_blocks": 968, "num_entries": 5165, "num_filter_entries": 5165, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759407458, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.401412) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 10806373 bytes
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.402548) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 245.2 rd, 208.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 12.0 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(288.1) write-amplify(132.4) OK, records in: 5674, records dropped: 509 output_compression: NoCompression
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.402564) EVENT_LOG_v1 {"time_micros": 1759407458402556, "job": 32, "event": "compaction_finished", "compaction_time_micros": 51854, "compaction_time_cpu_micros": 22066, "output_level": 6, "num_output_files": 1, "total_output_size": 10806373, "num_input_records": 5674, "num_output_records": 5165, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458402663, "job": 32, "event": "table_file_deletion", "file_number": 56}
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458404543, "job": 32, "event": "table_file_deletion", "file_number": 54}
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.349296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.404659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.404665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.404668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.404670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.404672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:17:38 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1591436883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.609 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.680 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.681 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.686 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.687 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:17:38 np0005466030 ovn_controller[129257]: 2025-10-02T12:17:38Z|00130|binding|INFO|Claiming lport 7539c03e-c932-4473-8d75-729cbed6008a for this chassis.
Oct  2 08:17:38 np0005466030 ovn_controller[129257]: 2025-10-02T12:17:38Z|00131|binding|INFO|7539c03e-c932-4473-8d75-729cbed6008a: Claiming fa:16:3e:0e:5e:ba 10.100.0.9
Oct  2 08:17:38 np0005466030 ovn_controller[129257]: 2025-10-02T12:17:38Z|00132|binding|INFO|Setting lport 7539c03e-c932-4473-8d75-729cbed6008a up in Southbound
Oct  2 08:17:38 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.699 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:5e:ba 10.100.0.9'], port_security=['fa:16:3e:0e:5e:ba 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ecee1ec0-1a8d-4d67-b996-205a942120ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '11', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34cc3fdc-62f5-47cf-be4b-547a25938be9, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=7539c03e-c932-4473-8d75-729cbed6008a) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:17:38 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.701 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 7539c03e-c932-4473-8d75-729cbed6008a in datapath 5989958f-ccbb-4db4-8dcb-18563aa2418e bound to our chassis#033[00m
Oct  2 08:17:38 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.703 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5989958f-ccbb-4db4-8dcb-18563aa2418e#033[00m
Oct  2 08:17:38 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.719 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[16b90fff-d261-475b-a461-464777dbc4d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:38 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.750 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2903ffff-a59b-496d-8756-b9c23dea9bcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:38 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.753 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4db940bc-cc91-4347-aa1b-6df38ff7923f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:38 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.783 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[01c6e657-5199-471d-95ac-77b1ff084260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:38 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.801 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1d5777-70ef-4b05-b0cc-f39722850c19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5989958f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:d2:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527320, 'reachable_time': 20483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244375, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:38 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.815 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d8b2ab2c-dc3e-4a25-bdc9-f0d2d2c8191c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5989958f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527331, 'tstamp': 527331}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244376, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5989958f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527333, 'tstamp': 527333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244376, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:38 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.817 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5989958f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:38 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.822 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5989958f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:38 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.822 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:17:38 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.823 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5989958f-c0, col_values=(('external_ids', {'iface-id': 'c7d8e124-cc34-42e6-82ac-6fdf057166bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:38 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.823 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.875 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.876 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4374MB free_disk=20.893749237060547GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.876 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.876 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.959 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Migration for instance ecee1ec0-1a8d-4d67-b996-205a942120ae refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.999 2 INFO nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating resource usage from migration c51f2e80-99c7-4848-8e64-214b4d6d2c8b#033[00m
Oct  2 08:17:38 np0005466030 nova_compute[230518]: 2025-10-02 12:17:38.999 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Starting to track incoming migration c51f2e80-99c7-4848-8e64-214b4d6d2c8b with flavor 99c52872-4e37-4be3-86cc-757b8f375aa8 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Oct  2 08:17:39 np0005466030 nova_compute[230518]: 2025-10-02 12:17:39.042 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance a114d722-ceac-442e-8b38-c2892fda526b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:17:39 np0005466030 nova_compute[230518]: 2025-10-02 12:17:39.067 2 WARNING nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance ecee1ec0-1a8d-4d67-b996-205a942120ae has been moved to another host compute-2.ctlplane.example.com(compute-2.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}.#033[00m
Oct  2 08:17:39 np0005466030 nova_compute[230518]: 2025-10-02 12:17:39.068 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:17:39 np0005466030 nova_compute[230518]: 2025-10-02 12:17:39.068 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:17:39 np0005466030 nova_compute[230518]: 2025-10-02 12:17:39.071 2 INFO nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Post operation of migration started#033[00m
Oct  2 08:17:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:39.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:39 np0005466030 nova_compute[230518]: 2025-10-02 12:17:39.132 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:17:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:17:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:39.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:17:39 np0005466030 nova_compute[230518]: 2025-10-02 12:17:39.464 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:17:39 np0005466030 nova_compute[230518]: 2025-10-02 12:17:39.465 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquired lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:17:39 np0005466030 nova_compute[230518]: 2025-10-02 12:17:39.466 2 DEBUG nova.network.neutron [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:17:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:17:39 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3073071398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:17:39 np0005466030 nova_compute[230518]: 2025-10-02 12:17:39.612 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:17:39 np0005466030 nova_compute[230518]: 2025-10-02 12:17:39.618 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:17:39 np0005466030 nova_compute[230518]: 2025-10-02 12:17:39.639 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:17:39 np0005466030 nova_compute[230518]: 2025-10-02 12:17:39.663 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:17:39 np0005466030 nova_compute[230518]: 2025-10-02 12:17:39.664 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:40 np0005466030 nova_compute[230518]: 2025-10-02 12:17:40.636 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:40 np0005466030 nova_compute[230518]: 2025-10-02 12:17:40.637 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:17:40 np0005466030 nova_compute[230518]: 2025-10-02 12:17:40.670 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:17:40 np0005466030 nova_compute[230518]: 2025-10-02 12:17:40.671 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:40 np0005466030 nova_compute[230518]: 2025-10-02 12:17:40.671 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:40 np0005466030 nova_compute[230518]: 2025-10-02 12:17:40.671 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:40 np0005466030 nova_compute[230518]: 2025-10-02 12:17:40.672 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:17:41 np0005466030 nova_compute[230518]: 2025-10-02 12:17:41.031 2 DEBUG nova.network.neutron [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating instance_info_cache with network_info: [{"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:17:41 np0005466030 nova_compute[230518]: 2025-10-02 12:17:41.056 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Releasing lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:17:41 np0005466030 nova_compute[230518]: 2025-10-02 12:17:41.070 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:41 np0005466030 nova_compute[230518]: 2025-10-02 12:17:41.070 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:41 np0005466030 nova_compute[230518]: 2025-10-02 12:17:41.070 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:41 np0005466030 nova_compute[230518]: 2025-10-02 12:17:41.073 2 INFO nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Oct  2 08:17:41 np0005466030 virtqemud[230067]: Domain id=17 name='instance-0000001f' uuid=ecee1ec0-1a8d-4d67-b996-205a942120ae is tainted: custom-monitor
Oct  2 08:17:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:41.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:41 np0005466030 nova_compute[230518]: 2025-10-02 12:17:41.142 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407446.1418507, bb6a3b63-8cda-41b6-ac43-6f9d310fad2a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:41 np0005466030 nova_compute[230518]: 2025-10-02 12:17:41.143 2 INFO nova.compute.manager [-] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:17:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:41.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:41 np0005466030 nova_compute[230518]: 2025-10-02 12:17:41.159 2 DEBUG nova.compute.manager [None req-44e67298-ad60-499d-8720-3702e30495ce - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:41 np0005466030 nova_compute[230518]: 2025-10-02 12:17:41.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:42 np0005466030 nova_compute[230518]: 2025-10-02 12:17:42.079 2 INFO nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Oct  2 08:17:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:43 np0005466030 nova_compute[230518]: 2025-10-02 12:17:43.085 2 INFO nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Oct  2 08:17:43 np0005466030 nova_compute[230518]: 2025-10-02 12:17:43.089 2 DEBUG nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:43.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:43 np0005466030 nova_compute[230518]: 2025-10-02 12:17:43.114 2 DEBUG nova.objects.instance [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:17:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:17:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:43.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:17:43 np0005466030 nova_compute[230518]: 2025-10-02 12:17:43.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:43 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 08:17:43 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:17:43 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:17:43 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 08:17:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:17:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:45.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:17:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:45.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:46 np0005466030 nova_compute[230518]: 2025-10-02 12:17:46.631 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Check if temp file /var/lib/nova/instances/tmp10f21u5k exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Oct  2 08:17:46 np0005466030 nova_compute[230518]: 2025-10-02 12:17:46.631 2 DEBUG nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp10f21u5k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ecee1ec0-1a8d-4d67-b996-205a942120ae',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Oct  2 08:17:46 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:17:46 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:17:46 np0005466030 nova_compute[230518]: 2025-10-02 12:17:46.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:47.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:47.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:47 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 08:17:47 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:17:47 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:17:47 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:17:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:48 np0005466030 nova_compute[230518]: 2025-10-02 12:17:48.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:17:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:49.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:17:49 np0005466030 nova_compute[230518]: 2025-10-02 12:17:49.122 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407454.1208904, fcfe251b-73c3-4310-b646-3c6c0a8c7e6e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:17:49 np0005466030 nova_compute[230518]: 2025-10-02 12:17:49.122 2 INFO nova.compute.manager [-] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:17:49 np0005466030 nova_compute[230518]: 2025-10-02 12:17:49.144 2 DEBUG nova.compute.manager [None req-8a2b226e-ccaf-4f7b-bdd3-ff8a1bfa3cf1 - - - - - -] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:17:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:17:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:49.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:17:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:17:50 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1355568132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:17:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:51.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:17:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:51.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:17:51 np0005466030 nova_compute[230518]: 2025-10-02 12:17:51.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:17:52 np0005466030 nova_compute[230518]: 2025-10-02 12:17:52.290 2 DEBUG nova.compute.manager [req-d6551eb9-6aee-4e24-856e-f2aa14543657 req-9cbada36-f322-4558-b656-5b5eaddbe20d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:17:52 np0005466030 nova_compute[230518]: 2025-10-02 12:17:52.291 2 DEBUG oslo_concurrency.lockutils [req-d6551eb9-6aee-4e24-856e-f2aa14543657 req-9cbada36-f322-4558-b656-5b5eaddbe20d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:17:52 np0005466030 nova_compute[230518]: 2025-10-02 12:17:52.291 2 DEBUG oslo_concurrency.lockutils [req-d6551eb9-6aee-4e24-856e-f2aa14543657 req-9cbada36-f322-4558-b656-5b5eaddbe20d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:17:52 np0005466030 nova_compute[230518]: 2025-10-02 12:17:52.291 2 DEBUG oslo_concurrency.lockutils [req-d6551eb9-6aee-4e24-856e-f2aa14543657 req-9cbada36-f322-4558-b656-5b5eaddbe20d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:17:52 np0005466030 nova_compute[230518]: 2025-10-02 12:17:52.292 2 DEBUG nova.compute.manager [req-d6551eb9-6aee-4e24-856e-f2aa14543657 req-9cbada36-f322-4558-b656-5b5eaddbe20d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:17:52 np0005466030 nova_compute[230518]: 2025-10-02 12:17:52.292 2 DEBUG nova.compute.manager [req-d6551eb9-6aee-4e24-856e-f2aa14543657 req-9cbada36-f322-4558-b656-5b5eaddbe20d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  2 08:17:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:17:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:53.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:17:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:17:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:53.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:17:53 np0005466030 nova_compute[230518]: 2025-10-02 12:17:53.207 2 INFO nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Took 5.62 seconds for pre_live_migration on destination host compute-2.ctlplane.example.com.
Oct  2 08:17:53 np0005466030 nova_compute[230518]: 2025-10-02 12:17:53.208 2 DEBUG nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:17:53 np0005466030 nova_compute[230518]: 2025-10-02 12:17:53.225 2 DEBUG nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp10f21u5k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ecee1ec0-1a8d-4d67-b996-205a942120ae',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(7d05ad01-db02-4010-91c3-110015cf1810),old_vol_attachment_ids={20a19061-0239-43b4-b9d7-980e7acde072='ff6e5128-91a9-4273-b9c7-d3a4c775f1fa'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Oct  2 08:17:53 np0005466030 nova_compute[230518]: 2025-10-02 12:17:53.228 2 DEBUG nova.objects.instance [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lazy-loading 'migration_context' on Instance uuid ecee1ec0-1a8d-4d67-b996-205a942120ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:17:53 np0005466030 nova_compute[230518]: 2025-10-02 12:17:53.229 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Oct  2 08:17:53 np0005466030 nova_compute[230518]: 2025-10-02 12:17:53.230 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Oct  2 08:17:53 np0005466030 nova_compute[230518]: 2025-10-02 12:17:53.231 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Oct  2 08:17:53 np0005466030 nova_compute[230518]: 2025-10-02 12:17:53.248 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Find same serial number: pos=1, serial=20a19061-0239-43b4-b9d7-980e7acde072 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242
Oct  2 08:17:53 np0005466030 nova_compute[230518]: 2025-10-02 12:17:53.250 2 DEBUG nova.virt.libvirt.vif [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:17:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-343208228',display_name='tempest-LiveMigrationTest-server-343208228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-343208228',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-urrasvl8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1880928942',owner_user_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:17:43Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=ecee1ec0-1a8d-4d67-b996-205a942120ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct  2 08:17:53 np0005466030 nova_compute[230518]: 2025-10-02 12:17:53.250 2 DEBUG nova.network.os_vif_util [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converting VIF {"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 08:17:53 np0005466030 nova_compute[230518]: 2025-10-02 12:17:53.251 2 DEBUG nova.network.os_vif_util [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:17:53 np0005466030 nova_compute[230518]: 2025-10-02 12:17:53.251 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating guest XML with vif config: <interface type="ethernet">
Oct  2 08:17:53 np0005466030 nova_compute[230518]:  <mac address="fa:16:3e:0e:5e:ba"/>
Oct  2 08:17:53 np0005466030 nova_compute[230518]:  <model type="virtio"/>
Oct  2 08:17:53 np0005466030 nova_compute[230518]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:17:53 np0005466030 nova_compute[230518]:  <mtu size="1442"/>
Oct  2 08:17:53 np0005466030 nova_compute[230518]:  <target dev="tap7539c03e-c9"/>
Oct  2 08:17:53 np0005466030 nova_compute[230518]: </interface>
Oct  2 08:17:53 np0005466030 nova_compute[230518]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Oct  2 08:17:53 np0005466030 nova_compute[230518]: 2025-10-02 12:17:53.252 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Oct  2 08:17:53 np0005466030 nova_compute[230518]: 2025-10-02 12:17:53.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:17:53 np0005466030 nova_compute[230518]: 2025-10-02 12:17:53.733 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct  2 08:17:53 np0005466030 nova_compute[230518]: 2025-10-02 12:17:53.734 2 INFO nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Increasing downtime to 50 ms after 0 sec elapsed time
Oct  2 08:17:53 np0005466030 nova_compute[230518]: 2025-10-02 12:17:53.864 2 INFO nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct  2 08:17:54 np0005466030 nova_compute[230518]: 2025-10-02 12:17:54.367 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct  2 08:17:54 np0005466030 nova_compute[230518]: 2025-10-02 12:17:54.368 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct  2 08:17:54 np0005466030 nova_compute[230518]: 2025-10-02 12:17:54.409 2 DEBUG nova.compute.manager [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:17:54 np0005466030 nova_compute[230518]: 2025-10-02 12:17:54.410 2 DEBUG oslo_concurrency.lockutils [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:17:54 np0005466030 nova_compute[230518]: 2025-10-02 12:17:54.410 2 DEBUG oslo_concurrency.lockutils [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:17:54 np0005466030 nova_compute[230518]: 2025-10-02 12:17:54.410 2 DEBUG oslo_concurrency.lockutils [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:17:54 np0005466030 nova_compute[230518]: 2025-10-02 12:17:54.410 2 DEBUG nova.compute.manager [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:17:54 np0005466030 nova_compute[230518]: 2025-10-02 12:17:54.410 2 WARNING nova.compute.manager [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received unexpected event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with vm_state active and task_state migrating.
Oct  2 08:17:54 np0005466030 nova_compute[230518]: 2025-10-02 12:17:54.411 2 DEBUG nova.compute.manager [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-changed-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:17:54 np0005466030 nova_compute[230518]: 2025-10-02 12:17:54.411 2 DEBUG nova.compute.manager [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Refreshing instance network info cache due to event network-changed-7539c03e-c932-4473-8d75-729cbed6008a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:17:54 np0005466030 nova_compute[230518]: 2025-10-02 12:17:54.411 2 DEBUG oslo_concurrency.lockutils [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:17:54 np0005466030 nova_compute[230518]: 2025-10-02 12:17:54.411 2 DEBUG oslo_concurrency.lockutils [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:17:54 np0005466030 nova_compute[230518]: 2025-10-02 12:17:54.411 2 DEBUG nova.network.neutron [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Refreshing network info cache for port 7539c03e-c932-4473-8d75-729cbed6008a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:17:54 np0005466030 podman[244531]: 2025-10-02 12:17:54.801667318 +0000 UTC m=+0.048823716 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  2 08:17:54 np0005466030 podman[244530]: 2025-10-02 12:17:54.822935007 +0000 UTC m=+0.071094827 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 08:17:54 np0005466030 nova_compute[230518]: 2025-10-02 12:17:54.870 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct  2 08:17:54 np0005466030 nova_compute[230518]: 2025-10-02 12:17:54.871 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct  2 08:17:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:55.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:55.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:55 np0005466030 nova_compute[230518]: 2025-10-02 12:17:55.374 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct  2 08:17:55 np0005466030 nova_compute[230518]: 2025-10-02 12:17:55.374 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct  2 08:17:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:17:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:17:55 np0005466030 nova_compute[230518]: 2025-10-02 12:17:55.876 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct  2 08:17:55 np0005466030 nova_compute[230518]: 2025-10-02 12:17:55.877 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct  2 08:17:56 np0005466030 nova_compute[230518]: 2025-10-02 12:17:56.090 2 DEBUG nova.network.neutron [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updated VIF entry in instance network info cache for port 7539c03e-c932-4473-8d75-729cbed6008a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:17:56 np0005466030 nova_compute[230518]: 2025-10-02 12:17:56.091 2 DEBUG nova.network.neutron [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating instance_info_cache with network_info: [{"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-2.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:17:56 np0005466030 nova_compute[230518]: 2025-10-02 12:17:56.112 2 DEBUG oslo_concurrency.lockutils [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:17:56 np0005466030 nova_compute[230518]: 2025-10-02 12:17:56.380 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Current 50 elapsed 3 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct  2 08:17:56 np0005466030 nova_compute[230518]: 2025-10-02 12:17:56.381 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct  2 08:17:56 np0005466030 nova_compute[230518]: 2025-10-02 12:17:56.437 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407476.4370553, ecee1ec0-1a8d-4d67-b996-205a942120ae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:17:56 np0005466030 nova_compute[230518]: 2025-10-02 12:17:56.438 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] VM Paused (Lifecycle Event)
Oct  2 08:17:56 np0005466030 nova_compute[230518]: 2025-10-02 12:17:56.672 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:17:56 np0005466030 kernel: tap7539c03e-c9 (unregistering): left promiscuous mode
Oct  2 08:17:56 np0005466030 NetworkManager[44960]: <info>  [1759407476.8235] device (tap7539c03e-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:17:56 np0005466030 ovn_controller[129257]: 2025-10-02T12:17:56Z|00133|binding|INFO|Releasing lport 7539c03e-c932-4473-8d75-729cbed6008a from this chassis (sb_readonly=0)
Oct  2 08:17:56 np0005466030 ovn_controller[129257]: 2025-10-02T12:17:56Z|00134|binding|INFO|Setting lport 7539c03e-c932-4473-8d75-729cbed6008a down in Southbound
Oct  2 08:17:56 np0005466030 ovn_controller[129257]: 2025-10-02T12:17:56Z|00135|binding|INFO|Removing iface tap7539c03e-c9 ovn-installed in OVS
Oct  2 08:17:56 np0005466030 nova_compute[230518]: 2025-10-02 12:17:56.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:56 np0005466030 nova_compute[230518]: 2025-10-02 12:17:56.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:56 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.840 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:5e:ba 10.100.0.9'], port_security=['fa:16:3e:0e:5e:ba 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'b9588630-ee40-495c-89d2-4219f6b0f0b5'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ecee1ec0-1a8d-4d67-b996-205a942120ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '18', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34cc3fdc-62f5-47cf-be4b-547a25938be9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=7539c03e-c932-4473-8d75-729cbed6008a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:17:56 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.841 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 7539c03e-c932-4473-8d75-729cbed6008a in datapath 5989958f-ccbb-4db4-8dcb-18563aa2418e unbound from our chassis#033[00m
Oct  2 08:17:56 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.843 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5989958f-ccbb-4db4-8dcb-18563aa2418e#033[00m
Oct  2 08:17:56 np0005466030 nova_compute[230518]: 2025-10-02 12:17:56.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:56 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.859 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[47d5b9ae-e0c9-428e-ac4c-cf3a3416963b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:56 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.887 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[61240be7-d5d3-4881-b34c-9b4ef0428581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:56 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.890 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b145f7df-3425-4182-b571-3d82fc3c2c51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:56 np0005466030 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Oct  2 08:17:56 np0005466030 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000001f.scope: Consumed 4.298s CPU time.
Oct  2 08:17:56 np0005466030 systemd-machined[188247]: Machine qemu-17-instance-0000001f terminated.
Oct  2 08:17:56 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.920 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[98bdb7bf-a529-4b95-a587-0f526d839ec1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:56 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.939 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d445c1bc-1369-4058-9748-095a5d571881]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5989958f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:d2:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1378, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1378, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527320, 'reachable_time': 20483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244637, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:56 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.956 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[caf6b109-def9-415b-bed5-48f97212ea2d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5989958f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527331, 'tstamp': 527331}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244638, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5989958f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527333, 'tstamp': 527333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244638, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:17:56 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.957 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5989958f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:56 np0005466030 nova_compute[230518]: 2025-10-02 12:17:56.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:56 np0005466030 nova_compute[230518]: 2025-10-02 12:17:56.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:56 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.964 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5989958f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:56 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.964 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:17:56 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.964 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5989958f-c0, col_values=(('external_ids', {'iface-id': 'c7d8e124-cc34-42e6-82ac-6fdf057166bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:56 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.964 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:17:56 np0005466030 nova_compute[230518]: 2025-10-02 12:17:56.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:57 np0005466030 virtqemud[230067]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volume-20a19061-0239-43b4-b9d7-980e7acde072: No such file or directory
Oct  2 08:17:57 np0005466030 virtqemud[230067]: Unable to get XATTR trusted.libvirt.security.ref_dac on volume-20a19061-0239-43b4-b9d7-980e7acde072: No such file or directory
Oct  2 08:17:57 np0005466030 nova_compute[230518]: 2025-10-02 12:17:57.065 2 DEBUG nova.virt.libvirt.guest [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Oct  2 08:17:57 np0005466030 nova_compute[230518]: 2025-10-02 12:17:57.066 2 INFO nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migration operation has completed#033[00m
Oct  2 08:17:57 np0005466030 nova_compute[230518]: 2025-10-02 12:17:57.066 2 INFO nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] _post_live_migration() is started..#033[00m
Oct  2 08:17:57 np0005466030 nova_compute[230518]: 2025-10-02 12:17:57.068 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Oct  2 08:17:57 np0005466030 nova_compute[230518]: 2025-10-02 12:17:57.068 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Oct  2 08:17:57 np0005466030 nova_compute[230518]: 2025-10-02 12:17:57.068 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Oct  2 08:17:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:57.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:57.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:57 np0005466030 nova_compute[230518]: 2025-10-02 12:17:57.322 2 DEBUG nova.compute.manager [req-778de673-99f7-44fa-b15c-c6d5629179a2 req-e37b1c07-d6d1-44f5-8f3a-3f0ce5363c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:57 np0005466030 nova_compute[230518]: 2025-10-02 12:17:57.322 2 DEBUG oslo_concurrency.lockutils [req-778de673-99f7-44fa-b15c-c6d5629179a2 req-e37b1c07-d6d1-44f5-8f3a-3f0ce5363c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:57 np0005466030 nova_compute[230518]: 2025-10-02 12:17:57.322 2 DEBUG oslo_concurrency.lockutils [req-778de673-99f7-44fa-b15c-c6d5629179a2 req-e37b1c07-d6d1-44f5-8f3a-3f0ce5363c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:57 np0005466030 nova_compute[230518]: 2025-10-02 12:17:57.322 2 DEBUG oslo_concurrency.lockutils [req-778de673-99f7-44fa-b15c-c6d5629179a2 req-e37b1c07-d6d1-44f5-8f3a-3f0ce5363c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:57 np0005466030 nova_compute[230518]: 2025-10-02 12:17:57.322 2 DEBUG nova.compute.manager [req-778de673-99f7-44fa-b15c-c6d5629179a2 req-e37b1c07-d6d1-44f5-8f3a-3f0ce5363c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:57 np0005466030 nova_compute[230518]: 2025-10-02 12:17:57.323 2 DEBUG nova.compute.manager [req-778de673-99f7-44fa-b15c-c6d5629179a2 req-e37b1c07-d6d1-44f5-8f3a-3f0ce5363c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:17:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:17:58 np0005466030 nova_compute[230518]: 2025-10-02 12:17:58.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:58 np0005466030 nova_compute[230518]: 2025-10-02 12:17:58.655 2 DEBUG nova.network.neutron [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Activated binding for port 7539c03e-c932-4473-8d75-729cbed6008a and host compute-2.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Oct  2 08:17:58 np0005466030 nova_compute[230518]: 2025-10-02 12:17:58.656 2 DEBUG nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Oct  2 08:17:58 np0005466030 nova_compute[230518]: 2025-10-02 12:17:58.656 2 DEBUG nova.virt.libvirt.vif [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:17:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-343208228',display_name='tempest-LiveMigrationTest-server-343208228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-343208228',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-urrasvl8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-188
0928942',owner_user_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:17:46Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=ecee1ec0-1a8d-4d67-b996-205a942120ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:17:58 np0005466030 nova_compute[230518]: 2025-10-02 12:17:58.657 2 DEBUG nova.network.os_vif_util [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converting VIF {"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:17:58 np0005466030 nova_compute[230518]: 2025-10-02 12:17:58.657 2 DEBUG nova.network.os_vif_util [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:17:58 np0005466030 nova_compute[230518]: 2025-10-02 12:17:58.657 2 DEBUG os_vif [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:17:58 np0005466030 nova_compute[230518]: 2025-10-02 12:17:58.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:58 np0005466030 nova_compute[230518]: 2025-10-02 12:17:58.659 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7539c03e-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:17:58 np0005466030 nova_compute[230518]: 2025-10-02 12:17:58.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:58 np0005466030 nova_compute[230518]: 2025-10-02 12:17:58.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:17:58 np0005466030 nova_compute[230518]: 2025-10-02 12:17:58.663 2 INFO os_vif [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9')#033[00m
Oct  2 08:17:58 np0005466030 nova_compute[230518]: 2025-10-02 12:17:58.664 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:58 np0005466030 nova_compute[230518]: 2025-10-02 12:17:58.664 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:58 np0005466030 nova_compute[230518]: 2025-10-02 12:17:58.664 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:58 np0005466030 nova_compute[230518]: 2025-10-02 12:17:58.664 2 DEBUG nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Oct  2 08:17:58 np0005466030 nova_compute[230518]: 2025-10-02 12:17:58.664 2 INFO nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Deleting instance files /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae_del#033[00m
Oct  2 08:17:58 np0005466030 nova_compute[230518]: 2025-10-02 12:17:58.665 2 INFO nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Deletion of /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae_del complete#033[00m
Oct  2 08:17:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:17:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:59.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:17:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:17:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:17:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:59.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.409 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.410 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.410 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.411 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.411 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.411 2 WARNING nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received unexpected event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.411 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.411 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.412 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.412 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.412 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.412 2 WARNING nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received unexpected event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.412 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.413 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.413 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.413 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.413 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.413 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.414 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.414 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.415 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.415 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.415 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.415 2 WARNING nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received unexpected event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.415 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.416 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.416 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.416 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.416 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:17:59 np0005466030 nova_compute[230518]: 2025-10-02 12:17:59.416 2 WARNING nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received unexpected event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:18:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:18:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:01.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:18:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:18:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:01.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:18:01 np0005466030 nova_compute[230518]: 2025-10-02 12:18:01.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:18:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:03.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:18:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:18:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:03.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:18:03 np0005466030 nova_compute[230518]: 2025-10-02 12:18:03.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:04 np0005466030 nova_compute[230518]: 2025-10-02 12:18:04.242 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:04 np0005466030 nova_compute[230518]: 2025-10-02 12:18:04.243 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:04 np0005466030 nova_compute[230518]: 2025-10-02 12:18:04.244 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:04 np0005466030 nova_compute[230518]: 2025-10-02 12:18:04.277 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:04 np0005466030 nova_compute[230518]: 2025-10-02 12:18:04.278 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:04 np0005466030 nova_compute[230518]: 2025-10-02 12:18:04.278 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:04 np0005466030 nova_compute[230518]: 2025-10-02 12:18:04.279 2 DEBUG nova.compute.resource_tracker [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:18:04 np0005466030 nova_compute[230518]: 2025-10-02 12:18:04.279 2 DEBUG oslo_concurrency.processutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:18:04 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2412881761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:18:04 np0005466030 nova_compute[230518]: 2025-10-02 12:18:04.711 2 DEBUG oslo_concurrency.processutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:04 np0005466030 nova_compute[230518]: 2025-10-02 12:18:04.806 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:18:04 np0005466030 nova_compute[230518]: 2025-10-02 12:18:04.806 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:18:04 np0005466030 podman[244673]: 2025-10-02 12:18:04.815465863 +0000 UTC m=+0.055425614 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:18:04 np0005466030 nova_compute[230518]: 2025-10-02 12:18:04.969 2 WARNING nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:18:04 np0005466030 nova_compute[230518]: 2025-10-02 12:18:04.970 2 DEBUG nova.compute.resource_tracker [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4571MB free_disk=20.87615966796875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:18:04 np0005466030 nova_compute[230518]: 2025-10-02 12:18:04.971 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:04 np0005466030 nova_compute[230518]: 2025-10-02 12:18:04.971 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:05 np0005466030 nova_compute[230518]: 2025-10-02 12:18:05.027 2 DEBUG nova.compute.resource_tracker [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Migration for instance ecee1ec0-1a8d-4d67-b996-205a942120ae refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct  2 08:18:05 np0005466030 nova_compute[230518]: 2025-10-02 12:18:05.057 2 DEBUG nova.compute.resource_tracker [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Oct  2 08:18:05 np0005466030 nova_compute[230518]: 2025-10-02 12:18:05.115 2 DEBUG nova.compute.resource_tracker [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Instance a114d722-ceac-442e-8b38-c2892fda526b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:18:05 np0005466030 nova_compute[230518]: 2025-10-02 12:18:05.116 2 DEBUG nova.compute.resource_tracker [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Migration 7d05ad01-db02-4010-91c3-110015cf1810 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Oct  2 08:18:05 np0005466030 nova_compute[230518]: 2025-10-02 12:18:05.116 2 DEBUG nova.compute.resource_tracker [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:18:05 np0005466030 nova_compute[230518]: 2025-10-02 12:18:05.116 2 DEBUG nova.compute.resource_tracker [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:18:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:18:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:05.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:18:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:18:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1691804989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:18:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:18:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1691804989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:18:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:05.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:05 np0005466030 nova_compute[230518]: 2025-10-02 12:18:05.207 2 DEBUG oslo_concurrency.processutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:18:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1316808026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:18:05 np0005466030 nova_compute[230518]: 2025-10-02 12:18:05.631 2 DEBUG oslo_concurrency.processutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:05 np0005466030 nova_compute[230518]: 2025-10-02 12:18:05.636 2 DEBUG nova.compute.provider_tree [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:18:05 np0005466030 nova_compute[230518]: 2025-10-02 12:18:05.669 2 DEBUG nova.scheduler.client.report [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:18:05 np0005466030 nova_compute[230518]: 2025-10-02 12:18:05.693 2 DEBUG nova.compute.resource_tracker [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:18:05 np0005466030 nova_compute[230518]: 2025-10-02 12:18:05.693 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:05 np0005466030 nova_compute[230518]: 2025-10-02 12:18:05.700 2 INFO nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migrating instance to compute-2.ctlplane.example.com finished successfully.#033[00m
Oct  2 08:18:05 np0005466030 podman[244717]: 2025-10-02 12:18:05.793306757 +0000 UTC m=+0.052520962 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:18:05 np0005466030 nova_compute[230518]: 2025-10-02 12:18:05.896 2 INFO nova.scheduler.client.report [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Deleted allocation for migration 7d05ad01-db02-4010-91c3-110015cf1810#033[00m
Oct  2 08:18:05 np0005466030 nova_compute[230518]: 2025-10-02 12:18:05.896 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Oct  2 08:18:06 np0005466030 nova_compute[230518]: 2025-10-02 12:18:06.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:07.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:18:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:07.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:18:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:08 np0005466030 nova_compute[230518]: 2025-10-02 12:18:08.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:09.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:09.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:11.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:18:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:11.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.286 2 DEBUG oslo_concurrency.lockutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "a114d722-ceac-442e-8b38-c2892fda526b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.287 2 DEBUG oslo_concurrency.lockutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.287 2 DEBUG oslo_concurrency.lockutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "a114d722-ceac-442e-8b38-c2892fda526b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.288 2 DEBUG oslo_concurrency.lockutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.288 2 DEBUG oslo_concurrency.lockutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.289 2 INFO nova.compute.manager [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Terminating instance#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.290 2 DEBUG nova.compute.manager [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:18:11 np0005466030 kernel: tap965edc3f-df (unregistering): left promiscuous mode
Oct  2 08:18:11 np0005466030 NetworkManager[44960]: <info>  [1759407491.3448] device (tap965edc3f-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:11Z|00136|binding|INFO|Releasing lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de from this chassis (sb_readonly=0)
Oct  2 08:18:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:11Z|00137|binding|INFO|Setting lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de down in Southbound
Oct  2 08:18:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:11Z|00138|binding|INFO|Releasing lport 92466114-86f5-4a18-ad64-93c2127fe0d3 from this chassis (sb_readonly=0)
Oct  2 08:18:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:11Z|00139|binding|INFO|Setting lport 92466114-86f5-4a18-ad64-93c2127fe0d3 down in Southbound
Oct  2 08:18:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:11Z|00140|binding|INFO|Removing iface tap965edc3f-df ovn-installed in OVS
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.359 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:7b:22 10.100.0.10'], port_security=['fa:16:3e:bf:7b:22 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1502630260', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a114d722-ceac-442e-8b38-c2892fda526b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1502630260', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '11', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34cc3fdc-62f5-47cf-be4b-547a25938be9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=965edc3f-df96-430d-8b4b-4f3dbb19e9de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.360 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:c1:f4 19.80.0.104'], port_security=['fa:16:3e:bd:c1:f4 19.80.0.104'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['965edc3f-df96-430d-8b4b-4f3dbb19e9de'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-342109323', 'neutron:cidrs': '19.80.0.104/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-342109323', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=0bcf5be3-3921-4228-85d5-12bbaf2eb666, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=92466114-86f5-4a18-ad64-93c2127fe0d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.361 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 965edc3f-df96-430d-8b4b-4f3dbb19e9de in datapath 5989958f-ccbb-4db4-8dcb-18563aa2418e unbound from our chassis#033[00m
Oct  2 08:18:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:11Z|00141|binding|INFO|Releasing lport c67f345b-5542-4cd7-a60b-7617c8d1414e from this chassis (sb_readonly=0)
Oct  2 08:18:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:11Z|00142|binding|INFO|Releasing lport c7d8e124-cc34-42e6-82ac-6fdf057166bf from this chassis (sb_readonly=0)
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.363 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5989958f-ccbb-4db4-8dcb-18563aa2418e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.364 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec11b62-840a-4103-be2a-6248ec5a0456]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.364 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e namespace which is not needed anymore#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:11 np0005466030 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Oct  2 08:18:11 np0005466030 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001d.scope: Consumed 6.718s CPU time.
Oct  2 08:18:11 np0005466030 systemd-machined[188247]: Machine qemu-16-instance-0000001d terminated.
Oct  2 08:18:11 np0005466030 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[243884]: [NOTICE]   (243888) : haproxy version is 2.8.14-c23fe91
Oct  2 08:18:11 np0005466030 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[243884]: [NOTICE]   (243888) : path to executable is /usr/sbin/haproxy
Oct  2 08:18:11 np0005466030 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[243884]: [WARNING]  (243888) : Exiting Master process...
Oct  2 08:18:11 np0005466030 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[243884]: [WARNING]  (243888) : Exiting Master process...
Oct  2 08:18:11 np0005466030 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[243884]: [ALERT]    (243888) : Current worker (243890) exited with code 143 (Terminated)
Oct  2 08:18:11 np0005466030 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[243884]: [WARNING]  (243888) : All workers exited. Exiting... (0)
Oct  2 08:18:11 np0005466030 systemd[1]: libpod-145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6.scope: Deactivated successfully.
Oct  2 08:18:11 np0005466030 podman[244761]: 2025-10-02 12:18:11.493508535 +0000 UTC m=+0.046803392 container died 145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:18:11 np0005466030 kernel: tap965edc3f-df: entered promiscuous mode
Oct  2 08:18:11 np0005466030 systemd-udevd[244740]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:18:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:11Z|00143|binding|INFO|Claiming lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de for this chassis.
Oct  2 08:18:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:11Z|00144|binding|INFO|965edc3f-df96-430d-8b4b-4f3dbb19e9de: Claiming fa:16:3e:bf:7b:22 10.100.0.10
Oct  2 08:18:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:11Z|00145|binding|INFO|Claiming lport 92466114-86f5-4a18-ad64-93c2127fe0d3 for this chassis.
Oct  2 08:18:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:11Z|00146|binding|INFO|92466114-86f5-4a18-ad64-93c2127fe0d3: Claiming fa:16:3e:bd:c1:f4 19.80.0.104
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:11 np0005466030 NetworkManager[44960]: <info>  [1759407491.5110] manager: (tap965edc3f-df): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Oct  2 08:18:11 np0005466030 kernel: tap965edc3f-df (unregistering): left promiscuous mode
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.528 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:7b:22 10.100.0.10'], port_security=['fa:16:3e:bf:7b:22 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1502630260', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a114d722-ceac-442e-8b38-c2892fda526b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1502630260', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '11', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34cc3fdc-62f5-47cf-be4b-547a25938be9, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=965edc3f-df96-430d-8b4b-4f3dbb19e9de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:11 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6-userdata-shm.mount: Deactivated successfully.
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.531 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:c1:f4 19.80.0.104'], port_security=['fa:16:3e:bd:c1:f4 19.80.0.104'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['965edc3f-df96-430d-8b4b-4f3dbb19e9de'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-342109323', 'neutron:cidrs': '19.80.0.104/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-342109323', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=0bcf5be3-3921-4228-85d5-12bbaf2eb666, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=92466114-86f5-4a18-ad64-93c2127fe0d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:11 np0005466030 systemd[1]: var-lib-containers-storage-overlay-8333f9bed9a3ff266e399a2d551e934a36972a2ab5ffe8727ce87e882998249b-merged.mount: Deactivated successfully.
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.539 2 INFO nova.virt.libvirt.driver [-] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Instance destroyed successfully.#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.539 2 DEBUG nova.objects.instance [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lazy-loading 'resources' on Instance uuid a114d722-ceac-442e-8b38-c2892fda526b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:11 np0005466030 podman[244761]: 2025-10-02 12:18:11.548428843 +0000 UTC m=+0.101723680 container cleanup 145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:18:11 np0005466030 systemd[1]: libpod-conmon-145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6.scope: Deactivated successfully.
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.562 2 DEBUG nova.virt.libvirt.vif [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:16:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-2023518062',display_name='tempest-LiveMigrationTest-server-2023518062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-2023518062',id=29,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:16:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-f60c0zik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1880928942',owner_user_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:16:54Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=a114d722-ceac-442e-8b38-c2892fda526b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.562 2 DEBUG nova.network.os_vif_util [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Converting VIF {"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.563 2 DEBUG nova.network.os_vif_util [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.563 2 DEBUG os_vif [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.565 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap965edc3f-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:11Z|00147|binding|INFO|Releasing lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de from this chassis (sb_readonly=0)
Oct  2 08:18:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:11Z|00148|binding|INFO|Releasing lport 92466114-86f5-4a18-ad64-93c2127fe0d3 from this chassis (sb_readonly=0)
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.575 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:7b:22 10.100.0.10'], port_security=['fa:16:3e:bf:7b:22 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1502630260', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a114d722-ceac-442e-8b38-c2892fda526b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1502630260', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '11', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34cc3fdc-62f5-47cf-be4b-547a25938be9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=965edc3f-df96-430d-8b4b-4f3dbb19e9de) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.578 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:c1:f4 19.80.0.104'], port_security=['fa:16:3e:bd:c1:f4 19.80.0.104'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['965edc3f-df96-430d-8b4b-4f3dbb19e9de'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-342109323', 'neutron:cidrs': '19.80.0.104/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-342109323', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=0bcf5be3-3921-4228-85d5-12bbaf2eb666, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=92466114-86f5-4a18-ad64-93c2127fe0d3) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.619 2 INFO os_vif [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df')#033[00m
Oct  2 08:18:11 np0005466030 podman[244793]: 2025-10-02 12:18:11.625202587 +0000 UTC m=+0.047143613 container remove 145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.641 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[66480642-1041-447e-b42f-aa35e2ef6aed]: (4, ('Thu Oct  2 12:18:11 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e (145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6)\n145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6\nThu Oct  2 12:18:11 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e (145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6)\n145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.643 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2bf365-ffb2-4a15-92a6-22488c885f02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.644 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5989958f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:11 np0005466030 kernel: tap5989958f-c0: left promiscuous mode
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.664 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cdfbed1b-f4de-410a-a7d2-0218d01215de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.675 2 DEBUG nova.compute.manager [req-29ff6684-07da-473d-a423-5dc2a7b9ea0e req-cd96cce7-f34e-4689-a9b8-787bb9624377 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received event network-vif-unplugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.675 2 DEBUG oslo_concurrency.lockutils [req-29ff6684-07da-473d-a423-5dc2a7b9ea0e req-cd96cce7-f34e-4689-a9b8-787bb9624377 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a114d722-ceac-442e-8b38-c2892fda526b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.676 2 DEBUG oslo_concurrency.lockutils [req-29ff6684-07da-473d-a423-5dc2a7b9ea0e req-cd96cce7-f34e-4689-a9b8-787bb9624377 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.676 2 DEBUG oslo_concurrency.lockutils [req-29ff6684-07da-473d-a423-5dc2a7b9ea0e req-cd96cce7-f34e-4689-a9b8-787bb9624377 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.676 2 DEBUG nova.compute.manager [req-29ff6684-07da-473d-a423-5dc2a7b9ea0e req-cd96cce7-f34e-4689-a9b8-787bb9624377 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] No waiting events found dispatching network-vif-unplugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.676 2 DEBUG nova.compute.manager [req-29ff6684-07da-473d-a423-5dc2a7b9ea0e req-cd96cce7-f34e-4689-a9b8-787bb9624377 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received event network-vif-unplugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.701 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ff08240c-ba03-438d-9cb2-1b01e6de85cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.702 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[190b4af4-d912-404e-aefd-deff087d429b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.720 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[490eb88e-a9a4-43bf-8aa8-6907c2414bb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527313, 'reachable_time': 21698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244826, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:11 np0005466030 systemd[1]: run-netns-ovnmeta\x2d5989958f\x2dccbb\x2d4db4\x2d8dcb\x2d18563aa2418e.mount: Deactivated successfully.
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.723 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.723 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa51121-a67b-4068-b9f8-d2d12470af46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.725 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 92466114-86f5-4a18-ad64-93c2127fe0d3 in datapath dc4336bf-639d-45a4-88f2-32f0af1b9dbe unbound from our chassis#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.727 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dc4336bf-639d-45a4-88f2-32f0af1b9dbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.728 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a0c08fc7-575d-44a6-bb2c-518cf2a0ef3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.728 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe namespace which is not needed anymore#033[00m
Oct  2 08:18:11 np0005466030 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[243958]: [NOTICE]   (243962) : haproxy version is 2.8.14-c23fe91
Oct  2 08:18:11 np0005466030 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[243958]: [NOTICE]   (243962) : path to executable is /usr/sbin/haproxy
Oct  2 08:18:11 np0005466030 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[243958]: [WARNING]  (243962) : Exiting Master process...
Oct  2 08:18:11 np0005466030 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[243958]: [WARNING]  (243962) : Exiting Master process...
Oct  2 08:18:11 np0005466030 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[243958]: [ALERT]    (243962) : Current worker (243964) exited with code 143 (Terminated)
Oct  2 08:18:11 np0005466030 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[243958]: [WARNING]  (243962) : All workers exited. Exiting... (0)
Oct  2 08:18:11 np0005466030 systemd[1]: libpod-21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7.scope: Deactivated successfully.
Oct  2 08:18:11 np0005466030 podman[244844]: 2025-10-02 12:18:11.859406813 +0000 UTC m=+0.046211384 container died 21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:18:11 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7-userdata-shm.mount: Deactivated successfully.
Oct  2 08:18:11 np0005466030 systemd[1]: var-lib-containers-storage-overlay-7ccfcc3d3923f9105c6b44d482785ea52e49c37aea974d395f37f06c9024bca9-merged.mount: Deactivated successfully.
Oct  2 08:18:11 np0005466030 podman[244844]: 2025-10-02 12:18:11.888672983 +0000 UTC m=+0.075477564 container cleanup 21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:18:11 np0005466030 systemd[1]: libpod-conmon-21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7.scope: Deactivated successfully.
Oct  2 08:18:11 np0005466030 podman[244872]: 2025-10-02 12:18:11.946539904 +0000 UTC m=+0.039476053 container remove 21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.953 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0a89c6-25d1-4364-adb1-6d3a5d15f58e]: (4, ('Thu Oct  2 12:18:11 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe (21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7)\n21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7\nThu Oct  2 12:18:11 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe (21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7)\n21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.954 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9ed0057c-29b5-48b4-be79-d9566bd2cd81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.955 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc4336bf-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:11 np0005466030 kernel: tapdc4336bf-60: left promiscuous mode
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:11 np0005466030 nova_compute[230518]: 2025-10-02 12:18:11.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.980 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3a18b137-ec16-43fe-a7b4-597d33d3024a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.014 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[288dff4b-f37d-4a7d-8b04-e3ea74d9b318]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.015 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[422ad25c-4d3f-4be9-98e0-542762beceb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.030 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[20b6311c-77b9-4269-b345-12f7d1b91216]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527402, 'reachable_time': 35441, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244888, 'error': None, 'target': 'ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.031 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:18:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.031 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[16dcff14-3d58-4ff5-9a09-34774be44bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.032 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 965edc3f-df96-430d-8b4b-4f3dbb19e9de in datapath 5989958f-ccbb-4db4-8dcb-18563aa2418e unbound from our chassis#033[00m
Oct  2 08:18:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.033 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5989958f-ccbb-4db4-8dcb-18563aa2418e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:18:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.034 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c84efdce-c708-45a6-ad8b-b5ae5a8cc22a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.034 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 92466114-86f5-4a18-ad64-93c2127fe0d3 in datapath dc4336bf-639d-45a4-88f2-32f0af1b9dbe unbound from our chassis#033[00m
Oct  2 08:18:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.035 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dc4336bf-639d-45a4-88f2-32f0af1b9dbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:18:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.035 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[be106de5-cb63-47ce-acc9-bba7b4117ca1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.036 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 965edc3f-df96-430d-8b4b-4f3dbb19e9de in datapath 5989958f-ccbb-4db4-8dcb-18563aa2418e unbound from our chassis#033[00m
Oct  2 08:18:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.037 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5989958f-ccbb-4db4-8dcb-18563aa2418e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:18:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.037 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e69869be-fcad-4ecc-ac7f-c95bb0daa100]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.038 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 92466114-86f5-4a18-ad64-93c2127fe0d3 in datapath dc4336bf-639d-45a4-88f2-32f0af1b9dbe unbound from our chassis#033[00m
Oct  2 08:18:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.039 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dc4336bf-639d-45a4-88f2-32f0af1b9dbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:18:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.039 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ba9f0b-4c9c-45b9-8aab-2042787fc859]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:12 np0005466030 nova_compute[230518]: 2025-10-02 12:18:12.068 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407477.064609, ecee1ec0-1a8d-4d67-b996-205a942120ae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:18:12 np0005466030 nova_compute[230518]: 2025-10-02 12:18:12.068 2 INFO nova.compute.manager [-] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:18:12 np0005466030 nova_compute[230518]: 2025-10-02 12:18:12.087 2 DEBUG nova.compute.manager [None req-0096b992-debf-4856-9070-794a82b6a69c - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:18:12 np0005466030 nova_compute[230518]: 2025-10-02 12:18:12.270 2 INFO nova.virt.libvirt.driver [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Deleting instance files /var/lib/nova/instances/a114d722-ceac-442e-8b38-c2892fda526b_del#033[00m
Oct  2 08:18:12 np0005466030 nova_compute[230518]: 2025-10-02 12:18:12.271 2 INFO nova.virt.libvirt.driver [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Deletion of /var/lib/nova/instances/a114d722-ceac-442e-8b38-c2892fda526b_del complete#033[00m
Oct  2 08:18:12 np0005466030 nova_compute[230518]: 2025-10-02 12:18:12.319 2 INFO nova.compute.manager [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Took 1.03 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:18:12 np0005466030 nova_compute[230518]: 2025-10-02 12:18:12.319 2 DEBUG oslo.service.loopingcall [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:18:12 np0005466030 nova_compute[230518]: 2025-10-02 12:18:12.319 2 DEBUG nova.compute.manager [-] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:18:12 np0005466030 nova_compute[230518]: 2025-10-02 12:18:12.320 2 DEBUG nova.network.neutron [-] [instance: a114d722-ceac-442e-8b38-c2892fda526b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:18:12 np0005466030 systemd[1]: run-netns-ovnmeta\x2ddc4336bf\x2d639d\x2d45a4\x2d88f2\x2d32f0af1b9dbe.mount: Deactivated successfully.
Oct  2 08:18:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:13.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:13.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:13 np0005466030 nova_compute[230518]: 2025-10-02 12:18:13.773 2 DEBUG nova.compute.manager [req-07ec5d32-1481-43c4-ad27-36d0476adc81 req-22f199e1-9543-46ba-911c-96e942aefb83 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received event network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:13 np0005466030 nova_compute[230518]: 2025-10-02 12:18:13.774 2 DEBUG oslo_concurrency.lockutils [req-07ec5d32-1481-43c4-ad27-36d0476adc81 req-22f199e1-9543-46ba-911c-96e942aefb83 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a114d722-ceac-442e-8b38-c2892fda526b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:13 np0005466030 nova_compute[230518]: 2025-10-02 12:18:13.774 2 DEBUG oslo_concurrency.lockutils [req-07ec5d32-1481-43c4-ad27-36d0476adc81 req-22f199e1-9543-46ba-911c-96e942aefb83 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:13 np0005466030 nova_compute[230518]: 2025-10-02 12:18:13.774 2 DEBUG oslo_concurrency.lockutils [req-07ec5d32-1481-43c4-ad27-36d0476adc81 req-22f199e1-9543-46ba-911c-96e942aefb83 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:13 np0005466030 nova_compute[230518]: 2025-10-02 12:18:13.774 2 DEBUG nova.compute.manager [req-07ec5d32-1481-43c4-ad27-36d0476adc81 req-22f199e1-9543-46ba-911c-96e942aefb83 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] No waiting events found dispatching network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:13 np0005466030 nova_compute[230518]: 2025-10-02 12:18:13.775 2 WARNING nova.compute.manager [req-07ec5d32-1481-43c4-ad27-36d0476adc81 req-22f199e1-9543-46ba-911c-96e942aefb83 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received unexpected event network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:18:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:15.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:15.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:16 np0005466030 nova_compute[230518]: 2025-10-02 12:18:16.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:16.740 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:16 np0005466030 nova_compute[230518]: 2025-10-02 12:18:16.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:16.742 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:18:16 np0005466030 nova_compute[230518]: 2025-10-02 12:18:16.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:17.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:17.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:17 np0005466030 nova_compute[230518]: 2025-10-02 12:18:17.697 2 DEBUG nova.network.neutron [-] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:18:17 np0005466030 nova_compute[230518]: 2025-10-02 12:18:17.719 2 INFO nova.compute.manager [-] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Took 5.40 seconds to deallocate network for instance.#033[00m
Oct  2 08:18:17 np0005466030 nova_compute[230518]: 2025-10-02 12:18:17.796 2 DEBUG oslo_concurrency.lockutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:17 np0005466030 nova_compute[230518]: 2025-10-02 12:18:17.797 2 DEBUG oslo_concurrency.lockutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:17 np0005466030 nova_compute[230518]: 2025-10-02 12:18:17.859 2 DEBUG oslo_concurrency.processutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:18:18 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/914543213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:18:18 np0005466030 nova_compute[230518]: 2025-10-02 12:18:18.286 2 DEBUG oslo_concurrency.processutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:18 np0005466030 nova_compute[230518]: 2025-10-02 12:18:18.291 2 DEBUG nova.compute.provider_tree [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:18:18 np0005466030 nova_compute[230518]: 2025-10-02 12:18:18.352 2 DEBUG nova.scheduler.client.report [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:18:18 np0005466030 nova_compute[230518]: 2025-10-02 12:18:18.454 2 DEBUG oslo_concurrency.lockutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:18 np0005466030 nova_compute[230518]: 2025-10-02 12:18:18.500 2 INFO nova.scheduler.client.report [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Deleted allocations for instance a114d722-ceac-442e-8b38-c2892fda526b#033[00m
Oct  2 08:18:18 np0005466030 nova_compute[230518]: 2025-10-02 12:18:18.986 2 DEBUG oslo_concurrency.lockutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:18:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:19.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:18:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:18:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:19.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:18:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:21.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:21.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:21 np0005466030 nova_compute[230518]: 2025-10-02 12:18:21.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:21 np0005466030 nova_compute[230518]: 2025-10-02 12:18:21.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:22 np0005466030 nova_compute[230518]: 2025-10-02 12:18:22.540 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:22 np0005466030 nova_compute[230518]: 2025-10-02 12:18:22.541 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:22 np0005466030 nova_compute[230518]: 2025-10-02 12:18:22.572 2 DEBUG nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:18:22 np0005466030 nova_compute[230518]: 2025-10-02 12:18:22.655 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:22 np0005466030 nova_compute[230518]: 2025-10-02 12:18:22.656 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:22 np0005466030 nova_compute[230518]: 2025-10-02 12:18:22.664 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:18:22 np0005466030 nova_compute[230518]: 2025-10-02 12:18:22.664 2 INFO nova.compute.claims [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:18:22 np0005466030 nova_compute[230518]: 2025-10-02 12:18:22.806 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:23.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:18:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:23.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:18:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:18:23 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/967908571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:18:23 np0005466030 nova_compute[230518]: 2025-10-02 12:18:23.277 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:23 np0005466030 nova_compute[230518]: 2025-10-02 12:18:23.283 2 DEBUG nova.compute.provider_tree [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:18:23 np0005466030 nova_compute[230518]: 2025-10-02 12:18:23.349 2 DEBUG nova.scheduler.client.report [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:18:23 np0005466030 nova_compute[230518]: 2025-10-02 12:18:23.537 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:23 np0005466030 nova_compute[230518]: 2025-10-02 12:18:23.537 2 DEBUG nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:18:23 np0005466030 nova_compute[230518]: 2025-10-02 12:18:23.720 2 DEBUG nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:18:23 np0005466030 nova_compute[230518]: 2025-10-02 12:18:23.720 2 DEBUG nova.network.neutron [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:18:23 np0005466030 nova_compute[230518]: 2025-10-02 12:18:23.825 2 INFO nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:18:23 np0005466030 nova_compute[230518]: 2025-10-02 12:18:23.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:24 np0005466030 nova_compute[230518]: 2025-10-02 12:18:24.158 2 DEBUG nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:18:24 np0005466030 nova_compute[230518]: 2025-10-02 12:18:24.357 2 DEBUG nova.policy [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ed8b6a2129742dfb3b8a0d9f044ac24', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f0bd0c6232b84d03a010ba8cf85bda46', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:18:24 np0005466030 nova_compute[230518]: 2025-10-02 12:18:24.604 2 DEBUG nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:18:24 np0005466030 nova_compute[230518]: 2025-10-02 12:18:24.606 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:18:24 np0005466030 nova_compute[230518]: 2025-10-02 12:18:24.606 2 INFO nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Creating image(s)#033[00m
Oct  2 08:18:24 np0005466030 nova_compute[230518]: 2025-10-02 12:18:24.638 2 DEBUG nova.storage.rbd_utils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:18:24 np0005466030 nova_compute[230518]: 2025-10-02 12:18:24.669 2 DEBUG nova.storage.rbd_utils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:18:24 np0005466030 nova_compute[230518]: 2025-10-02 12:18:24.697 2 DEBUG nova.storage.rbd_utils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:18:24 np0005466030 nova_compute[230518]: 2025-10-02 12:18:24.701 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:24 np0005466030 nova_compute[230518]: 2025-10-02 12:18:24.766 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:24 np0005466030 nova_compute[230518]: 2025-10-02 12:18:24.767 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:24 np0005466030 nova_compute[230518]: 2025-10-02 12:18:24.767 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:24 np0005466030 nova_compute[230518]: 2025-10-02 12:18:24.767 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:24 np0005466030 nova_compute[230518]: 2025-10-02 12:18:24.797 2 DEBUG nova.storage.rbd_utils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:18:24 np0005466030 nova_compute[230518]: 2025-10-02 12:18:24.802 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:18:25 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3586233434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:18:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:18:25 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3586233434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:18:25 np0005466030 nova_compute[230518]: 2025-10-02 12:18:25.080 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:25 np0005466030 nova_compute[230518]: 2025-10-02 12:18:25.143 2 DEBUG nova.storage.rbd_utils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] resizing rbd image 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:18:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:18:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:25.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:18:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:25.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:25 np0005466030 nova_compute[230518]: 2025-10-02 12:18:25.236 2 DEBUG nova.objects.instance [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lazy-loading 'migration_context' on Instance uuid 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:25 np0005466030 nova_compute[230518]: 2025-10-02 12:18:25.282 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:18:25 np0005466030 nova_compute[230518]: 2025-10-02 12:18:25.283 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Ensure instance console log exists: /var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:18:25 np0005466030 nova_compute[230518]: 2025-10-02 12:18:25.283 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:25 np0005466030 nova_compute[230518]: 2025-10-02 12:18:25.283 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:25 np0005466030 nova_compute[230518]: 2025-10-02 12:18:25.284 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:25 np0005466030 nova_compute[230518]: 2025-10-02 12:18:25.500 2 DEBUG nova.network.neutron [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Successfully created port: 02efafc4-ff2d-47ca-98bd-8e608e9980b8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:18:25 np0005466030 podman[245101]: 2025-10-02 12:18:25.804050607 +0000 UTC m=+0.053847135 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 08:18:25 np0005466030 podman[245100]: 2025-10-02 12:18:25.852734208 +0000 UTC m=+0.105932322 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:18:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:25.917 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:25.917 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:25.917 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:26 np0005466030 nova_compute[230518]: 2025-10-02 12:18:26.537 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407491.5365417, a114d722-ceac-442e-8b38-c2892fda526b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:18:26 np0005466030 nova_compute[230518]: 2025-10-02 12:18:26.538 2 INFO nova.compute.manager [-] [instance: a114d722-ceac-442e-8b38-c2892fda526b] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:18:26 np0005466030 nova_compute[230518]: 2025-10-02 12:18:26.572 2 DEBUG nova.compute.manager [None req-d5736351-64e4-4954-ad7f-f3781d793ace - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:18:26 np0005466030 nova_compute[230518]: 2025-10-02 12:18:26.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:26.744 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:26 np0005466030 nova_compute[230518]: 2025-10-02 12:18:26.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:27.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:27.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:27 np0005466030 nova_compute[230518]: 2025-10-02 12:18:27.812 2 DEBUG nova.network.neutron [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Successfully updated port: 02efafc4-ff2d-47ca-98bd-8e608e9980b8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:18:27 np0005466030 nova_compute[230518]: 2025-10-02 12:18:27.841 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:18:27 np0005466030 nova_compute[230518]: 2025-10-02 12:18:27.842 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquired lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:18:27 np0005466030 nova_compute[230518]: 2025-10-02 12:18:27.842 2 DEBUG nova.network.neutron [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:18:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:28 np0005466030 nova_compute[230518]: 2025-10-02 12:18:28.261 2 DEBUG nova.network.neutron [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:18:28 np0005466030 nova_compute[230518]: 2025-10-02 12:18:28.326 2 DEBUG nova.compute.manager [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-changed-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:28 np0005466030 nova_compute[230518]: 2025-10-02 12:18:28.327 2 DEBUG nova.compute.manager [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Refreshing instance network info cache due to event network-changed-02efafc4-ff2d-47ca-98bd-8e608e9980b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:18:28 np0005466030 nova_compute[230518]: 2025-10-02 12:18:28.327 2 DEBUG oslo_concurrency.lockutils [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:18:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:29.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:29.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.370 2 DEBUG nova.network.neutron [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updating instance_info_cache with network_info: [{"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.428 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Releasing lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.429 2 DEBUG nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Instance network_info: |[{"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.429 2 DEBUG oslo_concurrency.lockutils [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.429 2 DEBUG nova.network.neutron [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Refreshing network info cache for port 02efafc4-ff2d-47ca-98bd-8e608e9980b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.432 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Start _get_guest_xml network_info=[{"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.436 2 WARNING nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.442 2 DEBUG nova.virt.libvirt.host [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.443 2 DEBUG nova.virt.libvirt.host [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.446 2 DEBUG nova.virt.libvirt.host [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.446 2 DEBUG nova.virt.libvirt.host [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.447 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.448 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.448 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.448 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.448 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.449 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.449 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.449 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.449 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.449 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.450 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.450 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.452 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:18:29 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/885113874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.899 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.930 2 DEBUG nova.storage.rbd_utils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:18:29 np0005466030 nova_compute[230518]: 2025-10-02 12:18:29.935 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:18:30 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3202358119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.383 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.385 2 DEBUG nova.virt.libvirt.vif [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:18:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-488412587',display_name='tempest-SecurityGroupsTestJSON-server-488412587',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-488412587',id=35,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f0bd0c6232b84d03a010ba8cf85bda46',ramdisk_id='',reservation_id='r-1xmq2s07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1241678427',owner_user_name='tempest-SecurityGroupsTestJSON-
1241678427-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:18:24Z,user_data=None,user_id='2ed8b6a2129742dfb3b8a0d9f044ac24',uuid=4cc8a5b1-c816-476e-9b8c-1d152d2f57c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.385 2 DEBUG nova.network.os_vif_util [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converting VIF {"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.386 2 DEBUG nova.network.os_vif_util [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.387 2 DEBUG nova.objects.instance [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.405 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:18:30 np0005466030 nova_compute[230518]:  <uuid>4cc8a5b1-c816-476e-9b8c-1d152d2f57c1</uuid>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:  <name>instance-00000023</name>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <nova:name>tempest-SecurityGroupsTestJSON-server-488412587</nova:name>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:18:29</nova:creationTime>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:18:30 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:        <nova:user uuid="2ed8b6a2129742dfb3b8a0d9f044ac24">tempest-SecurityGroupsTestJSON-1241678427-project-member</nova:user>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:        <nova:project uuid="f0bd0c6232b84d03a010ba8cf85bda46">tempest-SecurityGroupsTestJSON-1241678427</nova:project>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:        <nova:port uuid="02efafc4-ff2d-47ca-98bd-8e608e9980b8">
Oct  2 08:18:30 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <entry name="serial">4cc8a5b1-c816-476e-9b8c-1d152d2f57c1</entry>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <entry name="uuid">4cc8a5b1-c816-476e-9b8c-1d152d2f57c1</entry>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk">
Oct  2 08:18:30 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:18:30 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk.config">
Oct  2 08:18:30 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:18:30 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:30:9c:4f"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <target dev="tap02efafc4-ff"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1/console.log" append="off"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:18:30 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:18:30 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:18:30 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:18:30 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.407 2 DEBUG nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Preparing to wait for external event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.407 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.407 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.408 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.408 2 DEBUG nova.virt.libvirt.vif [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:18:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-488412587',display_name='tempest-SecurityGroupsTestJSON-server-488412587',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-488412587',id=35,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f0bd0c6232b84d03a010ba8cf85bda46',ramdisk_id='',reservation_id='r-1xmq2s07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1241678427',owner_user_name='tempest-SecurityGroup
sTestJSON-1241678427-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:18:24Z,user_data=None,user_id='2ed8b6a2129742dfb3b8a0d9f044ac24',uuid=4cc8a5b1-c816-476e-9b8c-1d152d2f57c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.409 2 DEBUG nova.network.os_vif_util [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converting VIF {"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.409 2 DEBUG nova.network.os_vif_util [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.410 2 DEBUG os_vif [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.411 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.411 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02efafc4-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap02efafc4-ff, col_values=(('external_ids', {'iface-id': '02efafc4-ff2d-47ca-98bd-8e608e9980b8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:9c:4f', 'vm-uuid': '4cc8a5b1-c816-476e-9b8c-1d152d2f57c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:30 np0005466030 NetworkManager[44960]: <info>  [1759407510.4164] manager: (tap02efafc4-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.421 2 INFO os_vif [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff')#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.474 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.474 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.475 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] No VIF found with MAC fa:16:3e:30:9c:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.475 2 INFO nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Using config drive#033[00m
Oct  2 08:18:30 np0005466030 nova_compute[230518]: 2025-10-02 12:18:30.502 2 DEBUG nova.storage.rbd_utils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:18:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:31.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:18:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:31.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:18:31 np0005466030 nova_compute[230518]: 2025-10-02 12:18:31.297 2 INFO nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Creating config drive at /var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1/disk.config#033[00m
Oct  2 08:18:31 np0005466030 nova_compute[230518]: 2025-10-02 12:18:31.302 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppdf0tw8h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:31 np0005466030 nova_compute[230518]: 2025-10-02 12:18:31.440 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppdf0tw8h" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:31 np0005466030 nova_compute[230518]: 2025-10-02 12:18:31.477 2 DEBUG nova.storage.rbd_utils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:18:31 np0005466030 nova_compute[230518]: 2025-10-02 12:18:31.482 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1/disk.config 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:31 np0005466030 nova_compute[230518]: 2025-10-02 12:18:31.515 2 DEBUG nova.network.neutron [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updated VIF entry in instance network info cache for port 02efafc4-ff2d-47ca-98bd-8e608e9980b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:18:31 np0005466030 nova_compute[230518]: 2025-10-02 12:18:31.516 2 DEBUG nova.network.neutron [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updating instance_info_cache with network_info: [{"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:18:31 np0005466030 nova_compute[230518]: 2025-10-02 12:18:31.539 2 DEBUG oslo_concurrency.lockutils [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:18:31 np0005466030 nova_compute[230518]: 2025-10-02 12:18:31.808 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1/disk.config 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:31 np0005466030 nova_compute[230518]: 2025-10-02 12:18:31.809 2 INFO nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Deleting local config drive /var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1/disk.config because it was imported into RBD.#033[00m
Oct  2 08:18:31 np0005466030 kernel: tap02efafc4-ff: entered promiscuous mode
Oct  2 08:18:31 np0005466030 NetworkManager[44960]: <info>  [1759407511.8605] manager: (tap02efafc4-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Oct  2 08:18:31 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:31Z|00149|binding|INFO|Claiming lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 for this chassis.
Oct  2 08:18:31 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:31Z|00150|binding|INFO|02efafc4-ff2d-47ca-98bd-8e608e9980b8: Claiming fa:16:3e:30:9c:4f 10.100.0.5
Oct  2 08:18:31 np0005466030 nova_compute[230518]: 2025-10-02 12:18:31.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.880 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:9c:4f 10.100.0.5'], port_security=['fa:16:3e:30:9c:4f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4cc8a5b1-c816-476e-9b8c-1d152d2f57c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0bd0c6232b84d03a010ba8cf85bda46', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1b700ac5-01e6-4854-9f45-080d3952f68d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67a70991-7ceb-4648-8df5-18a20c0a36a2, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=02efafc4-ff2d-47ca-98bd-8e608e9980b8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.882 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 02efafc4-ff2d-47ca-98bd-8e608e9980b8 in datapath cec9cbfc-5dec-4f85-90c5-6104a054547f bound to our chassis#033[00m
Oct  2 08:18:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.883 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cec9cbfc-5dec-4f85-90c5-6104a054547f#033[00m
Oct  2 08:18:31 np0005466030 systemd-udevd[245277]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:18:31 np0005466030 systemd-machined[188247]: New machine qemu-18-instance-00000023.
Oct  2 08:18:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.895 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1231521a-0ec3-492f-8bc2-9f5aaee42aee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.896 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcec9cbfc-51 in ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:18:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.898 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcec9cbfc-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:18:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.898 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2ebc21ce-8867-41e6-85ce-4e8c6722de2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.899 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a0cea762-9591-492e-846a-4b9bfec94c9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:31 np0005466030 NetworkManager[44960]: <info>  [1759407511.9029] device (tap02efafc4-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:18:31 np0005466030 NetworkManager[44960]: <info>  [1759407511.9039] device (tap02efafc4-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:18:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.909 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[7c55edfb-ff27-47ee-826d-36249fef5b4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:31 np0005466030 systemd[1]: Started Virtual Machine qemu-18-instance-00000023.
Oct  2 08:18:31 np0005466030 nova_compute[230518]: 2025-10-02 12:18:31.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:31 np0005466030 nova_compute[230518]: 2025-10-02 12:18:31.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.935 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e07d84-1559-4a98-b27d-4a8510581ea8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:31 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:31Z|00151|binding|INFO|Setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 ovn-installed in OVS
Oct  2 08:18:31 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:31Z|00152|binding|INFO|Setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 up in Southbound
Oct  2 08:18:31 np0005466030 nova_compute[230518]: 2025-10-02 12:18:31.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.965 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ee6701ee-a606-4919-bd1f-7690f3343a22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.971 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[27d33747-3bf9-43a6-952d-7009e59fe51f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:31 np0005466030 NetworkManager[44960]: <info>  [1759407511.9724] manager: (tapcec9cbfc-50): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Oct  2 08:18:31 np0005466030 nova_compute[230518]: 2025-10-02 12:18:31.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.002 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6652c145-05ec-40a8-ac74-a08214840299]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.005 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e83c55d1-2f9a-4e19-9007-95b03c6d6521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:32 np0005466030 NetworkManager[44960]: <info>  [1759407512.0268] device (tapcec9cbfc-50): carrier: link connected
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.034 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2469d699-de01-499d-aab7-22b10de6ed26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.051 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b3fa8f4d-32f3-414b-aa82-f8ea79ae2e59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcec9cbfc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:09:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537558, 'reachable_time': 20795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245310, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.067 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4be9e394-22bc-4f54-9f49-c23f156729e9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe64:917'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537558, 'tstamp': 537558}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245312, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.090 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9f733ae7-2c00-416f-b560-a81dcf226ce6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcec9cbfc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:09:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537558, 'reachable_time': 20795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 245313, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.123 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a7a14dd9-318c-4d58-b91b-c2ff38fe5aa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.189 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a93d6237-71dd-4e37-b639-31a87e27590b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.191 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcec9cbfc-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.191 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.191 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcec9cbfc-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:32 np0005466030 kernel: tapcec9cbfc-50: entered promiscuous mode
Oct  2 08:18:32 np0005466030 NetworkManager[44960]: <info>  [1759407512.1939] manager: (tapcec9cbfc-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.196 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcec9cbfc-50, col_values=(('external_ids', {'iface-id': '7fdb9d3a-47f2-4c84-9ee0-5806a194a1f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:32 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:32Z|00153|binding|INFO|Releasing lport 7fdb9d3a-47f2-4c84-9ee0-5806a194a1f4 from this chassis (sb_readonly=0)
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.212 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cec9cbfc-5dec-4f85-90c5-6104a054547f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cec9cbfc-5dec-4f85-90c5-6104a054547f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.213 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[866b9cd9-b23e-4977-8d76-cbcc12089757]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.213 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-cec9cbfc-5dec-4f85-90c5-6104a054547f
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/cec9cbfc-5dec-4f85-90c5-6104a054547f.pid.haproxy
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID cec9cbfc-5dec-4f85-90c5-6104a054547f
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:18:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.214 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'env', 'PROCESS_TAG=haproxy-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cec9cbfc-5dec-4f85-90c5-6104a054547f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.394 2 DEBUG nova.compute.manager [req-4f10e4c1-952c-481b-984c-f637549695fc req-d64db562-3721-49a6-be71-f78867fc9b87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.395 2 DEBUG oslo_concurrency.lockutils [req-4f10e4c1-952c-481b-984c-f637549695fc req-d64db562-3721-49a6-be71-f78867fc9b87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.395 2 DEBUG oslo_concurrency.lockutils [req-4f10e4c1-952c-481b-984c-f637549695fc req-d64db562-3721-49a6-be71-f78867fc9b87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.395 2 DEBUG oslo_concurrency.lockutils [req-4f10e4c1-952c-481b-984c-f637549695fc req-d64db562-3721-49a6-be71-f78867fc9b87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.396 2 DEBUG nova.compute.manager [req-4f10e4c1-952c-481b-984c-f637549695fc req-d64db562-3721-49a6-be71-f78867fc9b87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Processing event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:18:32 np0005466030 podman[245387]: 2025-10-02 12:18:32.602924992 +0000 UTC m=+0.057416017 container create 4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:18:32 np0005466030 systemd[1]: Started libpod-conmon-4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf.scope.
Oct  2 08:18:32 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:18:32 np0005466030 podman[245387]: 2025-10-02 12:18:32.574646102 +0000 UTC m=+0.029137127 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:18:32 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd8cce23eed1d8f8a72aa27661a61a05c7b042f2a5fd87369a7dd98a871dfe3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:18:32 np0005466030 podman[245387]: 2025-10-02 12:18:32.684367773 +0000 UTC m=+0.138858828 container init 4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:18:32 np0005466030 podman[245387]: 2025-10-02 12:18:32.689766082 +0000 UTC m=+0.144257107 container start 4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:18:32 np0005466030 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245402]: [NOTICE]   (245406) : New worker (245408) forked
Oct  2 08:18:32 np0005466030 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245402]: [NOTICE]   (245406) : Loading success.
Oct  2 08:18:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:18:32 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1482801893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:18:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:18:32 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1482801893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.898 2 DEBUG nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.900 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407512.8984327, 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.900 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] VM Started (Lifecycle Event)#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.903 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.906 2 INFO nova.virt.libvirt.driver [-] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Instance spawned successfully.#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.906 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.934 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.939 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.942 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.942 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.943 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.943 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.943 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.944 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.983 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.983 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407512.8998206, 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:18:32 np0005466030 nova_compute[230518]: 2025-10-02 12:18:32.984 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:18:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:33 np0005466030 nova_compute[230518]: 2025-10-02 12:18:33.034 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:18:33 np0005466030 nova_compute[230518]: 2025-10-02 12:18:33.037 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407512.9027326, 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:18:33 np0005466030 nova_compute[230518]: 2025-10-02 12:18:33.037 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:18:33 np0005466030 nova_compute[230518]: 2025-10-02 12:18:33.073 2 INFO nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Took 8.47 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:18:33 np0005466030 nova_compute[230518]: 2025-10-02 12:18:33.074 2 DEBUG nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:18:33 np0005466030 nova_compute[230518]: 2025-10-02 12:18:33.075 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:18:33 np0005466030 nova_compute[230518]: 2025-10-02 12:18:33.080 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:18:33 np0005466030 nova_compute[230518]: 2025-10-02 12:18:33.121 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:18:33 np0005466030 nova_compute[230518]: 2025-10-02 12:18:33.161 2 INFO nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Took 10.52 seconds to build instance.#033[00m
Oct  2 08:18:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:33.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:33 np0005466030 nova_compute[230518]: 2025-10-02 12:18:33.182 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:33.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:34 np0005466030 nova_compute[230518]: 2025-10-02 12:18:34.528 2 DEBUG nova.compute.manager [req-ae7f0d0e-003f-4fb0-8bce-018aa19487df req-e74b336b-b6df-414b-b73a-5cbaefa3f7d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:34 np0005466030 nova_compute[230518]: 2025-10-02 12:18:34.529 2 DEBUG oslo_concurrency.lockutils [req-ae7f0d0e-003f-4fb0-8bce-018aa19487df req-e74b336b-b6df-414b-b73a-5cbaefa3f7d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:34 np0005466030 nova_compute[230518]: 2025-10-02 12:18:34.529 2 DEBUG oslo_concurrency.lockutils [req-ae7f0d0e-003f-4fb0-8bce-018aa19487df req-e74b336b-b6df-414b-b73a-5cbaefa3f7d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:34 np0005466030 nova_compute[230518]: 2025-10-02 12:18:34.529 2 DEBUG oslo_concurrency.lockutils [req-ae7f0d0e-003f-4fb0-8bce-018aa19487df req-e74b336b-b6df-414b-b73a-5cbaefa3f7d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:34 np0005466030 nova_compute[230518]: 2025-10-02 12:18:34.530 2 DEBUG nova.compute.manager [req-ae7f0d0e-003f-4fb0-8bce-018aa19487df req-e74b336b-b6df-414b-b73a-5cbaefa3f7d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:34 np0005466030 nova_compute[230518]: 2025-10-02 12:18:34.530 2 WARNING nova.compute.manager [req-ae7f0d0e-003f-4fb0-8bce-018aa19487df req-e74b336b-b6df-414b-b73a-5cbaefa3f7d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received unexpected event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:18:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:35.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:18:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:35.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:18:35 np0005466030 nova_compute[230518]: 2025-10-02 12:18:35.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:35 np0005466030 podman[245417]: 2025-10-02 12:18:35.82246764 +0000 UTC m=+0.069890050 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:18:35 np0005466030 podman[245435]: 2025-10-02 12:18:35.896408455 +0000 UTC m=+0.048320621 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:18:36 np0005466030 nova_compute[230518]: 2025-10-02 12:18:36.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:36 np0005466030 nova_compute[230518]: 2025-10-02 12:18:36.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:36 np0005466030 nova_compute[230518]: 2025-10-02 12:18:36.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:18:36 np0005466030 nova_compute[230518]: 2025-10-02 12:18:36.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:37.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:37.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:38 np0005466030 nova_compute[230518]: 2025-10-02 12:18:38.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:38 np0005466030 nova_compute[230518]: 2025-10-02 12:18:38.086 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:38 np0005466030 nova_compute[230518]: 2025-10-02 12:18:38.086 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:38 np0005466030 nova_compute[230518]: 2025-10-02 12:18:38.086 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:38 np0005466030 nova_compute[230518]: 2025-10-02 12:18:38.087 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:18:38 np0005466030 nova_compute[230518]: 2025-10-02 12:18:38.087 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:18:38 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4179098919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:18:38 np0005466030 nova_compute[230518]: 2025-10-02 12:18:38.557 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:38 np0005466030 nova_compute[230518]: 2025-10-02 12:18:38.663 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:18:38 np0005466030 nova_compute[230518]: 2025-10-02 12:18:38.663 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:18:38 np0005466030 nova_compute[230518]: 2025-10-02 12:18:38.810 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:18:38 np0005466030 nova_compute[230518]: 2025-10-02 12:18:38.811 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4638MB free_disk=20.907501220703125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:18:38 np0005466030 nova_compute[230518]: 2025-10-02 12:18:38.812 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:38 np0005466030 nova_compute[230518]: 2025-10-02 12:18:38.812 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:38 np0005466030 nova_compute[230518]: 2025-10-02 12:18:38.903 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:18:38 np0005466030 nova_compute[230518]: 2025-10-02 12:18:38.904 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:18:38 np0005466030 nova_compute[230518]: 2025-10-02 12:18:38.904 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:18:38 np0005466030 nova_compute[230518]: 2025-10-02 12:18:38.952 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:39 np0005466030 nova_compute[230518]: 2025-10-02 12:18:39.096 2 DEBUG nova.compute.manager [req-bf7737d5-37f9-4ec7-a296-947dd30f1d2a req-d68cd729-a262-4b45-ab35-cf38ffd1fbbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-changed-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:39 np0005466030 nova_compute[230518]: 2025-10-02 12:18:39.096 2 DEBUG nova.compute.manager [req-bf7737d5-37f9-4ec7-a296-947dd30f1d2a req-d68cd729-a262-4b45-ab35-cf38ffd1fbbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Refreshing instance network info cache due to event network-changed-02efafc4-ff2d-47ca-98bd-8e608e9980b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:18:39 np0005466030 nova_compute[230518]: 2025-10-02 12:18:39.097 2 DEBUG oslo_concurrency.lockutils [req-bf7737d5-37f9-4ec7-a296-947dd30f1d2a req-d68cd729-a262-4b45-ab35-cf38ffd1fbbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:18:39 np0005466030 nova_compute[230518]: 2025-10-02 12:18:39.097 2 DEBUG oslo_concurrency.lockutils [req-bf7737d5-37f9-4ec7-a296-947dd30f1d2a req-d68cd729-a262-4b45-ab35-cf38ffd1fbbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:18:39 np0005466030 nova_compute[230518]: 2025-10-02 12:18:39.097 2 DEBUG nova.network.neutron [req-bf7737d5-37f9-4ec7-a296-947dd30f1d2a req-d68cd729-a262-4b45-ab35-cf38ffd1fbbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Refreshing network info cache for port 02efafc4-ff2d-47ca-98bd-8e608e9980b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:18:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:39.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:39.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:18:39 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/951820212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:18:39 np0005466030 nova_compute[230518]: 2025-10-02 12:18:39.427 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:39 np0005466030 nova_compute[230518]: 2025-10-02 12:18:39.432 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:18:39 np0005466030 nova_compute[230518]: 2025-10-02 12:18:39.449 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:18:39 np0005466030 nova_compute[230518]: 2025-10-02 12:18:39.477 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:18:39 np0005466030 nova_compute[230518]: 2025-10-02 12:18:39.478 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:40 np0005466030 nova_compute[230518]: 2025-10-02 12:18:40.304 2 DEBUG oslo_concurrency.lockutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:40 np0005466030 nova_compute[230518]: 2025-10-02 12:18:40.304 2 DEBUG oslo_concurrency.lockutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:40 np0005466030 nova_compute[230518]: 2025-10-02 12:18:40.304 2 INFO nova.compute.manager [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Rebooting instance#033[00m
Oct  2 08:18:40 np0005466030 nova_compute[230518]: 2025-10-02 12:18:40.364 2 DEBUG oslo_concurrency.lockutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:18:40 np0005466030 nova_compute[230518]: 2025-10-02 12:18:40.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:40 np0005466030 nova_compute[230518]: 2025-10-02 12:18:40.473 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:40 np0005466030 nova_compute[230518]: 2025-10-02 12:18:40.474 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:40 np0005466030 nova_compute[230518]: 2025-10-02 12:18:40.474 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:18:40 np0005466030 nova_compute[230518]: 2025-10-02 12:18:40.474 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:18:40 np0005466030 nova_compute[230518]: 2025-10-02 12:18:40.544 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:18:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:18:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:41.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:18:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:41.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:41 np0005466030 nova_compute[230518]: 2025-10-02 12:18:41.777 2 DEBUG nova.network.neutron [req-bf7737d5-37f9-4ec7-a296-947dd30f1d2a req-d68cd729-a262-4b45-ab35-cf38ffd1fbbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updated VIF entry in instance network info cache for port 02efafc4-ff2d-47ca-98bd-8e608e9980b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:18:41 np0005466030 nova_compute[230518]: 2025-10-02 12:18:41.778 2 DEBUG nova.network.neutron [req-bf7737d5-37f9-4ec7-a296-947dd30f1d2a req-d68cd729-a262-4b45-ab35-cf38ffd1fbbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updating instance_info_cache with network_info: [{"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:18:41 np0005466030 nova_compute[230518]: 2025-10-02 12:18:41.865 2 DEBUG oslo_concurrency.lockutils [req-bf7737d5-37f9-4ec7-a296-947dd30f1d2a req-d68cd729-a262-4b45-ab35-cf38ffd1fbbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:18:41 np0005466030 nova_compute[230518]: 2025-10-02 12:18:41.865 2 DEBUG oslo_concurrency.lockutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquired lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:18:41 np0005466030 nova_compute[230518]: 2025-10-02 12:18:41.866 2 DEBUG nova.network.neutron [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:18:41 np0005466030 nova_compute[230518]: 2025-10-02 12:18:41.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:43.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:43.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:44 np0005466030 nova_compute[230518]: 2025-10-02 12:18:44.576 2 DEBUG nova.network.neutron [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updating instance_info_cache with network_info: [{"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:18:44 np0005466030 nova_compute[230518]: 2025-10-02 12:18:44.794 2 DEBUG oslo_concurrency.lockutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Releasing lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:18:44 np0005466030 nova_compute[230518]: 2025-10-02 12:18:44.796 2 DEBUG nova.compute.manager [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:18:44 np0005466030 nova_compute[230518]: 2025-10-02 12:18:44.796 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:18:44 np0005466030 nova_compute[230518]: 2025-10-02 12:18:44.796 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:18:44 np0005466030 nova_compute[230518]: 2025-10-02 12:18:44.796 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:45.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:45.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:45 np0005466030 nova_compute[230518]: 2025-10-02 12:18:45.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:45 np0005466030 kernel: tap02efafc4-ff (unregistering): left promiscuous mode
Oct  2 08:18:45 np0005466030 NetworkManager[44960]: <info>  [1759407525.8754] device (tap02efafc4-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:18:45 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:45Z|00154|binding|INFO|Releasing lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 from this chassis (sb_readonly=0)
Oct  2 08:18:45 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:45Z|00155|binding|INFO|Setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 down in Southbound
Oct  2 08:18:45 np0005466030 nova_compute[230518]: 2025-10-02 12:18:45.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:45 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:45Z|00156|binding|INFO|Removing iface tap02efafc4-ff ovn-installed in OVS
Oct  2 08:18:45 np0005466030 nova_compute[230518]: 2025-10-02 12:18:45.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:45 np0005466030 nova_compute[230518]: 2025-10-02 12:18:45.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:45.949 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:9c:4f 10.100.0.5'], port_security=['fa:16:3e:30:9c:4f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4cc8a5b1-c816-476e-9b8c-1d152d2f57c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0bd0c6232b84d03a010ba8cf85bda46', 'neutron:revision_number': '5', 'neutron:security_group_ids': '1b700ac5-01e6-4854-9f45-080d3952f68d 5d4fd7ed-d165-4a2d-a20a-9d9629a026e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67a70991-7ceb-4648-8df5-18a20c0a36a2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=02efafc4-ff2d-47ca-98bd-8e608e9980b8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:45.951 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 02efafc4-ff2d-47ca-98bd-8e608e9980b8 in datapath cec9cbfc-5dec-4f85-90c5-6104a054547f unbound from our chassis#033[00m
Oct  2 08:18:45 np0005466030 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000023.scope: Deactivated successfully.
Oct  2 08:18:45 np0005466030 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000023.scope: Consumed 12.148s CPU time.
Oct  2 08:18:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:45.954 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cec9cbfc-5dec-4f85-90c5-6104a054547f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:18:45 np0005466030 systemd-machined[188247]: Machine qemu-18-instance-00000023 terminated.
Oct  2 08:18:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:45.955 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4afd6253-e189-4a5c-bc16-8a3ce385e12c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:45.956 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f namespace which is not needed anymore#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.098 2 INFO nova.virt.libvirt.driver [-] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Instance destroyed successfully.#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.099 2 DEBUG nova.objects.instance [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lazy-loading 'resources' on Instance uuid 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.166 2 DEBUG nova.virt.libvirt.vif [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:18:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-488412587',display_name='tempest-SecurityGroupsTestJSON-server-488412587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-488412587',id=35,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:18:33Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f0bd0c6232b84d03a010ba8cf85bda46',ramdisk_id='',reservation_id='r-1xmq2s07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1241678427',owner_user_name='tempest-SecurityGroupsTestJSON-1241678427-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:18:45Z,user_data=None,user_id='2ed8b6a2129742dfb3b8a0d9f044ac24',uuid=4cc8a5b1-c816-476e-9b8c-1d152d2f57c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.167 2 DEBUG nova.network.os_vif_util [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converting VIF {"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.168 2 DEBUG nova.network.os_vif_util [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.168 2 DEBUG os_vif [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.170 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02efafc4-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.175 2 INFO os_vif [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff')#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.182 2 DEBUG nova.virt.libvirt.driver [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Start _get_guest_xml network_info=[{"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.184 2 WARNING nova.virt.libvirt.driver [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.188 2 DEBUG nova.virt.libvirt.host [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.189 2 DEBUG nova.virt.libvirt.host [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.193 2 DEBUG nova.virt.libvirt.host [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.194 2 DEBUG nova.virt.libvirt.host [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.194 2 DEBUG nova.virt.libvirt.driver [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.195 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.195 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.195 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.195 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.196 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.196 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.196 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.196 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.197 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.197 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.197 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.197 2 DEBUG nova.objects.instance [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.242 2 DEBUG nova.compute.manager [req-cdc1e700-f25a-418b-85de-ac828378922e req-fd08ce9e-8c4b-437e-90a6-a30c24f361ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-unplugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.242 2 DEBUG oslo_concurrency.lockutils [req-cdc1e700-f25a-418b-85de-ac828378922e req-fd08ce9e-8c4b-437e-90a6-a30c24f361ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.243 2 DEBUG oslo_concurrency.lockutils [req-cdc1e700-f25a-418b-85de-ac828378922e req-fd08ce9e-8c4b-437e-90a6-a30c24f361ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.243 2 DEBUG oslo_concurrency.lockutils [req-cdc1e700-f25a-418b-85de-ac828378922e req-fd08ce9e-8c4b-437e-90a6-a30c24f361ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.243 2 DEBUG nova.compute.manager [req-cdc1e700-f25a-418b-85de-ac828378922e req-fd08ce9e-8c4b-437e-90a6-a30c24f361ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-unplugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.244 2 WARNING nova.compute.manager [req-cdc1e700-f25a-418b-85de-ac828378922e req-fd08ce9e-8c4b-437e-90a6-a30c24f361ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received unexpected event network-vif-unplugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.249 2 DEBUG oslo_concurrency.processutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:46 np0005466030 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245402]: [NOTICE]   (245406) : haproxy version is 2.8.14-c23fe91
Oct  2 08:18:46 np0005466030 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245402]: [NOTICE]   (245406) : path to executable is /usr/sbin/haproxy
Oct  2 08:18:46 np0005466030 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245402]: [WARNING]  (245406) : Exiting Master process...
Oct  2 08:18:46 np0005466030 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245402]: [ALERT]    (245406) : Current worker (245408) exited with code 143 (Terminated)
Oct  2 08:18:46 np0005466030 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245402]: [WARNING]  (245406) : All workers exited. Exiting... (0)
Oct  2 08:18:46 np0005466030 systemd[1]: libpod-4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf.scope: Deactivated successfully.
Oct  2 08:18:46 np0005466030 podman[245526]: 2025-10-02 12:18:46.341647277 +0000 UTC m=+0.302556547 container died 4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:18:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:18:46 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/704874661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:18:46 np0005466030 nova_compute[230518]: 2025-10-02 12:18:46.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.001 2 DEBUG oslo_concurrency.processutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.752s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:47 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf-userdata-shm.mount: Deactivated successfully.
Oct  2 08:18:47 np0005466030 systemd[1]: var-lib-containers-storage-overlay-8dd8cce23eed1d8f8a72aa27661a61a05c7b042f2a5fd87369a7dd98a871dfe3-merged.mount: Deactivated successfully.
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.037 2 DEBUG oslo_concurrency.processutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.063 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updating instance_info_cache with network_info: [{"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.103 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.103 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.104 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.104 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.104 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.104 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:18:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:18:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:47.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:18:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:47.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:47 np0005466030 podman[245526]: 2025-10-02 12:18:47.416459371 +0000 UTC m=+1.377368581 container cleanup 4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:18:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:18:47 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4246629214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:18:47 np0005466030 systemd[1]: libpod-conmon-4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf.scope: Deactivated successfully.
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.447 2 DEBUG oslo_concurrency.processutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.449 2 DEBUG nova.virt.libvirt.vif [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:18:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-488412587',display_name='tempest-SecurityGroupsTestJSON-server-488412587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-488412587',id=35,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:18:33Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f0bd0c6232b84d03a010ba8cf85bda46',ramdisk_id='',reservation_id='r-1xmq2s07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1241678427',owner_user_name='tempest-SecurityGroupsTestJSON-1241678427-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:18:45Z,user_data=None,user_id='2ed8b6a2129742dfb3b8a0d9f044ac24',uuid=4cc8a5b1-c816-476e-9b8c-1d152d2f57c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.449 2 DEBUG nova.network.os_vif_util [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converting VIF {"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.450 2 DEBUG nova.network.os_vif_util [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.453 2 DEBUG nova.objects.instance [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.522 2 DEBUG nova.virt.libvirt.driver [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:18:47 np0005466030 nova_compute[230518]:  <uuid>4cc8a5b1-c816-476e-9b8c-1d152d2f57c1</uuid>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:  <name>instance-00000023</name>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <nova:name>tempest-SecurityGroupsTestJSON-server-488412587</nova:name>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:18:46</nova:creationTime>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:18:47 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:        <nova:user uuid="2ed8b6a2129742dfb3b8a0d9f044ac24">tempest-SecurityGroupsTestJSON-1241678427-project-member</nova:user>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:        <nova:project uuid="f0bd0c6232b84d03a010ba8cf85bda46">tempest-SecurityGroupsTestJSON-1241678427</nova:project>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:        <nova:port uuid="02efafc4-ff2d-47ca-98bd-8e608e9980b8">
Oct  2 08:18:47 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <entry name="serial">4cc8a5b1-c816-476e-9b8c-1d152d2f57c1</entry>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <entry name="uuid">4cc8a5b1-c816-476e-9b8c-1d152d2f57c1</entry>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk">
Oct  2 08:18:47 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:18:47 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk.config">
Oct  2 08:18:47 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:18:47 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:30:9c:4f"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <target dev="tap02efafc4-ff"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1/console.log" append="off"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <input type="keyboard" bus="usb"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:18:47 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:18:47 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:18:47 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:18:47 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.525 2 DEBUG nova.virt.libvirt.driver [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.525 2 DEBUG nova.virt.libvirt.driver [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.527 2 DEBUG nova.virt.libvirt.vif [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:18:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-488412587',display_name='tempest-SecurityGroupsTestJSON-server-488412587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-488412587',id=35,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:18:33Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='f0bd0c6232b84d03a010ba8cf85bda46',ramdisk_id='',reservation_id='r-1xmq2s07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1241678427',owner_user_name='tempest-SecurityGroupsTestJSON-1241678427-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:18:45Z,user_data=None,user_id='2ed8b6a2129742dfb3b8a0d9f044ac24',uuid=4cc8a5b1-c816-476e-9b8c-1d152d2f57c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.527 2 DEBUG nova.network.os_vif_util [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converting VIF {"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.528 2 DEBUG nova.network.os_vif_util [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.528 2 DEBUG os_vif [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.530 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.531 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.534 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02efafc4-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.534 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap02efafc4-ff, col_values=(('external_ids', {'iface-id': '02efafc4-ff2d-47ca-98bd-8e608e9980b8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:9c:4f', 'vm-uuid': '4cc8a5b1-c816-476e-9b8c-1d152d2f57c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:47 np0005466030 NetworkManager[44960]: <info>  [1759407527.5374] manager: (tap02efafc4-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.544 2 INFO os_vif [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff')#033[00m
Oct  2 08:18:47 np0005466030 kernel: tap02efafc4-ff: entered promiscuous mode
Oct  2 08:18:47 np0005466030 systemd-udevd[245505]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:18:47 np0005466030 NetworkManager[44960]: <info>  [1759407527.8282] manager: (tap02efafc4-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Oct  2 08:18:47 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:47Z|00157|binding|INFO|Claiming lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 for this chassis.
Oct  2 08:18:47 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:47Z|00158|binding|INFO|02efafc4-ff2d-47ca-98bd-8e608e9980b8: Claiming fa:16:3e:30:9c:4f 10.100.0.5
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:47 np0005466030 NetworkManager[44960]: <info>  [1759407527.8407] device (tap02efafc4-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:18:47 np0005466030 NetworkManager[44960]: <info>  [1759407527.8420] device (tap02efafc4-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:18:47 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:47Z|00159|binding|INFO|Setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 ovn-installed in OVS
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:47 np0005466030 nova_compute[230518]: 2025-10-02 12:18:47.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:47 np0005466030 systemd-machined[188247]: New machine qemu-19-instance-00000023.
Oct  2 08:18:47 np0005466030 systemd[1]: Started Virtual Machine qemu-19-instance-00000023.
Oct  2 08:18:47 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:47Z|00160|binding|INFO|Setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 up in Southbound
Oct  2 08:18:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:47.944 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:9c:4f 10.100.0.5'], port_security=['fa:16:3e:30:9c:4f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4cc8a5b1-c816-476e-9b8c-1d152d2f57c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0bd0c6232b84d03a010ba8cf85bda46', 'neutron:revision_number': '6', 'neutron:security_group_ids': '1b700ac5-01e6-4854-9f45-080d3952f68d 5d4fd7ed-d165-4a2d-a20a-9d9629a026e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67a70991-7ceb-4648-8df5-18a20c0a36a2, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=02efafc4-ff2d-47ca-98bd-8e608e9980b8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:48 np0005466030 nova_compute[230518]: 2025-10-02 12:18:48.486 2 DEBUG nova.compute.manager [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:48 np0005466030 nova_compute[230518]: 2025-10-02 12:18:48.487 2 DEBUG oslo_concurrency.lockutils [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:48 np0005466030 nova_compute[230518]: 2025-10-02 12:18:48.488 2 DEBUG oslo_concurrency.lockutils [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:48 np0005466030 nova_compute[230518]: 2025-10-02 12:18:48.489 2 DEBUG oslo_concurrency.lockutils [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:48 np0005466030 nova_compute[230518]: 2025-10-02 12:18:48.489 2 DEBUG nova.compute.manager [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:48 np0005466030 nova_compute[230518]: 2025-10-02 12:18:48.490 2 WARNING nova.compute.manager [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received unexpected event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  2 08:18:48 np0005466030 nova_compute[230518]: 2025-10-02 12:18:48.490 2 DEBUG nova.compute.manager [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:48 np0005466030 nova_compute[230518]: 2025-10-02 12:18:48.491 2 DEBUG oslo_concurrency.lockutils [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:48 np0005466030 nova_compute[230518]: 2025-10-02 12:18:48.492 2 DEBUG oslo_concurrency.lockutils [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:48 np0005466030 nova_compute[230518]: 2025-10-02 12:18:48.492 2 DEBUG oslo_concurrency.lockutils [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:48 np0005466030 nova_compute[230518]: 2025-10-02 12:18:48.493 2 DEBUG nova.compute.manager [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:48 np0005466030 nova_compute[230518]: 2025-10-02 12:18:48.493 2 WARNING nova.compute.manager [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received unexpected event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  2 08:18:49 np0005466030 podman[245630]: 2025-10-02 12:18:49.070465851 +0000 UTC m=+1.626474365 container remove 4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.076 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f92052-c159-42a8-9461-6c890d96988f]: (4, ('Thu Oct  2 12:18:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f (4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf)\n4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf\nThu Oct  2 12:18:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f (4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf)\n4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.079 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c433b16d-c360-4be4-9098-1cc2e39205a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.080 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcec9cbfc-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:49 np0005466030 nova_compute[230518]: 2025-10-02 12:18:49.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:49 np0005466030 kernel: tapcec9cbfc-50: left promiscuous mode
Oct  2 08:18:49 np0005466030 nova_compute[230518]: 2025-10-02 12:18:49.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.118 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[60aaee44-1622-4f3b-9c28-2ca47e8aacec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.147 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ff579398-ce70-40e9-aa8f-ea9b4f1d8c5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.150 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c3eecc5f-efa2-4d6b-bbea-af280ca6bfb7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.170 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[af882e1a-cd33-4ba6-856b-756c64b5856e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537551, 'reachable_time': 26301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245685, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 systemd[1]: run-netns-ovnmeta\x2dcec9cbfc\x2d5dec\x2d4f85\x2d90c5\x2d6104a054547f.mount: Deactivated successfully.
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.175 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.175 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[eb3b71d8-ddf5-40d7-8eae-b469a62daaf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.176 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 02efafc4-ff2d-47ca-98bd-8e608e9980b8 in datapath cec9cbfc-5dec-4f85-90c5-6104a054547f unbound from our chassis#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.177 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cec9cbfc-5dec-4f85-90c5-6104a054547f#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.191 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[aa3aa686-1a85-41d2-8cb9-a09fae78dff0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.193 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcec9cbfc-51 in ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.195 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcec9cbfc-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.195 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[50431427-a749-4a6d-8d56-dfc19b38e56f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.196 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[23ea38dc-838a-485b-b733-82e519eb3819]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:49.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.207 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[414d5056-eabd-408a-9128-40ef01f74559]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:18:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:49.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.233 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9e55403b-fee9-4f65-a81c-4974872a873f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.261 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[bef62267-8298-4275-971a-dc4a0e923292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.274 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[34793f05-4050-41c5-859f-7d1138f393e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 NetworkManager[44960]: <info>  [1759407529.2775] manager: (tapcec9cbfc-50): new Veth device (/org/freedesktop/NetworkManager/Devices/75)
Oct  2 08:18:49 np0005466030 systemd-udevd[245684]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.318 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0848bc2f-4c89-4b9a-98fb-7bcdf99a0088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.320 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ec385dc2-0652-4ca8-8c6f-15d609932aff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 NetworkManager[44960]: <info>  [1759407529.3519] device (tapcec9cbfc-50): carrier: link connected
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.358 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[16f2be79-eafc-4843-ba59-f858f7f0fd28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.374 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7b6f7af9-832a-41c6-a3da-795c7188e5bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcec9cbfc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:09:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539290, 'reachable_time': 35141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245737, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.388 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4393bc37-1e3e-4fc3-aca1-49b55d1c772b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe64:917'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539290, 'tstamp': 539290}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245738, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.405 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e886d793-5093-4c79-b6c8-d95b41133bbe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcec9cbfc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:09:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539290, 'reachable_time': 35141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 245740, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.435 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[990d6c4a-507a-4d89-a8eb-06a732415674]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.491 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa1f6d5-229d-4e86-8b3f-5d435b2f5741]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.492 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcec9cbfc-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.493 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.493 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcec9cbfc-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:49 np0005466030 kernel: tapcec9cbfc-50: entered promiscuous mode
Oct  2 08:18:49 np0005466030 NetworkManager[44960]: <info>  [1759407529.5311] manager: (tapcec9cbfc-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Oct  2 08:18:49 np0005466030 nova_compute[230518]: 2025-10-02 12:18:49.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:49 np0005466030 nova_compute[230518]: 2025-10-02 12:18:49.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.534 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcec9cbfc-50, col_values=(('external_ids', {'iface-id': '7fdb9d3a-47f2-4c84-9ee0-5806a194a1f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:49 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:49Z|00161|binding|INFO|Releasing lport 7fdb9d3a-47f2-4c84-9ee0-5806a194a1f4 from this chassis (sb_readonly=0)
Oct  2 08:18:49 np0005466030 nova_compute[230518]: 2025-10-02 12:18:49.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:49 np0005466030 nova_compute[230518]: 2025-10-02 12:18:49.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.553 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cec9cbfc-5dec-4f85-90c5-6104a054547f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cec9cbfc-5dec-4f85-90c5-6104a054547f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.554 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0719288d-b471-40e2-9116-bf38082e5b3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.555 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-cec9cbfc-5dec-4f85-90c5-6104a054547f
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/cec9cbfc-5dec-4f85-90c5-6104a054547f.pid.haproxy
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID cec9cbfc-5dec-4f85-90c5-6104a054547f
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:18:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.557 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'env', 'PROCESS_TAG=haproxy-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cec9cbfc-5dec-4f85-90c5-6104a054547f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:18:49 np0005466030 nova_compute[230518]: 2025-10-02 12:18:49.852 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:18:49 np0005466030 nova_compute[230518]: 2025-10-02 12:18:49.853 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407529.8519335, 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:18:49 np0005466030 nova_compute[230518]: 2025-10-02 12:18:49.853 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:18:49 np0005466030 nova_compute[230518]: 2025-10-02 12:18:49.855 2 DEBUG nova.compute.manager [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:18:49 np0005466030 nova_compute[230518]: 2025-10-02 12:18:49.858 2 INFO nova.virt.libvirt.driver [-] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Instance rebooted successfully.#033[00m
Oct  2 08:18:49 np0005466030 nova_compute[230518]: 2025-10-02 12:18:49.859 2 DEBUG nova.compute.manager [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:18:49 np0005466030 nova_compute[230518]: 2025-10-02 12:18:49.950 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:18:49 np0005466030 nova_compute[230518]: 2025-10-02 12:18:49.953 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:18:50 np0005466030 nova_compute[230518]: 2025-10-02 12:18:50.002 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Oct  2 08:18:50 np0005466030 nova_compute[230518]: 2025-10-02 12:18:50.003 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407529.8536701, 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:18:50 np0005466030 nova_compute[230518]: 2025-10-02 12:18:50.004 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] VM Started (Lifecycle Event)#033[00m
Oct  2 08:18:50 np0005466030 podman[245772]: 2025-10-02 12:18:49.932345598 +0000 UTC m=+0.025478402 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:18:50 np0005466030 nova_compute[230518]: 2025-10-02 12:18:50.054 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:18:50 np0005466030 nova_compute[230518]: 2025-10-02 12:18:50.057 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:18:50 np0005466030 nova_compute[230518]: 2025-10-02 12:18:50.138 2 DEBUG oslo_concurrency.lockutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 9.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:50 np0005466030 nova_compute[230518]: 2025-10-02 12:18:50.745 2 DEBUG nova.compute.manager [req-d39bcfee-61fb-4268-8f79-f981b81addd4 req-e65be73c-e880-4aff-bb11-19f8a6ee974b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:50 np0005466030 nova_compute[230518]: 2025-10-02 12:18:50.745 2 DEBUG oslo_concurrency.lockutils [req-d39bcfee-61fb-4268-8f79-f981b81addd4 req-e65be73c-e880-4aff-bb11-19f8a6ee974b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:50 np0005466030 nova_compute[230518]: 2025-10-02 12:18:50.745 2 DEBUG oslo_concurrency.lockutils [req-d39bcfee-61fb-4268-8f79-f981b81addd4 req-e65be73c-e880-4aff-bb11-19f8a6ee974b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:50 np0005466030 nova_compute[230518]: 2025-10-02 12:18:50.745 2 DEBUG oslo_concurrency.lockutils [req-d39bcfee-61fb-4268-8f79-f981b81addd4 req-e65be73c-e880-4aff-bb11-19f8a6ee974b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:50 np0005466030 nova_compute[230518]: 2025-10-02 12:18:50.746 2 DEBUG nova.compute.manager [req-d39bcfee-61fb-4268-8f79-f981b81addd4 req-e65be73c-e880-4aff-bb11-19f8a6ee974b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:50 np0005466030 nova_compute[230518]: 2025-10-02 12:18:50.746 2 WARNING nova.compute.manager [req-d39bcfee-61fb-4268-8f79-f981b81addd4 req-e65be73c-e880-4aff-bb11-19f8a6ee974b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received unexpected event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:18:50 np0005466030 podman[245772]: 2025-10-02 12:18:50.798203185 +0000 UTC m=+0.891336009 container create 8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:18:51 np0005466030 systemd[1]: Started libpod-conmon-8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548.scope.
Oct  2 08:18:51 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:18:51 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d55dcfce4eabfae633f16d7a1060e2abf279c1e0a7770e612687fe926476f20b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:18:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:51.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:18:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:51.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:18:51 np0005466030 podman[245772]: 2025-10-02 12:18:51.478549001 +0000 UTC m=+1.571681855 container init 8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:18:51 np0005466030 podman[245772]: 2025-10-02 12:18:51.486630506 +0000 UTC m=+1.579763300 container start 8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 08:18:51 np0005466030 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245787]: [NOTICE]   (245791) : New worker (245793) forked
Oct  2 08:18:51 np0005466030 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245787]: [NOTICE]   (245791) : Loading success.
Oct  2 08:18:51 np0005466030 nova_compute[230518]: 2025-10-02 12:18:51.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:52 np0005466030 nova_compute[230518]: 2025-10-02 12:18:52.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:53.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:53.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:54 np0005466030 nova_compute[230518]: 2025-10-02 12:18:54.269 2 DEBUG nova.compute.manager [req-ea32a061-349a-414c-a6e8-ebb4c2ca11fe req-b9042025-806e-46ae-a889-d9adda6fbd15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-changed-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:54 np0005466030 nova_compute[230518]: 2025-10-02 12:18:54.270 2 DEBUG nova.compute.manager [req-ea32a061-349a-414c-a6e8-ebb4c2ca11fe req-b9042025-806e-46ae-a889-d9adda6fbd15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Refreshing instance network info cache due to event network-changed-02efafc4-ff2d-47ca-98bd-8e608e9980b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:18:54 np0005466030 nova_compute[230518]: 2025-10-02 12:18:54.270 2 DEBUG oslo_concurrency.lockutils [req-ea32a061-349a-414c-a6e8-ebb4c2ca11fe req-b9042025-806e-46ae-a889-d9adda6fbd15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:18:54 np0005466030 nova_compute[230518]: 2025-10-02 12:18:54.270 2 DEBUG oslo_concurrency.lockutils [req-ea32a061-349a-414c-a6e8-ebb4c2ca11fe req-b9042025-806e-46ae-a889-d9adda6fbd15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:18:54 np0005466030 nova_compute[230518]: 2025-10-02 12:18:54.271 2 DEBUG nova.network.neutron [req-ea32a061-349a-414c-a6e8-ebb4c2ca11fe req-b9042025-806e-46ae-a889-d9adda6fbd15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Refreshing network info cache for port 02efafc4-ff2d-47ca-98bd-8e608e9980b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:18:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:55.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:55.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:55 np0005466030 nova_compute[230518]: 2025-10-02 12:18:55.433 2 DEBUG oslo_concurrency.lockutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:55 np0005466030 nova_compute[230518]: 2025-10-02 12:18:55.434 2 DEBUG oslo_concurrency.lockutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:55 np0005466030 nova_compute[230518]: 2025-10-02 12:18:55.435 2 DEBUG oslo_concurrency.lockutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:55 np0005466030 nova_compute[230518]: 2025-10-02 12:18:55.435 2 DEBUG oslo_concurrency.lockutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:55 np0005466030 nova_compute[230518]: 2025-10-02 12:18:55.436 2 DEBUG oslo_concurrency.lockutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:55 np0005466030 nova_compute[230518]: 2025-10-02 12:18:55.437 2 INFO nova.compute.manager [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Terminating instance#033[00m
Oct  2 08:18:55 np0005466030 nova_compute[230518]: 2025-10-02 12:18:55.439 2 DEBUG nova.compute.manager [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:18:55 np0005466030 kernel: tap02efafc4-ff (unregistering): left promiscuous mode
Oct  2 08:18:55 np0005466030 NetworkManager[44960]: <info>  [1759407535.7162] device (tap02efafc4-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:18:55 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:55Z|00162|binding|INFO|Releasing lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 from this chassis (sb_readonly=0)
Oct  2 08:18:55 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:55Z|00163|binding|INFO|Setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 down in Southbound
Oct  2 08:18:55 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:55Z|00164|binding|INFO|Removing iface tap02efafc4-ff ovn-installed in OVS
Oct  2 08:18:55 np0005466030 nova_compute[230518]: 2025-10-02 12:18:55.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:55 np0005466030 nova_compute[230518]: 2025-10-02 12:18:55.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:55 np0005466030 nova_compute[230518]: 2025-10-02 12:18:55.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:55 np0005466030 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000023.scope: Deactivated successfully.
Oct  2 08:18:55 np0005466030 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000023.scope: Consumed 6.893s CPU time.
Oct  2 08:18:55 np0005466030 systemd-machined[188247]: Machine qemu-19-instance-00000023 terminated.
Oct  2 08:18:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:55.817 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:9c:4f 10.100.0.5'], port_security=['fa:16:3e:30:9c:4f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4cc8a5b1-c816-476e-9b8c-1d152d2f57c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0bd0c6232b84d03a010ba8cf85bda46', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1b700ac5-01e6-4854-9f45-080d3952f68d 4436e9fc-9948-4a06-9ade-f6478eba2f67 5d4fd7ed-d165-4a2d-a20a-9d9629a026e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67a70991-7ceb-4648-8df5-18a20c0a36a2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=02efafc4-ff2d-47ca-98bd-8e608e9980b8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:55.819 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 02efafc4-ff2d-47ca-98bd-8e608e9980b8 in datapath cec9cbfc-5dec-4f85-90c5-6104a054547f unbound from our chassis#033[00m
Oct  2 08:18:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:55.820 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cec9cbfc-5dec-4f85-90c5-6104a054547f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:18:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:55.821 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1c76c1fd-06d0-464c-bd35-a03eec057c5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:55.821 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f namespace which is not needed anymore#033[00m
Oct  2 08:18:55 np0005466030 systemd-udevd[245861]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:18:55 np0005466030 kernel: tap02efafc4-ff: entered promiscuous mode
Oct  2 08:18:55 np0005466030 NetworkManager[44960]: <info>  [1759407535.8632] manager: (tap02efafc4-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Oct  2 08:18:55 np0005466030 nova_compute[230518]: 2025-10-02 12:18:55.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:55 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:55Z|00165|binding|INFO|Claiming lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 for this chassis.
Oct  2 08:18:55 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:55Z|00166|binding|INFO|02efafc4-ff2d-47ca-98bd-8e608e9980b8: Claiming fa:16:3e:30:9c:4f 10.100.0.5
Oct  2 08:18:55 np0005466030 kernel: tap02efafc4-ff (unregistering): left promiscuous mode
Oct  2 08:18:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:55.888 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:9c:4f 10.100.0.5'], port_security=['fa:16:3e:30:9c:4f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4cc8a5b1-c816-476e-9b8c-1d152d2f57c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0bd0c6232b84d03a010ba8cf85bda46', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1b700ac5-01e6-4854-9f45-080d3952f68d 4436e9fc-9948-4a06-9ade-f6478eba2f67 5d4fd7ed-d165-4a2d-a20a-9d9629a026e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67a70991-7ceb-4648-8df5-18a20c0a36a2, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=02efafc4-ff2d-47ca-98bd-8e608e9980b8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:55 np0005466030 nova_compute[230518]: 2025-10-02 12:18:55.898 2 INFO nova.virt.libvirt.driver [-] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Instance destroyed successfully.#033[00m
Oct  2 08:18:55 np0005466030 nova_compute[230518]: 2025-10-02 12:18:55.899 2 DEBUG nova.objects.instance [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lazy-loading 'resources' on Instance uuid 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:18:55 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:55Z|00167|binding|INFO|Setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 ovn-installed in OVS
Oct  2 08:18:55 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:55Z|00168|binding|INFO|Setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 up in Southbound
Oct  2 08:18:55 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:55Z|00169|binding|INFO|Releasing lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 from this chassis (sb_readonly=1)
Oct  2 08:18:55 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:55Z|00170|binding|INFO|Removing iface tap02efafc4-ff ovn-installed in OVS
Oct  2 08:18:55 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:55Z|00171|if_status|INFO|Not setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 down as sb is readonly
Oct  2 08:18:55 np0005466030 nova_compute[230518]: 2025-10-02 12:18:55.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:55 np0005466030 nova_compute[230518]: 2025-10-02 12:18:55.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:55 np0005466030 podman[245912]: 2025-10-02 12:18:55.994073215 +0000 UTC m=+0.116161471 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:18:56 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:56Z|00172|binding|INFO|Releasing lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 from this chassis (sb_readonly=0)
Oct  2 08:18:56 np0005466030 ovn_controller[129257]: 2025-10-02T12:18:56Z|00173|binding|INFO|Setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 down in Southbound
Oct  2 08:18:56 np0005466030 nova_compute[230518]: 2025-10-02 12:18:56.043 2 DEBUG nova.virt.libvirt.vif [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:18:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-488412587',display_name='tempest-SecurityGroupsTestJSON-server-488412587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-488412587',id=35,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:18:33Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f0bd0c6232b84d03a010ba8cf85bda46',ramdisk_id='',reservation_id='r-1xmq2s07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1241678427',owner_user_name='tempest-SecurityGroupsTestJSON-1241678427-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:18:50Z,user_data=None,user_id='2ed8b6a2129742dfb3b8a0d9f044ac24',uuid=4cc8a5b1-c816-476e-9b8c-1d152d2f57c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:18:56 np0005466030 nova_compute[230518]: 2025-10-02 12:18:56.043 2 DEBUG nova.network.os_vif_util [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converting VIF {"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:18:56 np0005466030 nova_compute[230518]: 2025-10-02 12:18:56.044 2 DEBUG nova.network.os_vif_util [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:18:56 np0005466030 nova_compute[230518]: 2025-10-02 12:18:56.044 2 DEBUG os_vif [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:18:56 np0005466030 nova_compute[230518]: 2025-10-02 12:18:56.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:56 np0005466030 nova_compute[230518]: 2025-10-02 12:18:56.047 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02efafc4-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:56 np0005466030 nova_compute[230518]: 2025-10-02 12:18:56.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:56 np0005466030 nova_compute[230518]: 2025-10-02 12:18:56.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:56 np0005466030 nova_compute[230518]: 2025-10-02 12:18:56.052 2 INFO os_vif [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff')#033[00m
Oct  2 08:18:56 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:56.067 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:9c:4f 10.100.0.5'], port_security=['fa:16:3e:30:9c:4f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4cc8a5b1-c816-476e-9b8c-1d152d2f57c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0bd0c6232b84d03a010ba8cf85bda46', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1b700ac5-01e6-4854-9f45-080d3952f68d 4436e9fc-9948-4a06-9ade-f6478eba2f67 5d4fd7ed-d165-4a2d-a20a-9d9629a026e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67a70991-7ceb-4648-8df5-18a20c0a36a2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=02efafc4-ff2d-47ca-98bd-8e608e9980b8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:18:56 np0005466030 nova_compute[230518]: 2025-10-02 12:18:56.228 2 DEBUG nova.compute.manager [req-4292b7b0-fb29-41aa-a12f-cebc10e563ab req-d2326b82-c02b-425f-85b5-b1e020b6610d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-unplugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:56 np0005466030 nova_compute[230518]: 2025-10-02 12:18:56.229 2 DEBUG oslo_concurrency.lockutils [req-4292b7b0-fb29-41aa-a12f-cebc10e563ab req-d2326b82-c02b-425f-85b5-b1e020b6610d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:56 np0005466030 nova_compute[230518]: 2025-10-02 12:18:56.229 2 DEBUG oslo_concurrency.lockutils [req-4292b7b0-fb29-41aa-a12f-cebc10e563ab req-d2326b82-c02b-425f-85b5-b1e020b6610d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:56 np0005466030 nova_compute[230518]: 2025-10-02 12:18:56.229 2 DEBUG oslo_concurrency.lockutils [req-4292b7b0-fb29-41aa-a12f-cebc10e563ab req-d2326b82-c02b-425f-85b5-b1e020b6610d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:56 np0005466030 nova_compute[230518]: 2025-10-02 12:18:56.229 2 DEBUG nova.compute.manager [req-4292b7b0-fb29-41aa-a12f-cebc10e563ab req-d2326b82-c02b-425f-85b5-b1e020b6610d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-unplugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:56 np0005466030 nova_compute[230518]: 2025-10-02 12:18:56.230 2 DEBUG nova.compute.manager [req-4292b7b0-fb29-41aa-a12f-cebc10e563ab req-d2326b82-c02b-425f-85b5-b1e020b6610d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-unplugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:18:56 np0005466030 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245787]: [NOTICE]   (245791) : haproxy version is 2.8.14-c23fe91
Oct  2 08:18:56 np0005466030 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245787]: [NOTICE]   (245791) : path to executable is /usr/sbin/haproxy
Oct  2 08:18:56 np0005466030 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245787]: [WARNING]  (245791) : Exiting Master process...
Oct  2 08:18:56 np0005466030 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245787]: [WARNING]  (245791) : Exiting Master process...
Oct  2 08:18:56 np0005466030 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245787]: [ALERT]    (245791) : Current worker (245793) exited with code 143 (Terminated)
Oct  2 08:18:56 np0005466030 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245787]: [WARNING]  (245791) : All workers exited. Exiting... (0)
Oct  2 08:18:56 np0005466030 systemd[1]: libpod-8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548.scope: Deactivated successfully.
Oct  2 08:18:56 np0005466030 podman[245951]: 2025-10-02 12:18:56.25576907 +0000 UTC m=+0.270575066 container died 8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 08:18:56 np0005466030 nova_compute[230518]: 2025-10-02 12:18:56.376 2 DEBUG nova.network.neutron [req-ea32a061-349a-414c-a6e8-ebb4c2ca11fe req-b9042025-806e-46ae-a889-d9adda6fbd15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updated VIF entry in instance network info cache for port 02efafc4-ff2d-47ca-98bd-8e608e9980b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:18:56 np0005466030 nova_compute[230518]: 2025-10-02 12:18:56.376 2 DEBUG nova.network.neutron [req-ea32a061-349a-414c-a6e8-ebb4c2ca11fe req-b9042025-806e-46ae-a889-d9adda6fbd15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updating instance_info_cache with network_info: [{"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:18:56 np0005466030 nova_compute[230518]: 2025-10-02 12:18:56.419 2 DEBUG oslo_concurrency.lockutils [req-ea32a061-349a-414c-a6e8-ebb4c2ca11fe req-b9042025-806e-46ae-a889-d9adda6fbd15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:18:56 np0005466030 podman[245921]: 2025-10-02 12:18:56.859848744 +0000 UTC m=+0.977629555 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:18:56 np0005466030 nova_compute[230518]: 2025-10-02 12:18:56.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:57 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548-userdata-shm.mount: Deactivated successfully.
Oct  2 08:18:57 np0005466030 systemd[1]: var-lib-containers-storage-overlay-d55dcfce4eabfae633f16d7a1060e2abf279c1e0a7770e612687fe926476f20b-merged.mount: Deactivated successfully.
Oct  2 08:18:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:18:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:57.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:18:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:57.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:57 np0005466030 podman[245951]: 2025-10-02 12:18:57.613607672 +0000 UTC m=+1.628413638 container cleanup 8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 08:18:57 np0005466030 systemd[1]: libpod-conmon-8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548.scope: Deactivated successfully.
Oct  2 08:18:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.415 2 DEBUG nova.compute.manager [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.416 2 DEBUG oslo_concurrency.lockutils [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.416 2 DEBUG oslo_concurrency.lockutils [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.416 2 DEBUG oslo_concurrency.lockutils [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.417 2 DEBUG nova.compute.manager [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.417 2 WARNING nova.compute.manager [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received unexpected event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.417 2 DEBUG nova.compute.manager [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.417 2 DEBUG oslo_concurrency.lockutils [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.418 2 DEBUG oslo_concurrency.lockutils [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.418 2 DEBUG oslo_concurrency.lockutils [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.418 2 DEBUG nova.compute.manager [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.418 2 WARNING nova.compute.manager [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received unexpected event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.419 2 DEBUG nova.compute.manager [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.419 2 DEBUG oslo_concurrency.lockutils [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.419 2 DEBUG oslo_concurrency.lockutils [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.420 2 DEBUG oslo_concurrency.lockutils [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.420 2 DEBUG nova.compute.manager [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.420 2 WARNING nova.compute.manager [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received unexpected event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:18:58 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:18:58 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:18:58 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:18:58 np0005466030 podman[246046]: 2025-10-02 12:18:58.788644025 +0000 UTC m=+1.148503438 container remove 8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:18:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.795 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d4cdc392-6508-42f9-9c8d-173f705620ae]: (4, ('Thu Oct  2 12:18:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f (8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548)\n8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548\nThu Oct  2 12:18:57 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f (8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548)\n8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.796 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[42db7bf5-e999-4469-b1c7-bdaeee7c6fb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.797 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcec9cbfc-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:58 np0005466030 kernel: tapcec9cbfc-50: left promiscuous mode
Oct  2 08:18:58 np0005466030 nova_compute[230518]: 2025-10-02 12:18:58.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:18:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.826 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5ae29cd2-0df0-4ff3-b83a-3f309ddbc544]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.855 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c10c89a2-2243-4584-ad76-c8b85032b56c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.857 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7e71e0dd-7d2f-423d-a230-03a27c215a9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.874 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4be838-3d27-449a-aff6-466a95e2448c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539281, 'reachable_time': 29181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246060, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.877 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:18:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.877 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[a31c0275-660a-4167-98cf-24a3f083216f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.878 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 02efafc4-ff2d-47ca-98bd-8e608e9980b8 in datapath cec9cbfc-5dec-4f85-90c5-6104a054547f unbound from our chassis#033[00m
Oct  2 08:18:58 np0005466030 systemd[1]: run-netns-ovnmeta\x2dcec9cbfc\x2d5dec\x2d4f85\x2d90c5\x2d6104a054547f.mount: Deactivated successfully.
Oct  2 08:18:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.879 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cec9cbfc-5dec-4f85-90c5-6104a054547f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:18:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.879 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[358a68e5-3e5c-4a18-ad84-52e8302c4e05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.880 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 02efafc4-ff2d-47ca-98bd-8e608e9980b8 in datapath cec9cbfc-5dec-4f85-90c5-6104a054547f unbound from our chassis#033[00m
Oct  2 08:18:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.880 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cec9cbfc-5dec-4f85-90c5-6104a054547f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:18:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.881 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[86259736-5648-485c-856b-5734da8f5800]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:18:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:59.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:18:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:18:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:18:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:59.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:00 np0005466030 nova_compute[230518]: 2025-10-02 12:19:00.691 2 DEBUG nova.compute.manager [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-unplugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:19:00 np0005466030 nova_compute[230518]: 2025-10-02 12:19:00.692 2 DEBUG oslo_concurrency.lockutils [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:00 np0005466030 nova_compute[230518]: 2025-10-02 12:19:00.692 2 DEBUG oslo_concurrency.lockutils [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:00 np0005466030 nova_compute[230518]: 2025-10-02 12:19:00.692 2 DEBUG oslo_concurrency.lockutils [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:00 np0005466030 nova_compute[230518]: 2025-10-02 12:19:00.692 2 DEBUG nova.compute.manager [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-unplugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:19:00 np0005466030 nova_compute[230518]: 2025-10-02 12:19:00.693 2 DEBUG nova.compute.manager [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-unplugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:19:00 np0005466030 nova_compute[230518]: 2025-10-02 12:19:00.693 2 DEBUG nova.compute.manager [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:19:00 np0005466030 nova_compute[230518]: 2025-10-02 12:19:00.693 2 DEBUG oslo_concurrency.lockutils [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:00 np0005466030 nova_compute[230518]: 2025-10-02 12:19:00.693 2 DEBUG oslo_concurrency.lockutils [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:00 np0005466030 nova_compute[230518]: 2025-10-02 12:19:00.693 2 DEBUG oslo_concurrency.lockutils [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:00 np0005466030 nova_compute[230518]: 2025-10-02 12:19:00.693 2 DEBUG nova.compute.manager [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:19:00 np0005466030 nova_compute[230518]: 2025-10-02 12:19:00.693 2 WARNING nova.compute.manager [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received unexpected event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:19:01 np0005466030 nova_compute[230518]: 2025-10-02 12:19:01.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:01.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:19:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:01.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:19:02 np0005466030 nova_compute[230518]: 2025-10-02 12:19:01.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:02 np0005466030 nova_compute[230518]: 2025-10-02 12:19:02.241 2 INFO nova.virt.libvirt.driver [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Deleting instance files /var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_del#033[00m
Oct  2 08:19:02 np0005466030 nova_compute[230518]: 2025-10-02 12:19:02.242 2 INFO nova.virt.libvirt.driver [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Deletion of /var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_del complete#033[00m
Oct  2 08:19:02 np0005466030 nova_compute[230518]: 2025-10-02 12:19:02.288 2 INFO nova.compute.manager [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Took 6.85 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:19:02 np0005466030 nova_compute[230518]: 2025-10-02 12:19:02.289 2 DEBUG oslo.service.loopingcall [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:19:02 np0005466030 nova_compute[230518]: 2025-10-02 12:19:02.289 2 DEBUG nova.compute.manager [-] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:19:02 np0005466030 nova_compute[230518]: 2025-10-02 12:19:02.289 2 DEBUG nova.network.neutron [-] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:19:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:03.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:03.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:03 np0005466030 nova_compute[230518]: 2025-10-02 12:19:03.507 2 DEBUG nova.network.neutron [-] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:19:03 np0005466030 nova_compute[230518]: 2025-10-02 12:19:03.542 2 INFO nova.compute.manager [-] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Took 1.25 seconds to deallocate network for instance.#033[00m
Oct  2 08:19:03 np0005466030 nova_compute[230518]: 2025-10-02 12:19:03.586 2 DEBUG oslo_concurrency.lockutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:03 np0005466030 nova_compute[230518]: 2025-10-02 12:19:03.586 2 DEBUG oslo_concurrency.lockutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:03 np0005466030 nova_compute[230518]: 2025-10-02 12:19:03.632 2 DEBUG oslo_concurrency.processutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:03 np0005466030 nova_compute[230518]: 2025-10-02 12:19:03.687 2 DEBUG nova.compute.manager [req-aae77d3b-b86f-441c-854c-64d40eb2a223 req-59cbc810-d4d9-495a-979e-ca54af3ad8b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-deleted-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:19:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:19:04 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/439886864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:19:04 np0005466030 nova_compute[230518]: 2025-10-02 12:19:04.068 2 DEBUG oslo_concurrency.processutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:04 np0005466030 nova_compute[230518]: 2025-10-02 12:19:04.076 2 DEBUG nova.compute.provider_tree [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:19:04 np0005466030 nova_compute[230518]: 2025-10-02 12:19:04.097 2 DEBUG nova.scheduler.client.report [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:19:04 np0005466030 nova_compute[230518]: 2025-10-02 12:19:04.121 2 DEBUG oslo_concurrency.lockutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:04 np0005466030 nova_compute[230518]: 2025-10-02 12:19:04.152 2 INFO nova.scheduler.client.report [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Deleted allocations for instance 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1#033[00m
Oct  2 08:19:04 np0005466030 nova_compute[230518]: 2025-10-02 12:19:04.243 2 DEBUG oslo_concurrency.lockutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:19:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:05.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:19:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:05.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:19:05.716 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:19:05 np0005466030 nova_compute[230518]: 2025-10-02 12:19:05.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:19:05.717 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:19:06 np0005466030 nova_compute[230518]: 2025-10-02 12:19:06.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:06 np0005466030 podman[246088]: 2025-10-02 12:19:06.800715738 +0000 UTC m=+0.057379499 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:19:06 np0005466030 podman[246089]: 2025-10-02 12:19:06.803751403 +0000 UTC m=+0.058680469 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  2 08:19:06 np0005466030 nova_compute[230518]: 2025-10-02 12:19:06.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:07.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:07.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:09.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:09.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:10 np0005466030 nova_compute[230518]: 2025-10-02 12:19:10.896 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407535.8948076, 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:19:10 np0005466030 nova_compute[230518]: 2025-10-02 12:19:10.896 2 INFO nova.compute.manager [-] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:19:10 np0005466030 nova_compute[230518]: 2025-10-02 12:19:10.947 2 DEBUG nova.compute.manager [None req-2d379b19-6c12-41c1-900d-7b53ba97d571 - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:19:11 np0005466030 nova_compute[230518]: 2025-10-02 12:19:11.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:11.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:11.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:12 np0005466030 nova_compute[230518]: 2025-10-02 12:19:12.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:13.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:13.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:13 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:19:13 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:19:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:15.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:15.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:15 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:19:15.718 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:16 np0005466030 nova_compute[230518]: 2025-10-02 12:19:16.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:16 np0005466030 nova_compute[230518]: 2025-10-02 12:19:16.256 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Acquiring lock "9e5efd35-44d3-4665-b150-1936a55a5460" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:16 np0005466030 nova_compute[230518]: 2025-10-02 12:19:16.256 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "9e5efd35-44d3-4665-b150-1936a55a5460" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:16 np0005466030 nova_compute[230518]: 2025-10-02 12:19:16.413 2 DEBUG nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:19:16 np0005466030 nova_compute[230518]: 2025-10-02 12:19:16.575 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:16 np0005466030 nova_compute[230518]: 2025-10-02 12:19:16.575 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:16 np0005466030 nova_compute[230518]: 2025-10-02 12:19:16.582 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:19:16 np0005466030 nova_compute[230518]: 2025-10-02 12:19:16.583 2 INFO nova.compute.claims [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:19:17 np0005466030 nova_compute[230518]: 2025-10-02 12:19:17.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:17 np0005466030 nova_compute[230518]: 2025-10-02 12:19:17.036 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:17.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:17.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:19:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2889312449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:19:17 np0005466030 nova_compute[230518]: 2025-10-02 12:19:17.502 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:17 np0005466030 nova_compute[230518]: 2025-10-02 12:19:17.509 2 DEBUG nova.compute.provider_tree [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:19:17 np0005466030 nova_compute[230518]: 2025-10-02 12:19:17.579 2 DEBUG nova.scheduler.client.report [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:19:17 np0005466030 nova_compute[230518]: 2025-10-02 12:19:17.864 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:17 np0005466030 nova_compute[230518]: 2025-10-02 12:19:17.865 2 DEBUG nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:19:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:18 np0005466030 nova_compute[230518]: 2025-10-02 12:19:18.135 2 DEBUG nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:19:18 np0005466030 nova_compute[230518]: 2025-10-02 12:19:18.135 2 DEBUG nova.network.neutron [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:19:18 np0005466030 nova_compute[230518]: 2025-10-02 12:19:18.473 2 INFO nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:19:18 np0005466030 nova_compute[230518]: 2025-10-02 12:19:18.710 2 DEBUG nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:19:18 np0005466030 nova_compute[230518]: 2025-10-02 12:19:18.762 2 DEBUG nova.network.neutron [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  2 08:19:18 np0005466030 nova_compute[230518]: 2025-10-02 12:19:18.762 2 DEBUG nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:19:18 np0005466030 nova_compute[230518]: 2025-10-02 12:19:18.980 2 DEBUG nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:19:18 np0005466030 nova_compute[230518]: 2025-10-02 12:19:18.981 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:19:18 np0005466030 nova_compute[230518]: 2025-10-02 12:19:18.982 2 INFO nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Creating image(s)#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.009 2 DEBUG nova.storage.rbd_utils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] rbd image 9e5efd35-44d3-4665-b150-1936a55a5460_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.034 2 DEBUG nova.storage.rbd_utils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] rbd image 9e5efd35-44d3-4665-b150-1936a55a5460_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.058 2 DEBUG nova.storage.rbd_utils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] rbd image 9e5efd35-44d3-4665-b150-1936a55a5460_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.064 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.126 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.127 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.128 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.128 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.151 2 DEBUG nova.storage.rbd_utils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] rbd image 9e5efd35-44d3-4665-b150-1936a55a5460_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.153 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 9e5efd35-44d3-4665-b150-1936a55a5460_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:19.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:19.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.473 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 9e5efd35-44d3-4665-b150-1936a55a5460_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.538 2 DEBUG nova.storage.rbd_utils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] resizing rbd image 9e5efd35-44d3-4665-b150-1936a55a5460_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.651 2 DEBUG nova.objects.instance [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lazy-loading 'migration_context' on Instance uuid 9e5efd35-44d3-4665-b150-1936a55a5460 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.702 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.702 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Ensure instance console log exists: /var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.703 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.703 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.703 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.704 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.709 2 WARNING nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.713 2 DEBUG nova.virt.libvirt.host [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.714 2 DEBUG nova.virt.libvirt.host [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.717 2 DEBUG nova.virt.libvirt.host [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.718 2 DEBUG nova.virt.libvirt.host [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.719 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.719 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.719 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.720 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.720 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.720 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.720 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.720 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.721 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.721 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.721 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.722 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:19:19 np0005466030 nova_compute[230518]: 2025-10-02 12:19:19.725 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:19:20 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2621466651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:19:20 np0005466030 nova_compute[230518]: 2025-10-02 12:19:20.226 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:20 np0005466030 nova_compute[230518]: 2025-10-02 12:19:20.252 2 DEBUG nova.storage.rbd_utils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] rbd image 9e5efd35-44d3-4665-b150-1936a55a5460_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:20 np0005466030 nova_compute[230518]: 2025-10-02 12:19:20.256 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:19:20 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3271792050' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:19:20 np0005466030 nova_compute[230518]: 2025-10-02 12:19:20.689 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:20 np0005466030 nova_compute[230518]: 2025-10-02 12:19:20.691 2 DEBUG nova.objects.instance [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lazy-loading 'pci_devices' on Instance uuid 9e5efd35-44d3-4665-b150-1936a55a5460 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:19:20 np0005466030 nova_compute[230518]: 2025-10-02 12:19:20.999 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:19:21 np0005466030 nova_compute[230518]:  <uuid>9e5efd35-44d3-4665-b150-1936a55a5460</uuid>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:  <name>instance-00000026</name>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerDiagnosticsTest-server-783920138</nova:name>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:19:19</nova:creationTime>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:19:21 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:        <nova:user uuid="6d3f08c1bc2844488f5d3fcd1622dd59">tempest-ServerDiagnosticsTest-710539956-project-member</nova:user>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:        <nova:project uuid="aadd85819f8e46b8b2dc6524e9e6fbad">tempest-ServerDiagnosticsTest-710539956</nova:project>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <nova:ports/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <entry name="serial">9e5efd35-44d3-4665-b150-1936a55a5460</entry>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <entry name="uuid">9e5efd35-44d3-4665-b150-1936a55a5460</entry>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/9e5efd35-44d3-4665-b150-1936a55a5460_disk">
Oct  2 08:19:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:19:21 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/9e5efd35-44d3-4665-b150-1936a55a5460_disk.config">
Oct  2 08:19:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:19:21 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460/console.log" append="off"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:19:21 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:19:21 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:19:21 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:19:21 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:19:21 np0005466030 nova_compute[230518]: 2025-10-02 12:19:21.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:19:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:21.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:19:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:19:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:21.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:19:21 np0005466030 nova_compute[230518]: 2025-10-02 12:19:21.875 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:19:21 np0005466030 nova_compute[230518]: 2025-10-02 12:19:21.875 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:19:21 np0005466030 nova_compute[230518]: 2025-10-02 12:19:21.876 2 INFO nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Using config drive#033[00m
Oct  2 08:19:21 np0005466030 nova_compute[230518]: 2025-10-02 12:19:21.911 2 DEBUG nova.storage.rbd_utils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] rbd image 9e5efd35-44d3-4665-b150-1936a55a5460_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:22 np0005466030 nova_compute[230518]: 2025-10-02 12:19:22.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:22 np0005466030 nova_compute[230518]: 2025-10-02 12:19:22.763 2 INFO nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Creating config drive at /var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460/disk.config#033[00m
Oct  2 08:19:22 np0005466030 nova_compute[230518]: 2025-10-02 12:19:22.770 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp374hv1vu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:22 np0005466030 nova_compute[230518]: 2025-10-02 12:19:22.906 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp374hv1vu" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:22 np0005466030 nova_compute[230518]: 2025-10-02 12:19:22.941 2 DEBUG nova.storage.rbd_utils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] rbd image 9e5efd35-44d3-4665-b150-1936a55a5460_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:22 np0005466030 nova_compute[230518]: 2025-10-02 12:19:22.946 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460/disk.config 9e5efd35-44d3-4665-b150-1936a55a5460_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:23 np0005466030 nova_compute[230518]: 2025-10-02 12:19:23.137 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460/disk.config 9e5efd35-44d3-4665-b150-1936a55a5460_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:23 np0005466030 nova_compute[230518]: 2025-10-02 12:19:23.138 2 INFO nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Deleting local config drive /var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460/disk.config because it was imported into RBD.#033[00m
Oct  2 08:19:23 np0005466030 systemd-machined[188247]: New machine qemu-20-instance-00000026.
Oct  2 08:19:23 np0005466030 systemd[1]: Started Virtual Machine qemu-20-instance-00000026.
Oct  2 08:19:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:23.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:23.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.044 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407564.044375, 9e5efd35-44d3-4665-b150-1936a55a5460 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.046 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.049 2 DEBUG nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.049 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.053 2 INFO nova.virt.libvirt.driver [-] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Instance spawned successfully.
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.053 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.077 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.081 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.094 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.095 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.095 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.095 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.096 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.096 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.135 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.135 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407564.0456746, 9e5efd35-44d3-4665-b150-1936a55a5460 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.136 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] VM Started (Lifecycle Event)
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.167 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.170 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.176 2 INFO nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Took 5.19 seconds to spawn the instance on the hypervisor.
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.176 2 DEBUG nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.236 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:19:24 np0005466030 nova_compute[230518]: 2025-10-02 12:19:24.283 2 INFO nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Took 7.74 seconds to build instance.
Oct  2 08:19:25 np0005466030 nova_compute[230518]: 2025-10-02 12:19:25.077 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "9e5efd35-44d3-4665-b150-1936a55a5460" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:19:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:25.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:25.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:19:25.917 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:19:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:19:25.918 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:19:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:19:25.918 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:19:26 np0005466030 nova_compute[230518]: 2025-10-02 12:19:26.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:19:26 np0005466030 podman[246544]: 2025-10-02 12:19:26.807153783 +0000 UTC m=+0.053587920 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:19:27 np0005466030 nova_compute[230518]: 2025-10-02 12:19:27.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:19:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:27.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:27.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:27 np0005466030 podman[246564]: 2025-10-02 12:19:27.815954257 +0000 UTC m=+0.074353343 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:19:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:28 np0005466030 nova_compute[230518]: 2025-10-02 12:19:28.798 2 DEBUG nova.compute.manager [None req-3dafd338-59c0-4be9-8f20-ca68390c7257 6512335913754777991846ad7621947f 4acd8d33638648bf90e723d49a24f77e - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:19:28 np0005466030 nova_compute[230518]: 2025-10-02 12:19:28.801 2 INFO nova.compute.manager [None req-3dafd338-59c0-4be9-8f20-ca68390c7257 6512335913754777991846ad7621947f 4acd8d33638648bf90e723d49a24f77e - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Retrieving diagnostics
Oct  2 08:19:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:29.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:29.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:29 np0005466030 nova_compute[230518]: 2025-10-02 12:19:29.994 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Acquiring lock "9e5efd35-44d3-4665-b150-1936a55a5460" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:19:29 np0005466030 nova_compute[230518]: 2025-10-02 12:19:29.995 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "9e5efd35-44d3-4665-b150-1936a55a5460" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:19:29 np0005466030 nova_compute[230518]: 2025-10-02 12:19:29.995 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Acquiring lock "9e5efd35-44d3-4665-b150-1936a55a5460-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:19:29 np0005466030 nova_compute[230518]: 2025-10-02 12:19:29.996 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "9e5efd35-44d3-4665-b150-1936a55a5460-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:19:29 np0005466030 nova_compute[230518]: 2025-10-02 12:19:29.996 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "9e5efd35-44d3-4665-b150-1936a55a5460-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:19:29 np0005466030 nova_compute[230518]: 2025-10-02 12:19:29.997 2 INFO nova.compute.manager [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Terminating instance
Oct  2 08:19:29 np0005466030 nova_compute[230518]: 2025-10-02 12:19:29.998 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Acquiring lock "refresh_cache-9e5efd35-44d3-4665-b150-1936a55a5460" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:19:29 np0005466030 nova_compute[230518]: 2025-10-02 12:19:29.999 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Acquired lock "refresh_cache-9e5efd35-44d3-4665-b150-1936a55a5460" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:19:29 np0005466030 nova_compute[230518]: 2025-10-02 12:19:29.999 2 DEBUG nova.network.neutron [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:19:30 np0005466030 nova_compute[230518]: 2025-10-02 12:19:30.451 2 DEBUG nova.network.neutron [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:19:30 np0005466030 nova_compute[230518]: 2025-10-02 12:19:30.995 2 DEBUG nova.network.neutron [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:19:31 np0005466030 nova_compute[230518]: 2025-10-02 12:19:31.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:19:31 np0005466030 nova_compute[230518]: 2025-10-02 12:19:31.259 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Releasing lock "refresh_cache-9e5efd35-44d3-4665-b150-1936a55a5460" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:19:31 np0005466030 nova_compute[230518]: 2025-10-02 12:19:31.259 2 DEBUG nova.compute.manager [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 08:19:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:31.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:19:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:31.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:19:31 np0005466030 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000026.scope: Deactivated successfully.
Oct  2 08:19:31 np0005466030 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000026.scope: Consumed 8.099s CPU time.
Oct  2 08:19:31 np0005466030 systemd-machined[188247]: Machine qemu-20-instance-00000026 terminated.
Oct  2 08:19:31 np0005466030 nova_compute[230518]: 2025-10-02 12:19:31.481 2 INFO nova.virt.libvirt.driver [-] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Instance destroyed successfully.
Oct  2 08:19:31 np0005466030 nova_compute[230518]: 2025-10-02 12:19:31.482 2 DEBUG nova.objects.instance [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lazy-loading 'resources' on Instance uuid 9e5efd35-44d3-4665-b150-1936a55a5460 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:19:32 np0005466030 nova_compute[230518]: 2025-10-02 12:19:32.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:19:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:33.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:33.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:34 np0005466030 nova_compute[230518]: 2025-10-02 12:19:34.073 2 INFO nova.virt.libvirt.driver [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Deleting instance files /var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460_del
Oct  2 08:19:34 np0005466030 nova_compute[230518]: 2025-10-02 12:19:34.073 2 INFO nova.virt.libvirt.driver [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Deletion of /var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460_del complete
Oct  2 08:19:34 np0005466030 nova_compute[230518]: 2025-10-02 12:19:34.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:19:34 np0005466030 nova_compute[230518]: 2025-10-02 12:19:34.357 2 INFO nova.compute.manager [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Took 3.10 seconds to destroy the instance on the hypervisor.
Oct  2 08:19:34 np0005466030 nova_compute[230518]: 2025-10-02 12:19:34.357 2 DEBUG oslo.service.loopingcall [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:19:34 np0005466030 nova_compute[230518]: 2025-10-02 12:19:34.358 2 DEBUG nova.compute.manager [-] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:19:34 np0005466030 nova_compute[230518]: 2025-10-02 12:19:34.358 2 DEBUG nova.network.neutron [-] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:19:34 np0005466030 nova_compute[230518]: 2025-10-02 12:19:34.528 2 DEBUG nova.network.neutron [-] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:19:34 np0005466030 nova_compute[230518]: 2025-10-02 12:19:34.649 2 DEBUG nova.network.neutron [-] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:19:34 np0005466030 nova_compute[230518]: 2025-10-02 12:19:34.802 2 INFO nova.compute.manager [-] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Took 0.44 seconds to deallocate network for instance.
Oct  2 08:19:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:19:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:35.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:19:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:35.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:36 np0005466030 nova_compute[230518]: 2025-10-02 12:19:36.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:19:36 np0005466030 nova_compute[230518]: 2025-10-02 12:19:36.112 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:19:36 np0005466030 nova_compute[230518]: 2025-10-02 12:19:36.113 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:19:37 np0005466030 nova_compute[230518]: 2025-10-02 12:19:37.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:19:37 np0005466030 nova_compute[230518]: 2025-10-02 12:19:37.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:19:37 np0005466030 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  2 08:19:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:37.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:37.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:37 np0005466030 podman[246613]: 2025-10-02 12:19:37.33238965 +0000 UTC m=+0.059986161 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:19:37 np0005466030 podman[246614]: 2025-10-02 12:19:37.34192341 +0000 UTC m=+0.062929274 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct  2 08:19:38 np0005466030 nova_compute[230518]: 2025-10-02 12:19:38.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:38 np0005466030 nova_compute[230518]: 2025-10-02 12:19:38.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:19:38 np0005466030 nova_compute[230518]: 2025-10-02 12:19:38.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:38 np0005466030 nova_compute[230518]: 2025-10-02 12:19:38.085 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:38 np0005466030 nova_compute[230518]: 2025-10-02 12:19:38.166 2 DEBUG oslo_concurrency.processutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:19:38 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1202750396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:19:38 np0005466030 nova_compute[230518]: 2025-10-02 12:19:38.679 2 DEBUG oslo_concurrency.processutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:38 np0005466030 nova_compute[230518]: 2025-10-02 12:19:38.686 2 DEBUG nova.compute.provider_tree [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:19:38 np0005466030 nova_compute[230518]: 2025-10-02 12:19:38.995 2 DEBUG nova.scheduler.client.report [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:19:39 np0005466030 nova_compute[230518]: 2025-10-02 12:19:39.116 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 3.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:39 np0005466030 nova_compute[230518]: 2025-10-02 12:19:39.118 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 1.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:39 np0005466030 nova_compute[230518]: 2025-10-02 12:19:39.119 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:39 np0005466030 nova_compute[230518]: 2025-10-02 12:19:39.119 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:19:39 np0005466030 nova_compute[230518]: 2025-10-02 12:19:39.119 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:39 np0005466030 nova_compute[230518]: 2025-10-02 12:19:39.286 2 INFO nova.scheduler.client.report [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Deleted allocations for instance 9e5efd35-44d3-4665-b150-1936a55a5460#033[00m
Oct  2 08:19:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:39.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:39.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:39 np0005466030 nova_compute[230518]: 2025-10-02 12:19:39.484 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "9e5efd35-44d3-4665-b150-1936a55a5460" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.489s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:19:39 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3298273475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:19:39 np0005466030 nova_compute[230518]: 2025-10-02 12:19:39.554 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:39 np0005466030 nova_compute[230518]: 2025-10-02 12:19:39.734 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:19:39 np0005466030 nova_compute[230518]: 2025-10-02 12:19:39.735 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4752MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:19:39 np0005466030 nova_compute[230518]: 2025-10-02 12:19:39.735 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:39 np0005466030 nova_compute[230518]: 2025-10-02 12:19:39.735 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:39 np0005466030 nova_compute[230518]: 2025-10-02 12:19:39.854 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:19:39 np0005466030 nova_compute[230518]: 2025-10-02 12:19:39.855 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:19:39 np0005466030 nova_compute[230518]: 2025-10-02 12:19:39.881 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:19:40 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3478166263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:19:40 np0005466030 nova_compute[230518]: 2025-10-02 12:19:40.341 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:40 np0005466030 nova_compute[230518]: 2025-10-02 12:19:40.346 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:19:40 np0005466030 nova_compute[230518]: 2025-10-02 12:19:40.396 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:19:40 np0005466030 nova_compute[230518]: 2025-10-02 12:19:40.548 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:19:40 np0005466030 nova_compute[230518]: 2025-10-02 12:19:40.549 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:41 np0005466030 nova_compute[230518]: 2025-10-02 12:19:41.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:19:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:41.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:19:41 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:41.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:19:41 np0005466030 nova_compute[230518]: 2025-10-02 12:19:41.544 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:41 np0005466030 nova_compute[230518]: 2025-10-02 12:19:41.545 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:41 np0005466030 nova_compute[230518]: 2025-10-02 12:19:41.590 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:41 np0005466030 nova_compute[230518]: 2025-10-02 12:19:41.590 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:19:41 np0005466030 nova_compute[230518]: 2025-10-02 12:19:41.590 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:19:41 np0005466030 nova_compute[230518]: 2025-10-02 12:19:41.619 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:19:41 np0005466030 nova_compute[230518]: 2025-10-02 12:19:41.619 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:41 np0005466030 nova_compute[230518]: 2025-10-02 12:19:41.620 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:41 np0005466030 nova_compute[230518]: 2025-10-02 12:19:41.620 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:42 np0005466030 nova_compute[230518]: 2025-10-02 12:19:42.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:42 np0005466030 nova_compute[230518]: 2025-10-02 12:19:42.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:19:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:43.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:19:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:19:43 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:43.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:19:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:19:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:45.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:19:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:45.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:46 np0005466030 nova_compute[230518]: 2025-10-02 12:19:46.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:46 np0005466030 nova_compute[230518]: 2025-10-02 12:19:46.481 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407571.478988, 9e5efd35-44d3-4665-b150-1936a55a5460 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:19:46 np0005466030 nova_compute[230518]: 2025-10-02 12:19:46.481 2 INFO nova.compute.manager [-] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:19:46 np0005466030 nova_compute[230518]: 2025-10-02 12:19:46.511 2 DEBUG nova.compute.manager [None req-cee3fc9e-fab3-40dd-ad19-b53b5c4b8d3d - - - - - -] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:19:47 np0005466030 nova_compute[230518]: 2025-10-02 12:19:47.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:47.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:47.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:19:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:49.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:19:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:49.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:49 np0005466030 nova_compute[230518]: 2025-10-02 12:19:49.949 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "94bf2d68-bf2c-4720-8ede-688ca2b48ce6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:49 np0005466030 nova_compute[230518]: 2025-10-02 12:19:49.950 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "94bf2d68-bf2c-4720-8ede-688ca2b48ce6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:49 np0005466030 nova_compute[230518]: 2025-10-02 12:19:49.977 2 DEBUG nova.compute.manager [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:19:50 np0005466030 nova_compute[230518]: 2025-10-02 12:19:50.054 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:50 np0005466030 nova_compute[230518]: 2025-10-02 12:19:50.054 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:50 np0005466030 nova_compute[230518]: 2025-10-02 12:19:50.064 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:19:50 np0005466030 nova_compute[230518]: 2025-10-02 12:19:50.064 2 INFO nova.compute.claims [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:19:50 np0005466030 nova_compute[230518]: 2025-10-02 12:19:50.213 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:19:50 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1401379172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:19:50 np0005466030 nova_compute[230518]: 2025-10-02 12:19:50.662 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:50 np0005466030 nova_compute[230518]: 2025-10-02 12:19:50.668 2 DEBUG nova.compute.provider_tree [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:19:50 np0005466030 nova_compute[230518]: 2025-10-02 12:19:50.689 2 DEBUG nova.scheduler.client.report [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:19:50 np0005466030 nova_compute[230518]: 2025-10-02 12:19:50.715 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:50 np0005466030 nova_compute[230518]: 2025-10-02 12:19:50.715 2 DEBUG nova.compute.manager [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:19:50 np0005466030 nova_compute[230518]: 2025-10-02 12:19:50.817 2 DEBUG nova.compute.manager [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct  2 08:19:50 np0005466030 nova_compute[230518]: 2025-10-02 12:19:50.842 2 INFO nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:19:50 np0005466030 nova_compute[230518]: 2025-10-02 12:19:50.856 2 DEBUG nova.compute.manager [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:19:50 np0005466030 nova_compute[230518]: 2025-10-02 12:19:50.978 2 DEBUG nova.compute.manager [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:19:50 np0005466030 nova_compute[230518]: 2025-10-02 12:19:50.979 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:19:50 np0005466030 nova_compute[230518]: 2025-10-02 12:19:50.979 2 INFO nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Creating image(s)#033[00m
Oct  2 08:19:51 np0005466030 nova_compute[230518]: 2025-10-02 12:19:51.005 2 DEBUG nova.storage.rbd_utils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:51 np0005466030 nova_compute[230518]: 2025-10-02 12:19:51.032 2 DEBUG nova.storage.rbd_utils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:51 np0005466030 nova_compute[230518]: 2025-10-02 12:19:51.064 2 DEBUG nova.storage.rbd_utils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:51 np0005466030 nova_compute[230518]: 2025-10-02 12:19:51.068 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:51 np0005466030 nova_compute[230518]: 2025-10-02 12:19:51.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:51 np0005466030 nova_compute[230518]: 2025-10-02 12:19:51.139 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:51 np0005466030 nova_compute[230518]: 2025-10-02 12:19:51.140 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:51 np0005466030 nova_compute[230518]: 2025-10-02 12:19:51.141 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:51 np0005466030 nova_compute[230518]: 2025-10-02 12:19:51.141 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:51 np0005466030 nova_compute[230518]: 2025-10-02 12:19:51.165 2 DEBUG nova.storage.rbd_utils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:51 np0005466030 nova_compute[230518]: 2025-10-02 12:19:51.169 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:51.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:51.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.103 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.934s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.174 2 DEBUG nova.storage.rbd_utils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] resizing rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.294 2 DEBUG nova.objects.instance [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'migration_context' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.311 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.311 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Ensure instance console log exists: /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.312 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.312 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.313 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.315 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.320 2 WARNING nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.328 2 DEBUG nova.virt.libvirt.host [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.329 2 DEBUG nova.virt.libvirt.host [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.336 2 DEBUG nova.virt.libvirt.host [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.337 2 DEBUG nova.virt.libvirt.host [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.338 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.338 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.339 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.339 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.339 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.339 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.340 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.340 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.340 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.340 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.340 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.341 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.343 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:19:52 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3840002178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.820 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.853 2 DEBUG nova.storage.rbd_utils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:52 np0005466030 nova_compute[230518]: 2025-10-02 12:19:52.858 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:19:53 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2080843043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:19:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:53.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:53.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:53 np0005466030 nova_compute[230518]: 2025-10-02 12:19:53.341 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:53 np0005466030 nova_compute[230518]: 2025-10-02 12:19:53.343 2 DEBUG nova.objects.instance [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'pci_devices' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:19:53 np0005466030 nova_compute[230518]: 2025-10-02 12:19:53.362 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:19:53 np0005466030 nova_compute[230518]:  <uuid>94bf2d68-bf2c-4720-8ede-688ca2b48ce6</uuid>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:  <name>instance-00000027</name>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServersAdmin275Test-server-2065786220</nova:name>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:19:52</nova:creationTime>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:19:53 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:        <nova:user uuid="00254a66d4364bc0b5d187d008ba5a9a">tempest-ServersAdmin275Test-1864943547-project-member</nova:user>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:        <nova:project uuid="b1871b72e3494da299605236b73c241f">tempest-ServersAdmin275Test-1864943547</nova:project>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <nova:ports/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <entry name="serial">94bf2d68-bf2c-4720-8ede-688ca2b48ce6</entry>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <entry name="uuid">94bf2d68-bf2c-4720-8ede-688ca2b48ce6</entry>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk">
Oct  2 08:19:53 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:19:53 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config">
Oct  2 08:19:53 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:19:53 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/console.log" append="off"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:19:53 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:19:53 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:19:53 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:19:53 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:19:53 np0005466030 nova_compute[230518]: 2025-10-02 12:19:53.507 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:19:53 np0005466030 nova_compute[230518]: 2025-10-02 12:19:53.508 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:19:53 np0005466030 nova_compute[230518]: 2025-10-02 12:19:53.508 2 INFO nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Using config drive#033[00m
Oct  2 08:19:53 np0005466030 nova_compute[230518]: 2025-10-02 12:19:53.615 2 DEBUG nova.storage.rbd_utils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:53 np0005466030 nova_compute[230518]: 2025-10-02 12:19:53.852 2 INFO nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Creating config drive at /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config#033[00m
Oct  2 08:19:53 np0005466030 nova_compute[230518]: 2025-10-02 12:19:53.856 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgi86ip8x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:53 np0005466030 nova_compute[230518]: 2025-10-02 12:19:53.989 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgi86ip8x" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:54 np0005466030 nova_compute[230518]: 2025-10-02 12:19:54.020 2 DEBUG nova.storage.rbd_utils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:54 np0005466030 nova_compute[230518]: 2025-10-02 12:19:54.024 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:54 np0005466030 nova_compute[230518]: 2025-10-02 12:19:54.218 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:54 np0005466030 nova_compute[230518]: 2025-10-02 12:19:54.219 2 INFO nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deleting local config drive /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config because it was imported into RBD.#033[00m
Oct  2 08:19:54 np0005466030 systemd-machined[188247]: New machine qemu-21-instance-00000027.
Oct  2 08:19:54 np0005466030 systemd[1]: Started Virtual Machine qemu-21-instance-00000027.
Oct  2 08:19:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:19:54.346 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:19:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:19:54.347 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:19:54 np0005466030 nova_compute[230518]: 2025-10-02 12:19:54.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:19:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:55.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:19:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:55.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.483 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407595.4834824, 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.484 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.486 2 DEBUG nova.compute.manager [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.487 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.491 2 INFO nova.virt.libvirt.driver [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance spawned successfully.#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.491 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.528 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.534 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.535 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.535 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.535 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.536 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.537 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.541 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.578 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.578 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407595.4860685, 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.579 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] VM Started (Lifecycle Event)#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.613 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.619 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.625 2 INFO nova.compute.manager [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Took 4.65 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.625 2 DEBUG nova.compute.manager [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.638 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.672 2 INFO nova.compute.manager [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Took 5.65 seconds to build instance.#033[00m
Oct  2 08:19:55 np0005466030 nova_compute[230518]: 2025-10-02 12:19:55.690 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "94bf2d68-bf2c-4720-8ede-688ca2b48ce6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:56 np0005466030 nova_compute[230518]: 2025-10-02 12:19:56.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:56 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:19:56.350 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:19:57 np0005466030 nova_compute[230518]: 2025-10-02 12:19:57.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:19:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:57.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:57.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:57 np0005466030 podman[247086]: 2025-10-02 12:19:57.801445737 +0000 UTC m=+0.052356071 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:19:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:19:58 np0005466030 nova_compute[230518]: 2025-10-02 12:19:58.252 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:58 np0005466030 nova_compute[230518]: 2025-10-02 12:19:58.252 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:58 np0005466030 nova_compute[230518]: 2025-10-02 12:19:58.270 2 DEBUG nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:19:58 np0005466030 nova_compute[230518]: 2025-10-02 12:19:58.322 2 INFO nova.compute.manager [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Rebuilding instance#033[00m
Oct  2 08:19:58 np0005466030 nova_compute[230518]: 2025-10-02 12:19:58.374 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:58 np0005466030 nova_compute[230518]: 2025-10-02 12:19:58.374 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:58 np0005466030 nova_compute[230518]: 2025-10-02 12:19:58.383 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:19:58 np0005466030 nova_compute[230518]: 2025-10-02 12:19:58.384 2 INFO nova.compute.claims [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:19:58 np0005466030 nova_compute[230518]: 2025-10-02 12:19:58.510 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:58 np0005466030 nova_compute[230518]: 2025-10-02 12:19:58.569 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:19:58 np0005466030 nova_compute[230518]: 2025-10-02 12:19:58.588 2 DEBUG nova.compute.manager [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:19:58 np0005466030 nova_compute[230518]: 2025-10-02 12:19:58.642 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'pci_requests' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:19:58 np0005466030 nova_compute[230518]: 2025-10-02 12:19:58.665 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'pci_devices' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:19:58 np0005466030 nova_compute[230518]: 2025-10-02 12:19:58.678 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'resources' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:19:58 np0005466030 nova_compute[230518]: 2025-10-02 12:19:58.689 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'migration_context' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:19:58 np0005466030 nova_compute[230518]: 2025-10-02 12:19:58.702 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:19:58 np0005466030 nova_compute[230518]: 2025-10-02 12:19:58.705 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:19:58 np0005466030 podman[247125]: 2025-10-02 12:19:58.849003683 +0000 UTC m=+0.104339428 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 08:19:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:19:58 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3945813759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:19:58 np0005466030 nova_compute[230518]: 2025-10-02 12:19:58.993 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:58 np0005466030 nova_compute[230518]: 2025-10-02 12:19:58.999 2 DEBUG nova.compute.provider_tree [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.018 2 DEBUG nova.scheduler.client.report [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.042 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.042 2 DEBUG nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.089 2 DEBUG nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.090 2 DEBUG nova.network.neutron [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.106 2 INFO nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.124 2 DEBUG nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.232 2 DEBUG nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.233 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.234 2 INFO nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Creating image(s)#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.257 2 DEBUG nova.storage.rbd_utils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.301 2 DEBUG nova.storage.rbd_utils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.325 2 DEBUG nova.storage.rbd_utils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.329 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:19:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:59.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:19:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:19:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:59.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.354 2 DEBUG nova.policy [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'afacfeac9efc4e6fbb83ebe4fe9a8f38', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.395 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.395 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.396 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.396 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.426 2 DEBUG nova.storage.rbd_utils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:19:59 np0005466030 nova_compute[230518]: 2025-10-02 12:19:59.433 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:00 np0005466030 nova_compute[230518]: 2025-10-02 12:20:00.066 2 DEBUG nova.network.neutron [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Successfully created port: 335bf3a1-e291-4896-a8b2-523eb372ebd6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:20:00 np0005466030 nova_compute[230518]: 2025-10-02 12:20:00.097 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:00 np0005466030 nova_compute[230518]: 2025-10-02 12:20:00.165 2 DEBUG nova.storage.rbd_utils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] resizing rbd image 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:20:00 np0005466030 ceph-mon[80926]: overall HEALTH_OK
Oct  2 08:20:00 np0005466030 nova_compute[230518]: 2025-10-02 12:20:00.727 2 DEBUG nova.objects.instance [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'migration_context' on Instance uuid 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:00 np0005466030 nova_compute[230518]: 2025-10-02 12:20:00.750 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:20:00 np0005466030 nova_compute[230518]: 2025-10-02 12:20:00.750 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Ensure instance console log exists: /var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:20:00 np0005466030 nova_compute[230518]: 2025-10-02 12:20:00.751 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:00 np0005466030 nova_compute[230518]: 2025-10-02 12:20:00.751 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:00 np0005466030 nova_compute[230518]: 2025-10-02 12:20:00.751 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:00 np0005466030 nova_compute[230518]: 2025-10-02 12:20:00.954 2 DEBUG nova.network.neutron [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Successfully updated port: 335bf3a1-e291-4896-a8b2-523eb372ebd6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:20:00 np0005466030 nova_compute[230518]: 2025-10-02 12:20:00.987 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "refresh_cache-8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:20:00 np0005466030 nova_compute[230518]: 2025-10-02 12:20:00.988 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquired lock "refresh_cache-8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:20:00 np0005466030 nova_compute[230518]: 2025-10-02 12:20:00.988 2 DEBUG nova.network.neutron [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:20:01 np0005466030 nova_compute[230518]: 2025-10-02 12:20:01.081 2 DEBUG nova.compute.manager [req-0d5215d0-e372-4527-86fb-cd01ebaddcfd req-07ce0b3f-0703-42b0-9d9a-a3a98a4c1433 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Received event network-changed-335bf3a1-e291-4896-a8b2-523eb372ebd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:20:01 np0005466030 nova_compute[230518]: 2025-10-02 12:20:01.082 2 DEBUG nova.compute.manager [req-0d5215d0-e372-4527-86fb-cd01ebaddcfd req-07ce0b3f-0703-42b0-9d9a-a3a98a4c1433 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Refreshing instance network info cache due to event network-changed-335bf3a1-e291-4896-a8b2-523eb372ebd6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:20:01 np0005466030 nova_compute[230518]: 2025-10-02 12:20:01.082 2 DEBUG oslo_concurrency.lockutils [req-0d5215d0-e372-4527-86fb-cd01ebaddcfd req-07ce0b3f-0703-42b0-9d9a-a3a98a4c1433 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:20:01 np0005466030 nova_compute[230518]: 2025-10-02 12:20:01.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:01 np0005466030 nova_compute[230518]: 2025-10-02 12:20:01.286 2 DEBUG nova.network.neutron [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:20:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:20:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:01.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:20:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:20:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:01.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:20:02 np0005466030 nova_compute[230518]: 2025-10-02 12:20:02.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.120 2 DEBUG nova.network.neutron [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Updating instance_info_cache with network_info: [{"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.139 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Releasing lock "refresh_cache-8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.140 2 DEBUG nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Instance network_info: |[{"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.140 2 DEBUG oslo_concurrency.lockutils [req-0d5215d0-e372-4527-86fb-cd01ebaddcfd req-07ce0b3f-0703-42b0-9d9a-a3a98a4c1433 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.141 2 DEBUG nova.network.neutron [req-0d5215d0-e372-4527-86fb-cd01ebaddcfd req-07ce0b3f-0703-42b0-9d9a-a3a98a4c1433 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Refreshing network info cache for port 335bf3a1-e291-4896-a8b2-523eb372ebd6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.144 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Start _get_guest_xml network_info=[{"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.148 2 WARNING nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.152 2 DEBUG nova.virt.libvirt.host [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.153 2 DEBUG nova.virt.libvirt.host [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.156 2 DEBUG nova.virt.libvirt.host [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.156 2 DEBUG nova.virt.libvirt.host [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.158 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.158 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.158 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.159 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.159 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.159 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.159 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.160 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.160 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.160 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.161 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.161 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.163 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:20:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:03.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:20:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:20:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:03.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:20:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:20:03 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/311085604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.619 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.651 2 DEBUG nova.storage.rbd_utils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:20:03 np0005466030 nova_compute[230518]: 2025-10-02 12:20:03.656 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:20:04 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1221744040' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.170 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.171 2 DEBUG nova.virt.libvirt.vif [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1512000059',display_name='tempest-ImagesTestJSON-server-1512000059',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-1512000059',id=42,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-0z7xplik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=TagL
ist,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:19:59Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.172 2 DEBUG nova.network.os_vif_util [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.172 2 DEBUG nova.network.os_vif_util [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:9c:b8,bridge_name='br-int',has_traffic_filtering=True,id=335bf3a1-e291-4896-a8b2-523eb372ebd6,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap335bf3a1-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.173 2 DEBUG nova.objects.instance [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.200 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:20:04 np0005466030 nova_compute[230518]:  <uuid>8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8</uuid>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:  <name>instance-0000002a</name>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <nova:name>tempest-ImagesTestJSON-server-1512000059</nova:name>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:20:03</nova:creationTime>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:20:04 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:        <nova:user uuid="afacfeac9efc4e6fbb83ebe4fe9a8f38">tempest-ImagesTestJSON-1681256609-project-member</nova:user>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:        <nova:project uuid="d0ebb2827cb241e499606ce3a3c67d24">tempest-ImagesTestJSON-1681256609</nova:project>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:        <nova:port uuid="335bf3a1-e291-4896-a8b2-523eb372ebd6">
Oct  2 08:20:04 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <entry name="serial">8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8</entry>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <entry name="uuid">8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8</entry>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk">
Oct  2 08:20:04 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:20:04 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk.config">
Oct  2 08:20:04 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:20:04 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:39:9c:b8"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <target dev="tap335bf3a1-e2"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8/console.log" append="off"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:20:04 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:20:04 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:20:04 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:20:04 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.202 2 DEBUG nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Preparing to wait for external event network-vif-plugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.202 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.203 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.203 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.204 2 DEBUG nova.virt.libvirt.vif [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1512000059',display_name='tempest-ImagesTestJSON-server-1512000059',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-1512000059',id=42,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-0z7xplik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'}
,tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:19:59Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.204 2 DEBUG nova.network.os_vif_util [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.205 2 DEBUG nova.network.os_vif_util [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:9c:b8,bridge_name='br-int',has_traffic_filtering=True,id=335bf3a1-e291-4896-a8b2-523eb372ebd6,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap335bf3a1-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.205 2 DEBUG os_vif [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:9c:b8,bridge_name='br-int',has_traffic_filtering=True,id=335bf3a1-e291-4896-a8b2-523eb372ebd6,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap335bf3a1-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.206 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.207 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.212 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap335bf3a1-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.213 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap335bf3a1-e2, col_values=(('external_ids', {'iface-id': '335bf3a1-e291-4896-a8b2-523eb372ebd6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:9c:b8', 'vm-uuid': '8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:20:04 np0005466030 NetworkManager[44960]: <info>  [1759407604.2151] manager: (tap335bf3a1-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.225 2 INFO os_vif [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:9c:b8,bridge_name='br-int',has_traffic_filtering=True,id=335bf3a1-e291-4896-a8b2-523eb372ebd6,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap335bf3a1-e2')#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.284 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.284 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.285 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No VIF found with MAC fa:16:3e:39:9c:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.285 2 INFO nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Using config drive#033[00m
Oct  2 08:20:04 np0005466030 nova_compute[230518]: 2025-10-02 12:20:04.312 2 DEBUG nova.storage.rbd_utils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:20:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:05.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:05.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:05 np0005466030 nova_compute[230518]: 2025-10-02 12:20:05.818 2 INFO nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Creating config drive at /var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8/disk.config#033[00m
Oct  2 08:20:05 np0005466030 nova_compute[230518]: 2025-10-02 12:20:05.824 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp988o49aw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:05 np0005466030 nova_compute[230518]: 2025-10-02 12:20:05.961 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp988o49aw" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:05 np0005466030 nova_compute[230518]: 2025-10-02 12:20:05.999 2 DEBUG nova.storage.rbd_utils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:20:06 np0005466030 nova_compute[230518]: 2025-10-02 12:20:06.003 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8/disk.config 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:06 np0005466030 nova_compute[230518]: 2025-10-02 12:20:06.472 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8/disk.config 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:06 np0005466030 nova_compute[230518]: 2025-10-02 12:20:06.473 2 INFO nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Deleting local config drive /var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8/disk.config because it was imported into RBD.#033[00m
Oct  2 08:20:06 np0005466030 virtqemud[230067]: End of file while reading data: Input/output error
Oct  2 08:20:06 np0005466030 virtqemud[230067]: End of file while reading data: Input/output error
Oct  2 08:20:06 np0005466030 kernel: tap335bf3a1-e2: entered promiscuous mode
Oct  2 08:20:06 np0005466030 NetworkManager[44960]: <info>  [1759407606.5399] manager: (tap335bf3a1-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Oct  2 08:20:06 np0005466030 nova_compute[230518]: 2025-10-02 12:20:06.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:06 np0005466030 ovn_controller[129257]: 2025-10-02T12:20:06Z|00174|binding|INFO|Claiming lport 335bf3a1-e291-4896-a8b2-523eb372ebd6 for this chassis.
Oct  2 08:20:06 np0005466030 ovn_controller[129257]: 2025-10-02T12:20:06Z|00175|binding|INFO|335bf3a1-e291-4896-a8b2-523eb372ebd6: Claiming fa:16:3e:39:9c:b8 10.100.0.10
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.557 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:9c:b8 10.100.0.10'], port_security=['fa:16:3e:39:9c:b8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=335bf3a1-e291-4896-a8b2-523eb372ebd6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.560 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 335bf3a1-e291-4896-a8b2-523eb372ebd6 in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 bound to our chassis#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.562 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d68ff9e0-aff2-4eda-8590-74da7cfc5671#033[00m
Oct  2 08:20:06 np0005466030 systemd-udevd[247456]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.579 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[49ee8b8a-0574-402f-9d74-2de9bd5141d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.581 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd68ff9e0-a1 in ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.586 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd68ff9e0-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.586 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6ba4e9-644b-4916-a5b6-b626e3d4a788]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.587 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2129728c-7bd2-4104-afd2-64eefe59fe9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:06 np0005466030 systemd-machined[188247]: New machine qemu-22-instance-0000002a.
Oct  2 08:20:06 np0005466030 NetworkManager[44960]: <info>  [1759407606.6010] device (tap335bf3a1-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:20:06 np0005466030 NetworkManager[44960]: <info>  [1759407606.6025] device (tap335bf3a1-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.603 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[3e06c6df-62e9-4023-969b-7f82ee5611f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:06 np0005466030 systemd[1]: Started Virtual Machine qemu-22-instance-0000002a.
Oct  2 08:20:06 np0005466030 ovn_controller[129257]: 2025-10-02T12:20:06Z|00176|binding|INFO|Setting lport 335bf3a1-e291-4896-a8b2-523eb372ebd6 ovn-installed in OVS
Oct  2 08:20:06 np0005466030 ovn_controller[129257]: 2025-10-02T12:20:06Z|00177|binding|INFO|Setting lport 335bf3a1-e291-4896-a8b2-523eb372ebd6 up in Southbound
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.656 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0b5feb-68c9-4efb-bd32-3e17bfd4da75]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:06 np0005466030 nova_compute[230518]: 2025-10-02 12:20:06.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.683 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc17429-5225-4e07-bbf8-8d94bcad76fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.689 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[de1b4535-ef4d-41b5-9ab2-95ee079acfd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:06 np0005466030 NetworkManager[44960]: <info>  [1759407606.6910] manager: (tapd68ff9e0-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/80)
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.727 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a052b617-287d-4ab3-af32-dbd04bdc9975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.729 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[81046983-4baa-4ad4-bacf-26ff44741df1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:06 np0005466030 NetworkManager[44960]: <info>  [1759407606.7527] device (tapd68ff9e0-a0): carrier: link connected
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.758 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f604bd8d-1173-42aa-990d-3a0654d2880b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.776 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[91f68845-7ea1-4ac0-8f20-0f36f5c0c269]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd68ff9e0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547031, 'reachable_time': 22913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247490, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.791 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd83b86-8054-4430-8f3f-81367c28d109]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:d99e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547031, 'tstamp': 547031}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247491, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.810 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f6cfac97-b839-4435-8ba4-06610d1c0a67]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd68ff9e0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547031, 'reachable_time': 22913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 247492, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.839 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e084cbef-0454-4112-b5d3-785f8c3bc30c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:06 np0005466030 nova_compute[230518]: 2025-10-02 12:20:06.862 2 DEBUG nova.network.neutron [req-0d5215d0-e372-4527-86fb-cd01ebaddcfd req-07ce0b3f-0703-42b0-9d9a-a3a98a4c1433 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Updated VIF entry in instance network info cache for port 335bf3a1-e291-4896-a8b2-523eb372ebd6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:20:06 np0005466030 nova_compute[230518]: 2025-10-02 12:20:06.862 2 DEBUG nova.network.neutron [req-0d5215d0-e372-4527-86fb-cd01ebaddcfd req-07ce0b3f-0703-42b0-9d9a-a3a98a4c1433 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Updating instance_info_cache with network_info: [{"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:20:06 np0005466030 nova_compute[230518]: 2025-10-02 12:20:06.892 2 DEBUG oslo_concurrency.lockutils [req-0d5215d0-e372-4527-86fb-cd01ebaddcfd req-07ce0b3f-0703-42b0-9d9a-a3a98a4c1433 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.896 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a4d27444-8323-46ad-a646-45efbb3d716f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.897 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd68ff9e0-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.898 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.898 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd68ff9e0-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:20:06 np0005466030 nova_compute[230518]: 2025-10-02 12:20:06.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:06 np0005466030 kernel: tapd68ff9e0-a0: entered promiscuous mode
Oct  2 08:20:06 np0005466030 NetworkManager[44960]: <info>  [1759407606.9008] manager: (tapd68ff9e0-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Oct  2 08:20:06 np0005466030 nova_compute[230518]: 2025-10-02 12:20:06.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.903 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd68ff9e0-a0, col_values=(('external_ids', {'iface-id': 'c0382cb4-7e26-44bc-8951-80e73f21067a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:20:06 np0005466030 nova_compute[230518]: 2025-10-02 12:20:06.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:06 np0005466030 ovn_controller[129257]: 2025-10-02T12:20:06Z|00178|binding|INFO|Releasing lport c0382cb4-7e26-44bc-8951-80e73f21067a from this chassis (sb_readonly=0)
Oct  2 08:20:06 np0005466030 nova_compute[230518]: 2025-10-02 12:20:06.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:06 np0005466030 nova_compute[230518]: 2025-10-02 12:20:06.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.923 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.924 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a57268aa-3e7e-4a3f-876a-57faa6a7a9ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.925 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:20:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.926 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'env', 'PROCESS_TAG=haproxy-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d68ff9e0-aff2-4eda-8590-74da7cfc5671.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.049 2 DEBUG nova.compute.manager [req-aac2f481-82c1-4175-a72a-125853f476cc req-8510c49e-5f0d-4593-b561-74c9cd640882 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Received event network-vif-plugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.049 2 DEBUG oslo_concurrency.lockutils [req-aac2f481-82c1-4175-a72a-125853f476cc req-8510c49e-5f0d-4593-b561-74c9cd640882 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.050 2 DEBUG oslo_concurrency.lockutils [req-aac2f481-82c1-4175-a72a-125853f476cc req-8510c49e-5f0d-4593-b561-74c9cd640882 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.050 2 DEBUG oslo_concurrency.lockutils [req-aac2f481-82c1-4175-a72a-125853f476cc req-8510c49e-5f0d-4593-b561-74c9cd640882 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.050 2 DEBUG nova.compute.manager [req-aac2f481-82c1-4175-a72a-125853f476cc req-8510c49e-5f0d-4593-b561-74c9cd640882 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Processing event network-vif-plugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:20:07 np0005466030 podman[247543]: 2025-10-02 12:20:07.31126748 +0000 UTC m=+0.067184807 container create 6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:20:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:07.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:20:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:07.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:20:07 np0005466030 systemd[1]: Started libpod-conmon-6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc.scope.
Oct  2 08:20:07 np0005466030 podman[247543]: 2025-10-02 12:20:07.267513291 +0000 UTC m=+0.023430638 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:20:07 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:20:07 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0694d206f98a051cefdbfb537752832811b123300f6179fbe3f8a35b3a48a40/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:20:07 np0005466030 podman[247543]: 2025-10-02 12:20:07.417753386 +0000 UTC m=+0.173670733 container init 6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:20:07 np0005466030 podman[247543]: 2025-10-02 12:20:07.425581423 +0000 UTC m=+0.181498760 container start 6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 08:20:07 np0005466030 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[247566]: [NOTICE]   (247611) : New worker (247624) forked
Oct  2 08:20:07 np0005466030 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[247566]: [NOTICE]   (247611) : Loading success.
Oct  2 08:20:07 np0005466030 podman[247560]: 2025-10-02 12:20:07.457832508 +0000 UTC m=+0.087055533 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:20:07 np0005466030 podman[247568]: 2025-10-02 12:20:07.458545211 +0000 UTC m=+0.082371116 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.935 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407607.9349484, 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.935 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] VM Started (Lifecycle Event)#033[00m
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.937 2 DEBUG nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.941 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.945 2 INFO nova.virt.libvirt.driver [-] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Instance spawned successfully.#033[00m
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.945 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.968 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.974 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.979 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.979 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.980 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.980 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.981 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:07 np0005466030 nova_compute[230518]: 2025-10-02 12:20:07.981 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:08 np0005466030 nova_compute[230518]: 2025-10-02 12:20:08.011 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:20:08 np0005466030 nova_compute[230518]: 2025-10-02 12:20:08.011 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407607.935077, 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:20:08 np0005466030 nova_compute[230518]: 2025-10-02 12:20:08.012 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:20:08 np0005466030 nova_compute[230518]: 2025-10-02 12:20:08.039 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:20:08 np0005466030 nova_compute[230518]: 2025-10-02 12:20:08.042 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407607.9405236, 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:20:08 np0005466030 nova_compute[230518]: 2025-10-02 12:20:08.042 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:20:08 np0005466030 nova_compute[230518]: 2025-10-02 12:20:08.049 2 INFO nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Took 8.82 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:20:08 np0005466030 nova_compute[230518]: 2025-10-02 12:20:08.050 2 DEBUG nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:20:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:08 np0005466030 nova_compute[230518]: 2025-10-02 12:20:08.082 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:20:08 np0005466030 nova_compute[230518]: 2025-10-02 12:20:08.085 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:20:08 np0005466030 nova_compute[230518]: 2025-10-02 12:20:08.115 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:20:08 np0005466030 nova_compute[230518]: 2025-10-02 12:20:08.140 2 INFO nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Took 9.81 seconds to build instance.#033[00m
Oct  2 08:20:08 np0005466030 nova_compute[230518]: 2025-10-02 12:20:08.161 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:08 np0005466030 nova_compute[230518]: 2025-10-02 12:20:08.763 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:20:09 np0005466030 nova_compute[230518]: 2025-10-02 12:20:09.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:09 np0005466030 nova_compute[230518]: 2025-10-02 12:20:09.219 2 DEBUG nova.compute.manager [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Received event network-vif-plugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:20:09 np0005466030 nova_compute[230518]: 2025-10-02 12:20:09.219 2 DEBUG oslo_concurrency.lockutils [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:09 np0005466030 nova_compute[230518]: 2025-10-02 12:20:09.219 2 DEBUG oslo_concurrency.lockutils [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:09 np0005466030 nova_compute[230518]: 2025-10-02 12:20:09.220 2 DEBUG oslo_concurrency.lockutils [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:09 np0005466030 nova_compute[230518]: 2025-10-02 12:20:09.220 2 DEBUG nova.compute.manager [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] No waiting events found dispatching network-vif-plugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:20:09 np0005466030 nova_compute[230518]: 2025-10-02 12:20:09.220 2 WARNING nova.compute.manager [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Received unexpected event network-vif-plugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:20:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:20:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:09.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:20:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:09.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:09 np0005466030 nova_compute[230518]: 2025-10-02 12:20:09.718 2 INFO nova.compute.manager [None req-ce4f0af3-22ed-4d73-ad5b-3699cfa2f252 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Pausing
Oct  2 08:20:09 np0005466030 nova_compute[230518]: 2025-10-02 12:20:09.719 2 DEBUG nova.objects.instance [None req-ce4f0af3-22ed-4d73-ad5b-3699cfa2f252 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'flavor' on Instance uuid 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:20:09 np0005466030 nova_compute[230518]: 2025-10-02 12:20:09.760 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407609.760153, 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:20:09 np0005466030 nova_compute[230518]: 2025-10-02 12:20:09.760 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] VM Paused (Lifecycle Event)
Oct  2 08:20:09 np0005466030 nova_compute[230518]: 2025-10-02 12:20:09.762 2 DEBUG nova.compute.manager [None req-ce4f0af3-22ed-4d73-ad5b-3699cfa2f252 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:20:09 np0005466030 nova_compute[230518]: 2025-10-02 12:20:09.818 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:20:09 np0005466030 nova_compute[230518]: 2025-10-02 12:20:09.821 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:20:09 np0005466030 nova_compute[230518]: 2025-10-02 12:20:09.866 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] During sync_power_state the instance has a pending task (pausing). Skip.
Oct  2 08:20:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:20:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:11.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:20:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:11.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:12 np0005466030 nova_compute[230518]: 2025-10-02 12:20:12.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:20:12 np0005466030 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000027.scope: Deactivated successfully.
Oct  2 08:20:12 np0005466030 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000027.scope: Consumed 14.110s CPU time.
Oct  2 08:20:12 np0005466030 systemd-machined[188247]: Machine qemu-21-instance-00000027 terminated.
Oct  2 08:20:12 np0005466030 nova_compute[230518]: 2025-10-02 12:20:12.792 2 INFO nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance shutdown successfully after 14 seconds.
Oct  2 08:20:12 np0005466030 nova_compute[230518]: 2025-10-02 12:20:12.797 2 INFO nova.virt.libvirt.driver [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance destroyed successfully.
Oct  2 08:20:12 np0005466030 nova_compute[230518]: 2025-10-02 12:20:12.801 2 INFO nova.virt.libvirt.driver [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance destroyed successfully.
Oct  2 08:20:12 np0005466030 nova_compute[230518]: 2025-10-02 12:20:12.821 2 DEBUG nova.compute.manager [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:20:12 np0005466030 nova_compute[230518]: 2025-10-02 12:20:12.906 2 INFO nova.compute.manager [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] instance snapshotting
Oct  2 08:20:12 np0005466030 nova_compute[230518]: 2025-10-02 12:20:12.907 2 WARNING nova.compute.manager [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] trying to snapshot a non-running instance: (state: 3 expected: 1)
Oct  2 08:20:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:13.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:13.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:13 np0005466030 nova_compute[230518]: 2025-10-02 12:20:13.361 2 INFO nova.virt.libvirt.driver [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Beginning live snapshot process
Oct  2 08:20:13 np0005466030 nova_compute[230518]: 2025-10-02 12:20:13.564 2 DEBUG nova.virt.libvirt.imagebackend [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct  2 08:20:13 np0005466030 nova_compute[230518]: 2025-10-02 12:20:13.609 2 INFO nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deleting instance files /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_del
Oct  2 08:20:13 np0005466030 nova_compute[230518]: 2025-10-02 12:20:13.610 2 INFO nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deletion of /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_del complete
Oct  2 08:20:13 np0005466030 podman[247861]: 2025-10-02 12:20:13.716412314 +0000 UTC m=+0.075672796 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 08:20:13 np0005466030 nova_compute[230518]: 2025-10-02 12:20:13.759 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:20:13 np0005466030 nova_compute[230518]: 2025-10-02 12:20:13.760 2 INFO nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Creating image(s)
Oct  2 08:20:13 np0005466030 nova_compute[230518]: 2025-10-02 12:20:13.785 2 DEBUG nova.storage.rbd_utils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:20:13 np0005466030 podman[247861]: 2025-10-02 12:20:13.81404401 +0000 UTC m=+0.173304492 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 08:20:13 np0005466030 nova_compute[230518]: 2025-10-02 12:20:13.827 2 DEBUG nova.storage.rbd_utils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:20:13 np0005466030 nova_compute[230518]: 2025-10-02 12:20:13.864 2 DEBUG nova.storage.rbd_utils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:20:13 np0005466030 nova_compute[230518]: 2025-10-02 12:20:13.872 2 DEBUG oslo_concurrency.lockutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:20:13 np0005466030 nova_compute[230518]: 2025-10-02 12:20:13.873 2 DEBUG oslo_concurrency.lockutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:20:13 np0005466030 nova_compute[230518]: 2025-10-02 12:20:13.879 2 DEBUG nova.storage.rbd_utils [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] creating snapshot(4f69a8e44b9541c089f1a91e2f5a3282) on rbd image(8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct  2 08:20:14 np0005466030 nova_compute[230518]: 2025-10-02 12:20:14.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:20:14 np0005466030 nova_compute[230518]: 2025-10-02 12:20:14.220 2 DEBUG nova.virt.libvirt.imagebackend [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/52ef509e-0e22-464e-93c9-3ddcf574cd64/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/52ef509e-0e22-464e-93c9-3ddcf574cd64/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct  2 08:20:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e170 e170: 3 total, 3 up, 3 in
Oct  2 08:20:14 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:20:14 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:20:15 np0005466030 nova_compute[230518]: 2025-10-02 12:20:15.054 2 DEBUG nova.storage.rbd_utils [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] cloning vms/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk@4f69a8e44b9541c089f1a91e2f5a3282 to images/15e18cef-aa4b-424b-82b1-3a0edfa62ef7 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct  2 08:20:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:15.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:15.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:15 np0005466030 nova_compute[230518]: 2025-10-02 12:20:15.401 2 DEBUG nova.storage.rbd_utils [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] flattening images/15e18cef-aa4b-424b-82b1-3a0edfa62ef7 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct  2 08:20:15 np0005466030 nova_compute[230518]: 2025-10-02 12:20:15.480 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:20:15 np0005466030 nova_compute[230518]: 2025-10-02 12:20:15.563 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.part --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:20:15 np0005466030 nova_compute[230518]: 2025-10-02 12:20:15.564 2 DEBUG nova.virt.images [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] 52ef509e-0e22-464e-93c9-3ddcf574cd64 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct  2 08:20:15 np0005466030 nova_compute[230518]: 2025-10-02 12:20:15.576 2 DEBUG nova.privsep.utils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct  2 08:20:15 np0005466030 nova_compute[230518]: 2025-10-02 12:20:15.576 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.part /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:20:15 np0005466030 nova_compute[230518]: 2025-10-02 12:20:15.767 2 DEBUG nova.storage.rbd_utils [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] removing snapshot(4f69a8e44b9541c089f1a91e2f5a3282) on rbd image(8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct  2 08:20:15 np0005466030 nova_compute[230518]: 2025-10-02 12:20:15.809 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.part /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.converted" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:20:15 np0005466030 nova_compute[230518]: 2025-10-02 12:20:15.813 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:20:15 np0005466030 nova_compute[230518]: 2025-10-02 12:20:15.888 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.converted --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:20:15 np0005466030 nova_compute[230518]: 2025-10-02 12:20:15.890 2 DEBUG oslo_concurrency.lockutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:20:15 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:20:15 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:20:15 np0005466030 nova_compute[230518]: 2025-10-02 12:20:15.918 2 DEBUG nova.storage.rbd_utils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:20:15 np0005466030 nova_compute[230518]: 2025-10-02 12:20:15.925 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.296 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.365 2 DEBUG nova.storage.rbd_utils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] resizing rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.523 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.523 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Ensure instance console log exists: /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.524 2 DEBUG oslo_concurrency.lockutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.524 2 DEBUG oslo_concurrency.lockutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.524 2 DEBUG oslo_concurrency.lockutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.526 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.530 2 WARNING nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.537 2 DEBUG nova.virt.libvirt.host [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.537 2 DEBUG nova.virt.libvirt.host [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.540 2 DEBUG nova.virt.libvirt.host [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.540 2 DEBUG nova.virt.libvirt.host [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.541 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.542 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.542 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.542 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.542 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.542 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.543 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.543 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.543 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.543 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.543 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.544 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.544 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.564 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e171 e171: 3 total, 3 up, 3 in
Oct  2 08:20:16 np0005466030 nova_compute[230518]: 2025-10-02 12:20:16.923 2 DEBUG nova.storage.rbd_utils [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] creating snapshot(snap) on rbd image(15e18cef-aa4b-424b-82b1-3a0edfa62ef7) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:20:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:20:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/405233859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:20:17 np0005466030 nova_compute[230518]: 2025-10-02 12:20:17.021 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:17 np0005466030 nova_compute[230518]: 2025-10-02 12:20:17.048 2 DEBUG nova.storage.rbd_utils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:20:17 np0005466030 nova_compute[230518]: 2025-10-02 12:20:17.052 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:17 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:20:17 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:20:17 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:20:17 np0005466030 nova_compute[230518]: 2025-10-02 12:20:17.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:17.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:17.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:20:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3353938704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:20:17 np0005466030 nova_compute[230518]: 2025-10-02 12:20:17.614 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:17 np0005466030 nova_compute[230518]: 2025-10-02 12:20:17.617 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:20:17 np0005466030 nova_compute[230518]:  <uuid>94bf2d68-bf2c-4720-8ede-688ca2b48ce6</uuid>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:  <name>instance-00000027</name>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServersAdmin275Test-server-2065786220</nova:name>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:20:16</nova:creationTime>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:20:17 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:        <nova:user uuid="00254a66d4364bc0b5d187d008ba5a9a">tempest-ServersAdmin275Test-1864943547-project-member</nova:user>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:        <nova:project uuid="b1871b72e3494da299605236b73c241f">tempest-ServersAdmin275Test-1864943547</nova:project>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="52ef509e-0e22-464e-93c9-3ddcf574cd64"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <nova:ports/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <entry name="serial">94bf2d68-bf2c-4720-8ede-688ca2b48ce6</entry>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <entry name="uuid">94bf2d68-bf2c-4720-8ede-688ca2b48ce6</entry>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk">
Oct  2 08:20:17 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:20:17 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config">
Oct  2 08:20:17 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:20:17 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/console.log" append="off"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:20:17 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:20:17 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:20:17 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:20:17 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:20:17 np0005466030 nova_compute[230518]: 2025-10-02 12:20:17.684 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:20:17 np0005466030 nova_compute[230518]: 2025-10-02 12:20:17.684 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:20:17 np0005466030 nova_compute[230518]: 2025-10-02 12:20:17.685 2 INFO nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Using config drive#033[00m
Oct  2 08:20:17 np0005466030 nova_compute[230518]: 2025-10-02 12:20:17.711 2 DEBUG nova.storage.rbd_utils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:20:17 np0005466030 nova_compute[230518]: 2025-10-02 12:20:17.731 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:17 np0005466030 nova_compute[230518]: 2025-10-02 12:20:17.757 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'keypairs' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e172 e172: 3 total, 3 up, 3 in
Oct  2 08:20:17 np0005466030 nova_compute[230518]: 2025-10-02 12:20:17.960 2 INFO nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Creating config drive at /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config#033[00m
Oct  2 08:20:17 np0005466030 nova_compute[230518]: 2025-10-02 12:20:17.964 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps64v96zx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:18 np0005466030 nova_compute[230518]: 2025-10-02 12:20:18.094 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps64v96zx" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:18 np0005466030 nova_compute[230518]: 2025-10-02 12:20:18.122 2 DEBUG nova.storage.rbd_utils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:20:18 np0005466030 nova_compute[230518]: 2025-10-02 12:20:18.126 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:18 np0005466030 nova_compute[230518]: 2025-10-02 12:20:18.351 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:18 np0005466030 nova_compute[230518]: 2025-10-02 12:20:18.352 2 INFO nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deleting local config drive /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config because it was imported into RBD.#033[00m
Oct  2 08:20:18 np0005466030 systemd-machined[188247]: New machine qemu-23-instance-00000027.
Oct  2 08:20:18 np0005466030 systemd[1]: Started Virtual Machine qemu-23-instance-00000027.
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.267 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.267 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407619.2667367, 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.268 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.270 2 DEBUG nova.compute.manager [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.271 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.274 2 INFO nova.virt.libvirt.driver [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance spawned successfully.#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.275 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.293 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.299 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.303 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.303 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.304 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.304 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.305 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.306 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.319 2 INFO nova.virt.libvirt.driver [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Snapshot image upload complete#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.320 2 INFO nova.compute.manager [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Took 6.41 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.337 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.338 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407619.2676768, 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.338 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] VM Started (Lifecycle Event)#033[00m
Oct  2 08:20:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:19.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:19.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.374 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.377 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.404 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.409 2 DEBUG nova.compute.manager [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.460 2 DEBUG oslo_concurrency.lockutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.461 2 DEBUG oslo_concurrency.lockutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.461 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:20:19 np0005466030 nova_compute[230518]: 2025-10-02 12:20:19.515 2 DEBUG oslo_concurrency.lockutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:20 np0005466030 nova_compute[230518]: 2025-10-02 12:20:20.786 2 INFO nova.compute.manager [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Rebuilding instance#033[00m
Oct  2 08:20:21 np0005466030 nova_compute[230518]: 2025-10-02 12:20:21.108 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e173 e173: 3 total, 3 up, 3 in
Oct  2 08:20:21 np0005466030 nova_compute[230518]: 2025-10-02 12:20:21.284 2 DEBUG nova.compute.manager [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:20:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:21.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:21.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:21 np0005466030 nova_compute[230518]: 2025-10-02 12:20:21.376 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:21 np0005466030 nova_compute[230518]: 2025-10-02 12:20:21.391 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:21 np0005466030 nova_compute[230518]: 2025-10-02 12:20:21.420 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lazy-loading 'resources' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:21 np0005466030 nova_compute[230518]: 2025-10-02 12:20:21.521 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lazy-loading 'migration_context' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:21 np0005466030 nova_compute[230518]: 2025-10-02 12:20:21.551 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:20:21 np0005466030 nova_compute[230518]: 2025-10-02 12:20:21.554 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.094 2 DEBUG oslo_concurrency.lockutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.095 2 DEBUG oslo_concurrency.lockutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.095 2 DEBUG oslo_concurrency.lockutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.095 2 DEBUG oslo_concurrency.lockutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.096 2 DEBUG oslo_concurrency.lockutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.097 2 INFO nova.compute.manager [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Terminating instance#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.098 2 DEBUG nova.compute.manager [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:20:22 np0005466030 kernel: tap335bf3a1-e2 (unregistering): left promiscuous mode
Oct  2 08:20:22 np0005466030 NetworkManager[44960]: <info>  [1759407622.1480] device (tap335bf3a1-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:20:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:20:22Z|00179|binding|INFO|Releasing lport 335bf3a1-e291-4896-a8b2-523eb372ebd6 from this chassis (sb_readonly=0)
Oct  2 08:20:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:20:22Z|00180|binding|INFO|Setting lport 335bf3a1-e291-4896-a8b2-523eb372ebd6 down in Southbound
Oct  2 08:20:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:20:22Z|00181|binding|INFO|Removing iface tap335bf3a1-e2 ovn-installed in OVS
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.200 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:9c:b8 10.100.0.10'], port_security=['fa:16:3e:39:9c:b8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=335bf3a1-e291-4896-a8b2-523eb372ebd6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.201 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 335bf3a1-e291-4896-a8b2-523eb372ebd6 in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 unbound from our chassis#033[00m
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.202 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d68ff9e0-aff2-4eda-8590-74da7cfc5671, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.203 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[519e5f5d-090a-4ac9-bcf6-ff341ef5c43b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.203 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 namespace which is not needed anymore#033[00m
Oct  2 08:20:22 np0005466030 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Oct  2 08:20:22 np0005466030 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000002a.scope: Consumed 2.886s CPU time.
Oct  2 08:20:22 np0005466030 systemd-machined[188247]: Machine qemu-22-instance-0000002a terminated.
Oct  2 08:20:22 np0005466030 kernel: tap335bf3a1-e2: entered promiscuous mode
Oct  2 08:20:22 np0005466030 NetworkManager[44960]: <info>  [1759407622.3194] manager: (tap335bf3a1-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/82)
Oct  2 08:20:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:20:22Z|00182|binding|INFO|Claiming lport 335bf3a1-e291-4896-a8b2-523eb372ebd6 for this chassis.
Oct  2 08:20:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:20:22Z|00183|binding|INFO|335bf3a1-e291-4896-a8b2-523eb372ebd6: Claiming fa:16:3e:39:9c:b8 10.100.0.10
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:22 np0005466030 kernel: tap335bf3a1-e2 (unregistering): left promiscuous mode
Oct  2 08:20:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:20:22Z|00184|binding|INFO|Setting lport 335bf3a1-e291-4896-a8b2-523eb372ebd6 ovn-installed in OVS
Oct  2 08:20:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:20:22Z|00185|if_status|INFO|Dropped 6 log messages in last 86 seconds (most recently, 86 seconds ago) due to excessive rate
Oct  2 08:20:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:20:22Z|00186|if_status|INFO|Not setting lport 335bf3a1-e291-4896-a8b2-523eb372ebd6 down as sb is readonly
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:20:22Z|00187|binding|INFO|Releasing lport 335bf3a1-e291-4896-a8b2-523eb372ebd6 from this chassis (sb_readonly=0)
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.353 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:9c:b8 10.100.0.10'], port_security=['fa:16:3e:39:9c:b8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=335bf3a1-e291-4896-a8b2-523eb372ebd6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.360 2 INFO nova.virt.libvirt.driver [-] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Instance destroyed successfully.#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.361 2 DEBUG nova.objects.instance [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'resources' on Instance uuid 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:22 np0005466030 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[247566]: [NOTICE]   (247611) : haproxy version is 2.8.14-c23fe91
Oct  2 08:20:22 np0005466030 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[247566]: [NOTICE]   (247611) : path to executable is /usr/sbin/haproxy
Oct  2 08:20:22 np0005466030 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[247566]: [WARNING]  (247611) : Exiting Master process...
Oct  2 08:20:22 np0005466030 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[247566]: [ALERT]    (247611) : Current worker (247624) exited with code 143 (Terminated)
Oct  2 08:20:22 np0005466030 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[247566]: [WARNING]  (247611) : All workers exited. Exiting... (0)
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:22 np0005466030 systemd[1]: libpod-6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc.scope: Deactivated successfully.
Oct  2 08:20:22 np0005466030 conmon[247566]: conmon 6a407f851b886758ee1f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc.scope/container/memory.events
Oct  2 08:20:22 np0005466030 podman[248601]: 2025-10-02 12:20:22.38366816 +0000 UTC m=+0.078283448 container died 6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:20:22 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc-userdata-shm.mount: Deactivated successfully.
Oct  2 08:20:22 np0005466030 systemd[1]: var-lib-containers-storage-overlay-e0694d206f98a051cefdbfb537752832811b123300f6179fbe3f8a35b3a48a40-merged.mount: Deactivated successfully.
Oct  2 08:20:22 np0005466030 podman[248601]: 2025-10-02 12:20:22.432043804 +0000 UTC m=+0.126659092 container cleanup 6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:20:22 np0005466030 systemd[1]: libpod-conmon-6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc.scope: Deactivated successfully.
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.453 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:9c:b8 10.100.0.10'], port_security=['fa:16:3e:39:9c:b8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=335bf3a1-e291-4896-a8b2-523eb372ebd6) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.458 2 DEBUG nova.virt.libvirt.vif [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1512000059',display_name='tempest-ImagesTestJSON-server-1512000059',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-1512000059',id=42,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:20:08Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-0z7xplik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram
='0',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:20:19Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.459 2 DEBUG nova.network.os_vif_util [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.460 2 DEBUG nova.network.os_vif_util [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:9c:b8,bridge_name='br-int',has_traffic_filtering=True,id=335bf3a1-e291-4896-a8b2-523eb372ebd6,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap335bf3a1-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.460 2 DEBUG os_vif [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:9c:b8,bridge_name='br-int',has_traffic_filtering=True,id=335bf3a1-e291-4896-a8b2-523eb372ebd6,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap335bf3a1-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.462 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap335bf3a1-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:20:22 np0005466030 podman[248640]: 2025-10-02 12:20:22.501752871 +0000 UTC m=+0.046270899 container remove 6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.513 2 INFO os_vif [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:9c:b8,bridge_name='br-int',has_traffic_filtering=True,id=335bf3a1-e291-4896-a8b2-523eb372ebd6,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap335bf3a1-e2')#033[00m
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.513 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e73f2c28-40b5-4472-a977-ae5c269c1b48]: (4, ('Thu Oct  2 12:20:22 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 (6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc)\n6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc\nThu Oct  2 12:20:22 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 (6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc)\n6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.514 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6edc6f4b-39af-4ebd-aff5-db77a919268d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.515 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd68ff9e0-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:20:22 np0005466030 kernel: tapd68ff9e0-a0: left promiscuous mode
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.535 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d7af132c-75dc-48f8-9776-2f3ccdd37936]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.566 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[52c6fac4-772b-4af5-9d1d-3ead9defefc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.568 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb08769-7661-41be-bf0f-1f85054d9115]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.586 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[76c790db-cbf5-49e1-96f6-bc246e181553]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547023, 'reachable_time': 33919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248671, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:22 np0005466030 systemd[1]: run-netns-ovnmeta\x2dd68ff9e0\x2daff2\x2d4eda\x2d8590\x2d74da7cfc5671.mount: Deactivated successfully.
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.594 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.594 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[75eff08a-52b7-4b26-b7c8-6117ef1b2309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.596 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 335bf3a1-e291-4896-a8b2-523eb372ebd6 in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 unbound from our chassis#033[00m
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.597 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d68ff9e0-aff2-4eda-8590-74da7cfc5671, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.597 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d9c781-0a50-49cc-903e-fb0ba46542e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.598 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 335bf3a1-e291-4896-a8b2-523eb372ebd6 in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 unbound from our chassis#033[00m
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.599 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d68ff9e0-aff2-4eda-8590-74da7cfc5671, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:20:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.599 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0e7b07b1-7f19-402b-a161-b1feaf2e9bd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.637 2 DEBUG nova.compute.manager [req-a616b931-b5dd-4442-b96e-0149fe94ec6e req-f6c13a23-dd35-4bd9-95c2-901c4e17dc5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Received event network-vif-unplugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.638 2 DEBUG oslo_concurrency.lockutils [req-a616b931-b5dd-4442-b96e-0149fe94ec6e req-f6c13a23-dd35-4bd9-95c2-901c4e17dc5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.638 2 DEBUG oslo_concurrency.lockutils [req-a616b931-b5dd-4442-b96e-0149fe94ec6e req-f6c13a23-dd35-4bd9-95c2-901c4e17dc5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.639 2 DEBUG oslo_concurrency.lockutils [req-a616b931-b5dd-4442-b96e-0149fe94ec6e req-f6c13a23-dd35-4bd9-95c2-901c4e17dc5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.639 2 DEBUG nova.compute.manager [req-a616b931-b5dd-4442-b96e-0149fe94ec6e req-f6c13a23-dd35-4bd9-95c2-901c4e17dc5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] No waiting events found dispatching network-vif-unplugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:20:22 np0005466030 nova_compute[230518]: 2025-10-02 12:20:22.639 2 DEBUG nova.compute.manager [req-a616b931-b5dd-4442-b96e-0149fe94ec6e req-f6c13a23-dd35-4bd9-95c2-901c4e17dc5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Received event network-vif-unplugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:20:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:23 np0005466030 nova_compute[230518]: 2025-10-02 12:20:23.114 2 INFO nova.virt.libvirt.driver [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Deleting instance files /var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_del#033[00m
Oct  2 08:20:23 np0005466030 nova_compute[230518]: 2025-10-02 12:20:23.115 2 INFO nova.virt.libvirt.driver [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Deletion of /var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_del complete#033[00m
Oct  2 08:20:23 np0005466030 nova_compute[230518]: 2025-10-02 12:20:23.220 2 INFO nova.compute.manager [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Took 1.12 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:20:23 np0005466030 nova_compute[230518]: 2025-10-02 12:20:23.221 2 DEBUG oslo.service.loopingcall [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:20:23 np0005466030 nova_compute[230518]: 2025-10-02 12:20:23.221 2 DEBUG nova.compute.manager [-] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:20:23 np0005466030 nova_compute[230518]: 2025-10-02 12:20:23.222 2 DEBUG nova.network.neutron [-] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:20:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:20:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:23.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:20:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:23.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.206 2 DEBUG nova.network.neutron [-] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.297 2 DEBUG nova.compute.manager [req-f5c77664-c2d4-4410-a43d-80730dcdb90d req-131b4990-59af-4c86-9078-3b9830ccfdf9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Received event network-vif-deleted-335bf3a1-e291-4896-a8b2-523eb372ebd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.298 2 INFO nova.compute.manager [req-f5c77664-c2d4-4410-a43d-80730dcdb90d req-131b4990-59af-4c86-9078-3b9830ccfdf9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Neutron deleted interface 335bf3a1-e291-4896-a8b2-523eb372ebd6; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.298 2 DEBUG nova.network.neutron [req-f5c77664-c2d4-4410-a43d-80730dcdb90d req-131b4990-59af-4c86-9078-3b9830ccfdf9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.301 2 INFO nova.compute.manager [-] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Took 1.08 seconds to deallocate network for instance.#033[00m
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.334 2 DEBUG nova.compute.manager [req-f5c77664-c2d4-4410-a43d-80730dcdb90d req-131b4990-59af-4c86-9078-3b9830ccfdf9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Detach interface failed, port_id=335bf3a1-e291-4896-a8b2-523eb372ebd6, reason: Instance 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.370 2 DEBUG oslo_concurrency.lockutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.370 2 DEBUG oslo_concurrency.lockutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.388 2 DEBUG nova.scheduler.client.report [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.405 2 DEBUG nova.scheduler.client.report [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.406 2 DEBUG nova.compute.provider_tree [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.421 2 DEBUG nova.scheduler.client.report [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.445 2 DEBUG nova.scheduler.client.report [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.502 2 DEBUG oslo_concurrency.processutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:24 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:20:24 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.726 2 DEBUG nova.compute.manager [req-e3b0fa3e-f39c-4cd4-812d-5c0605982666 req-48af042c-dc46-40ee-b27c-a53a6cbde903 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Received event network-vif-plugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.728 2 DEBUG oslo_concurrency.lockutils [req-e3b0fa3e-f39c-4cd4-812d-5c0605982666 req-48af042c-dc46-40ee-b27c-a53a6cbde903 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.728 2 DEBUG oslo_concurrency.lockutils [req-e3b0fa3e-f39c-4cd4-812d-5c0605982666 req-48af042c-dc46-40ee-b27c-a53a6cbde903 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.728 2 DEBUG oslo_concurrency.lockutils [req-e3b0fa3e-f39c-4cd4-812d-5c0605982666 req-48af042c-dc46-40ee-b27c-a53a6cbde903 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.729 2 DEBUG nova.compute.manager [req-e3b0fa3e-f39c-4cd4-812d-5c0605982666 req-48af042c-dc46-40ee-b27c-a53a6cbde903 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] No waiting events found dispatching network-vif-plugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.729 2 WARNING nova.compute.manager [req-e3b0fa3e-f39c-4cd4-812d-5c0605982666 req-48af042c-dc46-40ee-b27c-a53a6cbde903 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Received unexpected event network-vif-plugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:20:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:20:24 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/13144440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.973 2 DEBUG oslo_concurrency.processutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:24 np0005466030 nova_compute[230518]: 2025-10-02 12:20:24.980 2 DEBUG nova.compute.provider_tree [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:20:25 np0005466030 nova_compute[230518]: 2025-10-02 12:20:25.003 2 DEBUG nova.scheduler.client.report [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:20:25 np0005466030 nova_compute[230518]: 2025-10-02 12:20:25.027 2 DEBUG oslo_concurrency.lockutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:25 np0005466030 nova_compute[230518]: 2025-10-02 12:20:25.051 2 INFO nova.scheduler.client.report [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Deleted allocations for instance 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8#033[00m
Oct  2 08:20:25 np0005466030 nova_compute[230518]: 2025-10-02 12:20:25.115 2 DEBUG oslo_concurrency.lockutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e174 e174: 3 total, 3 up, 3 in
Oct  2 08:20:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:25.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:25.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:25.919 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:25.919 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:20:25.920 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:27 np0005466030 nova_compute[230518]: 2025-10-02 12:20:27.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:27.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:27.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:27 np0005466030 nova_compute[230518]: 2025-10-02 12:20:27.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:28 np0005466030 podman[248747]: 2025-10-02 12:20:28.816320288 +0000 UTC m=+0.063854752 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:20:29 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct  2 08:20:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:29.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:20:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:29.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:20:29 np0005466030 podman[248765]: 2025-10-02 12:20:29.855418649 +0000 UTC m=+0.105224207 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  2 08:20:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e175 e175: 3 total, 3 up, 3 in
Oct  2 08:20:31 np0005466030 nova_compute[230518]: 2025-10-02 12:20:31.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:31 np0005466030 nova_compute[230518]: 2025-10-02 12:20:31.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:20:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:31.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:20:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:31.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:20:31 np0005466030 nova_compute[230518]: 2025-10-02 12:20:31.604 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:20:32 np0005466030 nova_compute[230518]: 2025-10-02 12:20:32.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:32 np0005466030 nova_compute[230518]: 2025-10-02 12:20:32.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:20:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:33.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:20:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:33.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:35.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:20:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:35.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:20:36 np0005466030 nova_compute[230518]: 2025-10-02 12:20:36.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:37 np0005466030 nova_compute[230518]: 2025-10-02 12:20:37.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:37 np0005466030 nova_compute[230518]: 2025-10-02 12:20:37.341 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407622.3388286, 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:20:37 np0005466030 nova_compute[230518]: 2025-10-02 12:20:37.342 2 INFO nova.compute.manager [-] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:20:37 np0005466030 nova_compute[230518]: 2025-10-02 12:20:37.378 2 DEBUG nova.compute.manager [None req-e579a07e-ad87-44a3-a525-40f7e69dbe81 - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:20:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:20:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:37.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:20:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:37.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:37 np0005466030 nova_compute[230518]: 2025-10-02 12:20:37.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:37 np0005466030 podman[248792]: 2025-10-02 12:20:37.803270267 +0000 UTC m=+0.053918099 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Oct  2 08:20:37 np0005466030 podman[248791]: 2025-10-02 12:20:37.803641739 +0000 UTC m=+0.054312452 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:20:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e176 e176: 3 total, 3 up, 3 in
Oct  2 08:20:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:39 np0005466030 nova_compute[230518]: 2025-10-02 12:20:39.137 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:39 np0005466030 nova_compute[230518]: 2025-10-02 12:20:39.137 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:39 np0005466030 nova_compute[230518]: 2025-10-02 12:20:39.138 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:39 np0005466030 nova_compute[230518]: 2025-10-02 12:20:39.138 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:20:39 np0005466030 nova_compute[230518]: 2025-10-02 12:20:39.222 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:39 np0005466030 nova_compute[230518]: 2025-10-02 12:20:39.222 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:39 np0005466030 nova_compute[230518]: 2025-10-02 12:20:39.223 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:39 np0005466030 nova_compute[230518]: 2025-10-02 12:20:39.223 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:20:39 np0005466030 nova_compute[230518]: 2025-10-02 12:20:39.224 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:39.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:20:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:39.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:20:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:20:40 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3271158754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:20:40 np0005466030 nova_compute[230518]: 2025-10-02 12:20:40.073 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.849s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:40 np0005466030 nova_compute[230518]: 2025-10-02 12:20:40.450 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:20:40 np0005466030 nova_compute[230518]: 2025-10-02 12:20:40.451 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:20:40 np0005466030 nova_compute[230518]: 2025-10-02 12:20:40.591 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:20:40 np0005466030 nova_compute[230518]: 2025-10-02 12:20:40.592 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4520MB free_disk=20.876529693603516GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:20:40 np0005466030 nova_compute[230518]: 2025-10-02 12:20:40.592 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:40 np0005466030 nova_compute[230518]: 2025-10-02 12:20:40.592 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:40 np0005466030 nova_compute[230518]: 2025-10-02 12:20:40.838 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:20:40 np0005466030 nova_compute[230518]: 2025-10-02 12:20:40.839 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:20:40 np0005466030 nova_compute[230518]: 2025-10-02 12:20:40.839 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:20:41 np0005466030 nova_compute[230518]: 2025-10-02 12:20:41.067 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:20:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:20:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:41.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:20:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:41.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:20:41 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/672575832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:20:41 np0005466030 nova_compute[230518]: 2025-10-02 12:20:41.560 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:20:41 np0005466030 nova_compute[230518]: 2025-10-02 12:20:41.567 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:20:41 np0005466030 nova_compute[230518]: 2025-10-02 12:20:41.583 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:20:41 np0005466030 nova_compute[230518]: 2025-10-02 12:20:41.606 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 08:20:41 np0005466030 nova_compute[230518]: 2025-10-02 12:20:41.607 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:20:42 np0005466030 nova_compute[230518]: 2025-10-02 12:20:42.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:20:42 np0005466030 nova_compute[230518]: 2025-10-02 12:20:42.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:20:42 np0005466030 nova_compute[230518]: 2025-10-02 12:20:42.521 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:20:42 np0005466030 nova_compute[230518]: 2025-10-02 12:20:42.522 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:20:42 np0005466030 nova_compute[230518]: 2025-10-02 12:20:42.522 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 08:20:42 np0005466030 nova_compute[230518]: 2025-10-02 12:20:42.647 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct  2 08:20:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e177 e177: 3 total, 3 up, 3 in
Oct  2 08:20:43 np0005466030 nova_compute[230518]: 2025-10-02 12:20:43.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:20:43 np0005466030 nova_compute[230518]: 2025-10-02 12:20:43.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:20:43 np0005466030 nova_compute[230518]: 2025-10-02 12:20:43.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 08:20:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:43 np0005466030 nova_compute[230518]: 2025-10-02 12:20:43.141 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-94bf2d68-bf2c-4720-8ede-688ca2b48ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:20:43 np0005466030 nova_compute[230518]: 2025-10-02 12:20:43.141 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-94bf2d68-bf2c-4720-8ede-688ca2b48ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:20:43 np0005466030 nova_compute[230518]: 2025-10-02 12:20:43.141 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  2 08:20:43 np0005466030 nova_compute[230518]: 2025-10-02 12:20:43.141 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:20:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:43.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:43.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:43 np0005466030 nova_compute[230518]: 2025-10-02 12:20:43.482 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:20:43 np0005466030 nova_compute[230518]: 2025-10-02 12:20:43.952 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:20:44 np0005466030 nova_compute[230518]: 2025-10-02 12:20:44.002 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-94bf2d68-bf2c-4720-8ede-688ca2b48ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:20:44 np0005466030 nova_compute[230518]: 2025-10-02 12:20:44.002 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  2 08:20:44 np0005466030 nova_compute[230518]: 2025-10-02 12:20:44.003 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:20:44 np0005466030 nova_compute[230518]: 2025-10-02 12:20:44.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:20:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e178 e178: 3 total, 3 up, 3 in
Oct  2 08:20:45 np0005466030 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000027.scope: Deactivated successfully.
Oct  2 08:20:45 np0005466030 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000027.scope: Consumed 13.815s CPU time.
Oct  2 08:20:45 np0005466030 systemd-machined[188247]: Machine qemu-23-instance-00000027 terminated.
Oct  2 08:20:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:20:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:45.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:20:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:45.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:45 np0005466030 nova_compute[230518]: 2025-10-02 12:20:45.660 2 INFO nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance shutdown successfully after 24 seconds.
Oct  2 08:20:45 np0005466030 nova_compute[230518]: 2025-10-02 12:20:45.668 2 INFO nova.virt.libvirt.driver [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance destroyed successfully.
Oct  2 08:20:45 np0005466030 nova_compute[230518]: 2025-10-02 12:20:45.674 2 INFO nova.virt.libvirt.driver [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance destroyed successfully.
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.164888) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646164917, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2669, "num_deletes": 504, "total_data_size": 5540734, "memory_usage": 5620848, "flush_reason": "Manual Compaction"}
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646178765, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 3266205, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28166, "largest_seqno": 30830, "table_properties": {"data_size": 3256394, "index_size": 5601, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 25651, "raw_average_key_size": 20, "raw_value_size": 3233993, "raw_average_value_size": 2583, "num_data_blocks": 243, "num_entries": 1252, "num_filter_entries": 1252, "num_deletions": 504, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407459, "oldest_key_time": 1759407459, "file_creation_time": 1759407646, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 13932 microseconds, and 6620 cpu microseconds.
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.178819) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 3266205 bytes OK
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.178837) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.179822) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.179835) EVENT_LOG_v1 {"time_micros": 1759407646179831, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.179851) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 5527907, prev total WAL file size 5527907, number of live WAL files 2.
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.181159) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(3189KB)], [57(10MB)]
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646181232, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 14072578, "oldest_snapshot_seqno": -1}
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 5409 keys, 8630159 bytes, temperature: kUnknown
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646288130, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 8630159, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8594211, "index_size": 21310, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 138035, "raw_average_key_size": 25, "raw_value_size": 8497034, "raw_average_value_size": 1570, "num_data_blocks": 857, "num_entries": 5409, "num_filter_entries": 5409, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759407646, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.288516) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 8630159 bytes
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.290096) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.4 rd, 80.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 10.3 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(7.0) write-amplify(2.6) OK, records in: 6417, records dropped: 1008 output_compression: NoCompression
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.290114) EVENT_LOG_v1 {"time_micros": 1759407646290106, "job": 34, "event": "compaction_finished", "compaction_time_micros": 107125, "compaction_time_cpu_micros": 39191, "output_level": 6, "num_output_files": 1, "total_output_size": 8630159, "num_input_records": 6417, "num_output_records": 5409, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646291109, "job": 34, "event": "table_file_deletion", "file_number": 59}
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646293168, "job": 34, "event": "table_file_deletion", "file_number": 57}
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.181051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.293299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.293304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.293305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.293307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:20:46 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.293308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:20:47 np0005466030 nova_compute[230518]: 2025-10-02 12:20:47.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:20:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:47.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:47.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:47 np0005466030 nova_compute[230518]: 2025-10-02 12:20:47.454 2 INFO nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deleting instance files /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_del
Oct  2 08:20:47 np0005466030 nova_compute[230518]: 2025-10-02 12:20:47.456 2 INFO nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deletion of /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_del complete
Oct  2 08:20:47 np0005466030 nova_compute[230518]: 2025-10-02 12:20:47.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:20:47 np0005466030 nova_compute[230518]: 2025-10-02 12:20:47.755 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:20:47 np0005466030 nova_compute[230518]: 2025-10-02 12:20:47.756 2 INFO nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Creating image(s)
Oct  2 08:20:47 np0005466030 nova_compute[230518]: 2025-10-02 12:20:47.786 2 DEBUG nova.storage.rbd_utils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:20:47 np0005466030 nova_compute[230518]: 2025-10-02 12:20:47.813 2 DEBUG nova.storage.rbd_utils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:20:47 np0005466030 nova_compute[230518]: 2025-10-02 12:20:47.837 2 DEBUG nova.storage.rbd_utils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:20:47 np0005466030 nova_compute[230518]: 2025-10-02 12:20:47.841 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:20:47 np0005466030 nova_compute[230518]: 2025-10-02 12:20:47.900 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:20:47 np0005466030 nova_compute[230518]: 2025-10-02 12:20:47.901 2 DEBUG oslo_concurrency.lockutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:20:47 np0005466030 nova_compute[230518]: 2025-10-02 12:20:47.901 2 DEBUG oslo_concurrency.lockutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:20:47 np0005466030 nova_compute[230518]: 2025-10-02 12:20:47.901 2 DEBUG oslo_concurrency.lockutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:20:47 np0005466030 nova_compute[230518]: 2025-10-02 12:20:47.930 2 DEBUG nova.storage.rbd_utils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:20:47 np0005466030 nova_compute[230518]: 2025-10-02 12:20:47.934 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:20:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.607 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.678 2 DEBUG nova.storage.rbd_utils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] resizing rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.950 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.950 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Ensure instance console log exists: /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.951 2 DEBUG oslo_concurrency.lockutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.951 2 DEBUG oslo_concurrency.lockutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.952 2 DEBUG oslo_concurrency.lockutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.953 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.958 2 WARNING nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.965 2 DEBUG nova.virt.libvirt.host [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.966 2 DEBUG nova.virt.libvirt.host [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.973 2 DEBUG nova.virt.libvirt.host [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.974 2 DEBUG nova.virt.libvirt.host [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.975 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.975 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.975 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.976 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.976 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.976 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.976 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.976 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.977 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.977 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.977 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.977 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 08:20:48 np0005466030 nova_compute[230518]: 2025-10-02 12:20:48.978 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:20:49 np0005466030 nova_compute[230518]: 2025-10-02 12:20:49.042 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:20:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:49.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:20:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:49.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:20:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:20:49 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1483081823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:20:49 np0005466030 nova_compute[230518]: 2025-10-02 12:20:49.516 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:20:49 np0005466030 nova_compute[230518]: 2025-10-02 12:20:49.547 2 DEBUG nova.storage.rbd_utils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:20:49 np0005466030 nova_compute[230518]: 2025-10-02 12:20:49.550 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:20:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:20:49 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4104050532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:20:49 np0005466030 nova_compute[230518]: 2025-10-02 12:20:49.988 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:20:49 np0005466030 nova_compute[230518]: 2025-10-02 12:20:49.992 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:20:49 np0005466030 nova_compute[230518]:  <uuid>94bf2d68-bf2c-4720-8ede-688ca2b48ce6</uuid>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:  <name>instance-00000027</name>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServersAdmin275Test-server-2065786220</nova:name>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:20:48</nova:creationTime>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:20:49 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:        <nova:user uuid="00254a66d4364bc0b5d187d008ba5a9a">tempest-ServersAdmin275Test-1864943547-project-member</nova:user>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:        <nova:project uuid="b1871b72e3494da299605236b73c241f">tempest-ServersAdmin275Test-1864943547</nova:project>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <nova:ports/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <entry name="serial">94bf2d68-bf2c-4720-8ede-688ca2b48ce6</entry>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <entry name="uuid">94bf2d68-bf2c-4720-8ede-688ca2b48ce6</entry>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk">
Oct  2 08:20:49 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:20:49 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config">
Oct  2 08:20:49 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:20:49 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/console.log" append="off"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:20:49 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:20:49 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:20:49 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:20:49 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 08:20:50 np0005466030 nova_compute[230518]: 2025-10-02 12:20:50.167 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:20:50 np0005466030 nova_compute[230518]: 2025-10-02 12:20:50.167 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:20:50 np0005466030 nova_compute[230518]: 2025-10-02 12:20:50.168 2 INFO nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Using config drive
Oct  2 08:20:50 np0005466030 nova_compute[230518]: 2025-10-02 12:20:50.195 2 DEBUG nova.storage.rbd_utils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:20:50 np0005466030 nova_compute[230518]: 2025-10-02 12:20:50.227 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:20:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e179 e179: 3 total, 3 up, 3 in
Oct  2 08:20:50 np0005466030 nova_compute[230518]: 2025-10-02 12:20:50.298 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lazy-loading 'keypairs' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:20:50 np0005466030 nova_compute[230518]: 2025-10-02 12:20:50.562 2 INFO nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Creating config drive at /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config
Oct  2 08:20:50 np0005466030 nova_compute[230518]: 2025-10-02 12:20:50.571 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw9ft6f_m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:20:50 np0005466030 nova_compute[230518]: 2025-10-02 12:20:50.703 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw9ft6f_m" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:20:50 np0005466030 nova_compute[230518]: 2025-10-02 12:20:50.739 2 DEBUG nova.storage.rbd_utils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:20:50 np0005466030 nova_compute[230518]: 2025-10-02 12:20:50.742 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:20:50 np0005466030 nova_compute[230518]: 2025-10-02 12:20:50.918 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:20:50 np0005466030 nova_compute[230518]: 2025-10-02 12:20:50.919 2 INFO nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deleting local config drive /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config because it was imported into RBD.
Oct  2 08:20:50 np0005466030 systemd-machined[188247]: New machine qemu-24-instance-00000027.
Oct  2 08:20:50 np0005466030 systemd[1]: Started Virtual Machine qemu-24-instance-00000027.
Oct  2 08:20:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:51.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:51.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:51 np0005466030 nova_compute[230518]: 2025-10-02 12:20:51.642 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct  2 08:20:51 np0005466030 nova_compute[230518]: 2025-10-02 12:20:51.643 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407651.6414962, 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:20:51 np0005466030 nova_compute[230518]: 2025-10-02 12:20:51.644 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] VM Resumed (Lifecycle Event)
Oct  2 08:20:51 np0005466030 nova_compute[230518]: 2025-10-02 12:20:51.650 2 DEBUG nova.compute.manager [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:20:51 np0005466030 nova_compute[230518]: 2025-10-02 12:20:51.650 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:20:51 np0005466030 nova_compute[230518]: 2025-10-02 12:20:51.655 2 INFO nova.virt.libvirt.driver [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance spawned successfully.
Oct  2 08:20:51 np0005466030 nova_compute[230518]: 2025-10-02 12:20:51.656 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.133 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.139 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.139 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.140 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.140 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.141 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.141 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.145 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.173 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.314 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.314 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407651.642726, 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.314 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] VM Started (Lifecycle Event)#033[00m
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.881 2 DEBUG nova.compute.manager [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.885 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.889 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:20:52 np0005466030 nova_compute[230518]: 2025-10-02 12:20:52.995 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:20:53 np0005466030 nova_compute[230518]: 2025-10-02 12:20:53.018 2 DEBUG oslo_concurrency.lockutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:53 np0005466030 nova_compute[230518]: 2025-10-02 12:20:53.018 2 DEBUG oslo_concurrency.lockutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:53 np0005466030 nova_compute[230518]: 2025-10-02 12:20:53.019 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:20:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:20:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:53.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:20:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:20:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:53.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:20:53 np0005466030 nova_compute[230518]: 2025-10-02 12:20:53.810 2 DEBUG oslo_concurrency.lockutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:55 np0005466030 nova_compute[230518]: 2025-10-02 12:20:55.411 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "94bf2d68-bf2c-4720-8ede-688ca2b48ce6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:55 np0005466030 nova_compute[230518]: 2025-10-02 12:20:55.412 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "94bf2d68-bf2c-4720-8ede-688ca2b48ce6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:55 np0005466030 nova_compute[230518]: 2025-10-02 12:20:55.413 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "94bf2d68-bf2c-4720-8ede-688ca2b48ce6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:20:55 np0005466030 nova_compute[230518]: 2025-10-02 12:20:55.413 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "94bf2d68-bf2c-4720-8ede-688ca2b48ce6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:20:55 np0005466030 nova_compute[230518]: 2025-10-02 12:20:55.413 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "94bf2d68-bf2c-4720-8ede-688ca2b48ce6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:20:55 np0005466030 nova_compute[230518]: 2025-10-02 12:20:55.414 2 INFO nova.compute.manager [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Terminating instance#033[00m
Oct  2 08:20:55 np0005466030 nova_compute[230518]: 2025-10-02 12:20:55.415 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "refresh_cache-94bf2d68-bf2c-4720-8ede-688ca2b48ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:20:55 np0005466030 nova_compute[230518]: 2025-10-02 12:20:55.415 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquired lock "refresh_cache-94bf2d68-bf2c-4720-8ede-688ca2b48ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:20:55 np0005466030 nova_compute[230518]: 2025-10-02 12:20:55.415 2 DEBUG nova.network.neutron [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:20:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:55.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:20:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:55.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:20:55 np0005466030 nova_compute[230518]: 2025-10-02 12:20:55.898 2 DEBUG nova.network.neutron [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:20:56 np0005466030 nova_compute[230518]: 2025-10-02 12:20:56.266 2 DEBUG nova.network.neutron [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:20:56 np0005466030 nova_compute[230518]: 2025-10-02 12:20:56.418 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Releasing lock "refresh_cache-94bf2d68-bf2c-4720-8ede-688ca2b48ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:20:56 np0005466030 nova_compute[230518]: 2025-10-02 12:20:56.419 2 DEBUG nova.compute.manager [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:20:56 np0005466030 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000027.scope: Deactivated successfully.
Oct  2 08:20:56 np0005466030 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000027.scope: Consumed 5.546s CPU time.
Oct  2 08:20:56 np0005466030 systemd-machined[188247]: Machine qemu-24-instance-00000027 terminated.
Oct  2 08:20:56 np0005466030 nova_compute[230518]: 2025-10-02 12:20:56.637 2 INFO nova.virt.libvirt.driver [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance destroyed successfully.#033[00m
Oct  2 08:20:56 np0005466030 nova_compute[230518]: 2025-10-02 12:20:56.638 2 DEBUG nova.objects.instance [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'resources' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:20:57 np0005466030 nova_compute[230518]: 2025-10-02 12:20:57.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:20:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:57.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:20:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:57.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:57 np0005466030 nova_compute[230518]: 2025-10-02 12:20:57.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:20:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:20:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:59.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:20:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:20:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:59.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:20:59 np0005466030 podman[249262]: 2025-10-02 12:20:59.809709462 +0000 UTC m=+0.065664219 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:20:59 np0005466030 nova_compute[230518]: 2025-10-02 12:20:59.904 2 INFO nova.virt.libvirt.driver [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deleting instance files /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_del#033[00m
Oct  2 08:20:59 np0005466030 nova_compute[230518]: 2025-10-02 12:20:59.905 2 INFO nova.virt.libvirt.driver [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deletion of /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_del complete#033[00m
Oct  2 08:21:00 np0005466030 nova_compute[230518]: 2025-10-02 12:21:00.531 2 INFO nova.compute.manager [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Took 4.11 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:21:00 np0005466030 nova_compute[230518]: 2025-10-02 12:21:00.532 2 DEBUG oslo.service.loopingcall [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:21:00 np0005466030 nova_compute[230518]: 2025-10-02 12:21:00.532 2 DEBUG nova.compute.manager [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:21:00 np0005466030 nova_compute[230518]: 2025-10-02 12:21:00.533 2 DEBUG nova.network.neutron [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:21:00 np0005466030 nova_compute[230518]: 2025-10-02 12:21:00.639 2 DEBUG nova.network.neutron [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:21:00 np0005466030 podman[249281]: 2025-10-02 12:21:00.845190019 +0000 UTC m=+0.092360471 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  2 08:21:00 np0005466030 nova_compute[230518]: 2025-10-02 12:21:00.962 2 DEBUG nova.network.neutron [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:21:01 np0005466030 nova_compute[230518]: 2025-10-02 12:21:01.023 2 INFO nova.compute.manager [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Took 0.49 seconds to deallocate network for instance.#033[00m
Oct  2 08:21:01 np0005466030 nova_compute[230518]: 2025-10-02 12:21:01.161 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:01 np0005466030 nova_compute[230518]: 2025-10-02 12:21:01.162 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:01 np0005466030 nova_compute[230518]: 2025-10-02 12:21:01.220 2 DEBUG oslo_concurrency.processutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:01.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:01.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:21:01 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2306558928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:21:01 np0005466030 nova_compute[230518]: 2025-10-02 12:21:01.650 2 DEBUG oslo_concurrency.processutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:01 np0005466030 nova_compute[230518]: 2025-10-02 12:21:01.662 2 DEBUG nova.compute.provider_tree [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:21:01 np0005466030 nova_compute[230518]: 2025-10-02 12:21:01.697 2 DEBUG nova.scheduler.client.report [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:21:01 np0005466030 nova_compute[230518]: 2025-10-02 12:21:01.742 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:02 np0005466030 nova_compute[230518]: 2025-10-02 12:21:02.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:02 np0005466030 nova_compute[230518]: 2025-10-02 12:21:02.328 2 INFO nova.scheduler.client.report [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Deleted allocations for instance 94bf2d68-bf2c-4720-8ede-688ca2b48ce6#033[00m
Oct  2 08:21:02 np0005466030 nova_compute[230518]: 2025-10-02 12:21:02.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:02 np0005466030 nova_compute[230518]: 2025-10-02 12:21:02.625 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "94bf2d68-bf2c-4720-8ede-688ca2b48ce6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:21:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:03.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:21:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:21:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:03.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:21:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:21:04.323 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:21:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:21:04.324 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:21:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:21:04.325 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:21:04 np0005466030 nova_compute[230518]: 2025-10-02 12:21:04.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e180 e180: 3 total, 3 up, 3 in
Oct  2 08:21:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:05.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:05.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:07 np0005466030 nova_compute[230518]: 2025-10-02 12:21:07.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:07.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:07.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:07 np0005466030 nova_compute[230518]: 2025-10-02 12:21:07.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:08 np0005466030 podman[249330]: 2025-10-02 12:21:08.798068444 +0000 UTC m=+0.049284384 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:21:08 np0005466030 podman[249329]: 2025-10-02 12:21:08.827450329 +0000 UTC m=+0.081945023 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:21:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:09.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:09.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e181 e181: 3 total, 3 up, 3 in
Oct  2 08:21:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:21:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:11.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:21:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:11.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:11 np0005466030 nova_compute[230518]: 2025-10-02 12:21:11.636 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407656.635068, 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:21:11 np0005466030 nova_compute[230518]: 2025-10-02 12:21:11.636 2 INFO nova.compute.manager [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:21:12 np0005466030 nova_compute[230518]: 2025-10-02 12:21:12.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:12 np0005466030 nova_compute[230518]: 2025-10-02 12:21:12.280 2 DEBUG nova.compute.manager [None req-ebc347f8-77a5-4453-9eed-a072155c3c5c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:21:12 np0005466030 nova_compute[230518]: 2025-10-02 12:21:12.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e182 e182: 3 total, 3 up, 3 in
Oct  2 08:21:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:13.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:13.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e183 e183: 3 total, 3 up, 3 in
Oct  2 08:21:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e184 e184: 3 total, 3 up, 3 in
Oct  2 08:21:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:15.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:15.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:17 np0005466030 nova_compute[230518]: 2025-10-02 12:21:17.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:17.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:21:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:17.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:21:17 np0005466030 nova_compute[230518]: 2025-10-02 12:21:17.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:19.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:19.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e185 e185: 3 total, 3 up, 3 in
Oct  2 08:21:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:21.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:21:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:21.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:21:21 np0005466030 ovn_controller[129257]: 2025-10-02T12:21:21Z|00188|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Oct  2 08:21:22 np0005466030 nova_compute[230518]: 2025-10-02 12:21:22.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:22 np0005466030 nova_compute[230518]: 2025-10-02 12:21:22.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:23.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:23.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:25.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:25.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:21:25.919 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:21:25.920 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:21:25.920 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:21:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:21:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:21:27 np0005466030 nova_compute[230518]: 2025-10-02 12:21:27.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e186 e186: 3 total, 3 up, 3 in
Oct  2 08:21:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:27.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:27.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:27 np0005466030 nova_compute[230518]: 2025-10-02 12:21:27.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e187 e187: 3 total, 3 up, 3 in
Oct  2 08:21:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:21:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:29.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:21:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:29.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:30 np0005466030 podman[249503]: 2025-10-02 12:21:30.799420119 +0000 UTC m=+0.055756918 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct  2 08:21:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:21:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:31.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:21:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:21:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:31.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:21:31 np0005466030 podman[249520]: 2025-10-02 12:21:31.830238907 +0000 UTC m=+0.074070145 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 08:21:32 np0005466030 nova_compute[230518]: 2025-10-02 12:21:32.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:32 np0005466030 nova_compute[230518]: 2025-10-02 12:21:32.421 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "3cc914dc-40b0-4808-aff8-bd8e0c6789b1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:32 np0005466030 nova_compute[230518]: 2025-10-02 12:21:32.422 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "3cc914dc-40b0-4808-aff8-bd8e0c6789b1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:32 np0005466030 nova_compute[230518]: 2025-10-02 12:21:32.522 2 DEBUG nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:21:32 np0005466030 nova_compute[230518]: 2025-10-02 12:21:32.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:32 np0005466030 nova_compute[230518]: 2025-10-02 12:21:32.710 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:32 np0005466030 nova_compute[230518]: 2025-10-02 12:21:32.711 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:32 np0005466030 nova_compute[230518]: 2025-10-02 12:21:32.721 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:21:32 np0005466030 nova_compute[230518]: 2025-10-02 12:21:32.721 2 INFO nova.compute.claims [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:21:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:21:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:33.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:21:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:21:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:33.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:21:33 np0005466030 nova_compute[230518]: 2025-10-02 12:21:33.552 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:21:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:21:33 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1810387230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.015 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.021 2 DEBUG nova.compute.provider_tree [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.096 2 DEBUG nova.scheduler.client.report [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.178 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.179 2 DEBUG nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.337 2 DEBUG nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.338 2 DEBUG nova.network.neutron [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.424 2 INFO nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.444 2 DEBUG nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:21:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e188 e188: 3 total, 3 up, 3 in
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.609 2 DEBUG nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.611 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.612 2 INFO nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Creating image(s)
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.647 2 DEBUG nova.storage.rbd_utils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.684 2 DEBUG nova.storage.rbd_utils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.711 2 DEBUG nova.storage.rbd_utils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.714 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.780 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.782 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.783 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.784 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.815 2 DEBUG nova.storage.rbd_utils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:21:34 np0005466030 nova_compute[230518]: 2025-10-02 12:21:34.818 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.219 2 DEBUG nova.network.neutron [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.219 2 DEBUG nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.316 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.386 2 DEBUG nova.storage.rbd_utils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] resizing rbd image 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:21:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:35.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:35.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.496 2 DEBUG nova.objects.instance [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lazy-loading 'migration_context' on Instance uuid 3cc914dc-40b0-4808-aff8-bd8e0c6789b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.522 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.523 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Ensure instance console log exists: /var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.523 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.523 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.524 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.525 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.529 2 WARNING nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.533 2 DEBUG nova.virt.libvirt.host [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.534 2 DEBUG nova.virt.libvirt.host [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.538 2 DEBUG nova.virt.libvirt.host [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.538 2 DEBUG nova.virt.libvirt.host [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.540 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.540 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.541 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.541 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.541 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.541 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.542 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.542 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.542 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.543 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.543 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.543 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.547 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:21:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e189 e189: 3 total, 3 up, 3 in
Oct  2 08:21:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:21:35 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2673187842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:21:35 np0005466030 nova_compute[230518]: 2025-10-02 12:21:35.991 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:21:36 np0005466030 nova_compute[230518]: 2025-10-02 12:21:36.017 2 DEBUG nova.storage.rbd_utils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:21:36 np0005466030 nova_compute[230518]: 2025-10-02 12:21:36.021 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:21:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:21:36 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1982999056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:21:36 np0005466030 nova_compute[230518]: 2025-10-02 12:21:36.462 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:21:36 np0005466030 nova_compute[230518]: 2025-10-02 12:21:36.465 2 DEBUG nova.objects.instance [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lazy-loading 'pci_devices' on Instance uuid 3cc914dc-40b0-4808-aff8-bd8e0c6789b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:21:36 np0005466030 nova_compute[230518]: 2025-10-02 12:21:36.670 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:21:36 np0005466030 nova_compute[230518]:  <uuid>3cc914dc-40b0-4808-aff8-bd8e0c6789b1</uuid>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:  <name>instance-0000002d</name>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-463301587</nova:name>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:21:35</nova:creationTime>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:21:36 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:        <nova:user uuid="6ce6b90597304cd29e06b1f1e62246eb">tempest-ServersAdminNegativeTestJSON-1444821380-project-member</nova:user>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:        <nova:project uuid="1ff6686454554253817cdb343c2f7e5e">tempest-ServersAdminNegativeTestJSON-1444821380</nova:project>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <nova:ports/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <entry name="serial">3cc914dc-40b0-4808-aff8-bd8e0c6789b1</entry>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <entry name="uuid">3cc914dc-40b0-4808-aff8-bd8e0c6789b1</entry>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk">
Oct  2 08:21:36 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:21:36 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk.config">
Oct  2 08:21:36 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:21:36 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1/console.log" append="off"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:21:36 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:21:36 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:21:36 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:21:36 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 08:21:36 np0005466030 nova_compute[230518]: 2025-10-02 12:21:36.814 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:21:36 np0005466030 nova_compute[230518]: 2025-10-02 12:21:36.815 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:21:36 np0005466030 nova_compute[230518]: 2025-10-02 12:21:36.815 2 INFO nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Using config drive
Oct  2 08:21:36 np0005466030 nova_compute[230518]: 2025-10-02 12:21:36.837 2 DEBUG nova.storage.rbd_utils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:21:37 np0005466030 nova_compute[230518]: 2025-10-02 12:21:37.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:21:37 np0005466030 nova_compute[230518]: 2025-10-02 12:21:37.456 2 INFO nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Creating config drive at /var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1/disk.config
Oct  2 08:21:37 np0005466030 nova_compute[230518]: 2025-10-02 12:21:37.468 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9_2abr9u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:21:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:37.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:37.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:37 np0005466030 nova_compute[230518]: 2025-10-02 12:21:37.606 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9_2abr9u" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:21:37 np0005466030 nova_compute[230518]: 2025-10-02 12:21:37.633 2 DEBUG nova.storage.rbd_utils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:21:37 np0005466030 nova_compute[230518]: 2025-10-02 12:21:37.636 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1/disk.config 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:21:37 np0005466030 nova_compute[230518]: 2025-10-02 12:21:37.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:21:37 np0005466030 nova_compute[230518]: 2025-10-02 12:21:37.860 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1/disk.config 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:21:37 np0005466030 nova_compute[230518]: 2025-10-02 12:21:37.861 2 INFO nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Deleting local config drive /var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1/disk.config because it was imported into RBD.
Oct  2 08:21:37 np0005466030 systemd-machined[188247]: New machine qemu-25-instance-0000002d.
Oct  2 08:21:37 np0005466030 systemd[1]: Started Virtual Machine qemu-25-instance-0000002d.
Oct  2 08:21:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:38 np0005466030 nova_compute[230518]: 2025-10-02 12:21:38.715 2 DEBUG nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:21:38 np0005466030 nova_compute[230518]: 2025-10-02 12:21:38.716 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:21:38 np0005466030 nova_compute[230518]: 2025-10-02 12:21:38.716 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407698.714336, 3cc914dc-40b0-4808-aff8-bd8e0c6789b1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:21:38 np0005466030 nova_compute[230518]: 2025-10-02 12:21:38.717 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] VM Resumed (Lifecycle Event)
Oct  2 08:21:38 np0005466030 nova_compute[230518]: 2025-10-02 12:21:38.722 2 INFO nova.virt.libvirt.driver [-] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Instance spawned successfully.
Oct  2 08:21:38 np0005466030 nova_compute[230518]: 2025-10-02 12:21:38.723 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:21:38 np0005466030 nova_compute[230518]: 2025-10-02 12:21:38.807 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:21:38 np0005466030 nova_compute[230518]: 2025-10-02 12:21:38.811 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:21:38 np0005466030 nova_compute[230518]: 2025-10-02 12:21:38.814 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:21:38 np0005466030 nova_compute[230518]: 2025-10-02 12:21:38.814 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:21:38 np0005466030 nova_compute[230518]: 2025-10-02 12:21:38.815 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:21:38 np0005466030 nova_compute[230518]: 2025-10-02 12:21:38.815 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:21:38 np0005466030 nova_compute[230518]: 2025-10-02 12:21:38.815 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:21:38 np0005466030 nova_compute[230518]: 2025-10-02 12:21:38.816 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:21:38 np0005466030 nova_compute[230518]: 2025-10-02 12:21:38.852 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:21:38 np0005466030 nova_compute[230518]: 2025-10-02 12:21:38.852 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407698.7155795, 3cc914dc-40b0-4808-aff8-bd8e0c6789b1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:21:38 np0005466030 nova_compute[230518]: 2025-10-02 12:21:38.852 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] VM Started (Lifecycle Event)
Oct  2 08:21:38 np0005466030 nova_compute[230518]: 2025-10-02 12:21:38.962 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:21:38 np0005466030 nova_compute[230518]: 2025-10-02 12:21:38.964 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:21:39 np0005466030 nova_compute[230518]: 2025-10-02 12:21:39.025 2 INFO nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Took 4.41 seconds to spawn the instance on the hypervisor.
Oct  2 08:21:39 np0005466030 nova_compute[230518]: 2025-10-02 12:21:39.025 2 DEBUG nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:21:39 np0005466030 nova_compute[230518]: 2025-10-02 12:21:39.034 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:21:39 np0005466030 nova_compute[230518]: 2025-10-02 12:21:39.168 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:21:39 np0005466030 nova_compute[230518]: 2025-10-02 12:21:39.170 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:21:39 np0005466030 nova_compute[230518]: 2025-10-02 12:21:39.247 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:21:39 np0005466030 nova_compute[230518]: 2025-10-02 12:21:39.247 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:21:39 np0005466030 nova_compute[230518]: 2025-10-02 12:21:39.248 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:21:39 np0005466030 nova_compute[230518]: 2025-10-02 12:21:39.298 2 INFO nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Took 6.62 seconds to build instance.
Oct  2 08:21:39 np0005466030 nova_compute[230518]: 2025-10-02 12:21:39.429 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:21:39 np0005466030 nova_compute[230518]: 2025-10-02 12:21:39.430 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:21:39 np0005466030 nova_compute[230518]: 2025-10-02 12:21:39.430 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:21:39 np0005466030 nova_compute[230518]: 2025-10-02 12:21:39.431 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  2 08:21:39 np0005466030 nova_compute[230518]: 2025-10-02 12:21:39.431 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:21:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:39.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:39.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:39 np0005466030 nova_compute[230518]: 2025-10-02 12:21:39.720 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "3cc914dc-40b0-4808-aff8-bd8e0c6789b1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.298s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:21:39 np0005466030 podman[249933]: 2025-10-02 12:21:39.833190913 +0000 UTC m=+0.068475179 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:21:39 np0005466030 podman[249934]: 2025-10-02 12:21:39.837263171 +0000 UTC m=+0.072104523 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 08:21:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:21:39 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/474486150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:21:39 np0005466030 nova_compute[230518]: 2025-10-02 12:21:39.964 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:40 np0005466030 nova_compute[230518]: 2025-10-02 12:21:40.179 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:21:40 np0005466030 nova_compute[230518]: 2025-10-02 12:21:40.179 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:21:40 np0005466030 nova_compute[230518]: 2025-10-02 12:21:40.317 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:21:40 np0005466030 nova_compute[230518]: 2025-10-02 12:21:40.319 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4658MB free_disk=20.902359008789062GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:21:40 np0005466030 nova_compute[230518]: 2025-10-02 12:21:40.319 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:40 np0005466030 nova_compute[230518]: 2025-10-02 12:21:40.319 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:40 np0005466030 nova_compute[230518]: 2025-10-02 12:21:40.519 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3cc914dc-40b0-4808-aff8-bd8e0c6789b1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:21:40 np0005466030 nova_compute[230518]: 2025-10-02 12:21:40.520 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:21:40 np0005466030 nova_compute[230518]: 2025-10-02 12:21:40.520 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:21:40 np0005466030 nova_compute[230518]: 2025-10-02 12:21:40.563 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:40 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:21:40 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:21:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:21:41 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3520970729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:21:41 np0005466030 nova_compute[230518]: 2025-10-02 12:21:41.029 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:41 np0005466030 nova_compute[230518]: 2025-10-02 12:21:41.034 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:21:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:41.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:41.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:42 np0005466030 nova_compute[230518]: 2025-10-02 12:21:42.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:42 np0005466030 nova_compute[230518]: 2025-10-02 12:21:42.549 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:21:42 np0005466030 nova_compute[230518]: 2025-10-02 12:21:42.633 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:21:42 np0005466030 nova_compute[230518]: 2025-10-02 12:21:42.633 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:42 np0005466030 nova_compute[230518]: 2025-10-02 12:21:42.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:43 np0005466030 nova_compute[230518]: 2025-10-02 12:21:43.438 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:43 np0005466030 nova_compute[230518]: 2025-10-02 12:21:43.439 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:43 np0005466030 nova_compute[230518]: 2025-10-02 12:21:43.439 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:21:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:43.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:43.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:44 np0005466030 nova_compute[230518]: 2025-10-02 12:21:44.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:45 np0005466030 nova_compute[230518]: 2025-10-02 12:21:45.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:45 np0005466030 nova_compute[230518]: 2025-10-02 12:21:45.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:21:45 np0005466030 nova_compute[230518]: 2025-10-02 12:21:45.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:21:45 np0005466030 nova_compute[230518]: 2025-10-02 12:21:45.230 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-3cc914dc-40b0-4808-aff8-bd8e0c6789b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:21:45 np0005466030 nova_compute[230518]: 2025-10-02 12:21:45.231 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-3cc914dc-40b0-4808-aff8-bd8e0c6789b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:21:45 np0005466030 nova_compute[230518]: 2025-10-02 12:21:45.231 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:21:45 np0005466030 nova_compute[230518]: 2025-10-02 12:21:45.231 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3cc914dc-40b0-4808-aff8-bd8e0c6789b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:21:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:45.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:45.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:45 np0005466030 nova_compute[230518]: 2025-10-02 12:21:45.721 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:21:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e190 e190: 3 total, 3 up, 3 in
Oct  2 08:21:46 np0005466030 nova_compute[230518]: 2025-10-02 12:21:46.431 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:21:46 np0005466030 nova_compute[230518]: 2025-10-02 12:21:46.452 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-3cc914dc-40b0-4808-aff8-bd8e0c6789b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:21:46 np0005466030 nova_compute[230518]: 2025-10-02 12:21:46.453 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:21:46 np0005466030 nova_compute[230518]: 2025-10-02 12:21:46.454 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:21:47 np0005466030 nova_compute[230518]: 2025-10-02 12:21:47.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:47 np0005466030 nova_compute[230518]: 2025-10-02 12:21:47.214 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Acquiring lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:47 np0005466030 nova_compute[230518]: 2025-10-02 12:21:47.215 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:47 np0005466030 nova_compute[230518]: 2025-10-02 12:21:47.215 2 INFO nova.compute.manager [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Unshelving#033[00m
Oct  2 08:21:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:47.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:21:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:47.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:21:47 np0005466030 nova_compute[230518]: 2025-10-02 12:21:47.504 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:47 np0005466030 nova_compute[230518]: 2025-10-02 12:21:47.505 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:47 np0005466030 nova_compute[230518]: 2025-10-02 12:21:47.527 2 DEBUG nova.objects.instance [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'pci_requests' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:21:47 np0005466030 nova_compute[230518]: 2025-10-02 12:21:47.679 2 DEBUG nova.objects.instance [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'numa_topology' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:21:47 np0005466030 nova_compute[230518]: 2025-10-02 12:21:47.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:47 np0005466030 nova_compute[230518]: 2025-10-02 12:21:47.796 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:21:47 np0005466030 nova_compute[230518]: 2025-10-02 12:21:47.797 2 INFO nova.compute.claims [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:21:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:48 np0005466030 nova_compute[230518]: 2025-10-02 12:21:48.209 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:21:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:21:48 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/187165604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:21:48 np0005466030 nova_compute[230518]: 2025-10-02 12:21:48.618 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:21:48 np0005466030 nova_compute[230518]: 2025-10-02 12:21:48.626 2 DEBUG nova.compute.provider_tree [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:21:48 np0005466030 nova_compute[230518]: 2025-10-02 12:21:48.645 2 DEBUG nova.scheduler.client.report [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:21:48 np0005466030 nova_compute[230518]: 2025-10-02 12:21:48.719 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:48 np0005466030 nova_compute[230518]: 2025-10-02 12:21:48.977 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Acquiring lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:21:48 np0005466030 nova_compute[230518]: 2025-10-02 12:21:48.978 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Acquired lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:21:48 np0005466030 nova_compute[230518]: 2025-10-02 12:21:48.979 2 DEBUG nova.network.neutron [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:21:49 np0005466030 nova_compute[230518]: 2025-10-02 12:21:49.282 2 DEBUG nova.network.neutron [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:21:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:49.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:49.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:49 np0005466030 nova_compute[230518]: 2025-10-02 12:21:49.649 2 DEBUG nova.network.neutron [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:21:49 np0005466030 nova_compute[230518]: 2025-10-02 12:21:49.886 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Releasing lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:21:49 np0005466030 nova_compute[230518]: 2025-10-02 12:21:49.888 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:21:49 np0005466030 nova_compute[230518]: 2025-10-02 12:21:49.889 2 INFO nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Creating image(s)#033[00m
Oct  2 08:21:49 np0005466030 nova_compute[230518]: 2025-10-02 12:21:49.932 2 DEBUG nova.storage.rbd_utils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:21:49 np0005466030 nova_compute[230518]: 2025-10-02 12:21:49.937 2 DEBUG nova.objects.instance [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:21:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:21:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:51.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:21:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:51.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:52 np0005466030 nova_compute[230518]: 2025-10-02 12:21:52.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:52 np0005466030 nova_compute[230518]: 2025-10-02 12:21:52.535 2 DEBUG nova.storage.rbd_utils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:21:52 np0005466030 nova_compute[230518]: 2025-10-02 12:21:52.566 2 DEBUG nova.storage.rbd_utils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:21:52 np0005466030 nova_compute[230518]: 2025-10-02 12:21:52.569 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Acquiring lock "4c7d010a3d79a57ee44b8c9ce07268c80a0ae9f5" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:21:52 np0005466030 nova_compute[230518]: 2025-10-02 12:21:52.570 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "4c7d010a3d79a57ee44b8c9ce07268c80a0ae9f5" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:21:52 np0005466030 nova_compute[230518]: 2025-10-02 12:21:52.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:52 np0005466030 nova_compute[230518]: 2025-10-02 12:21:52.816 2 DEBUG nova.virt.libvirt.imagebackend [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/45a729a3-bfb9-4ba4-a275-e4201ada93ed/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/45a729a3-bfb9-4ba4-a275-e4201ada93ed/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 08:21:52 np0005466030 nova_compute[230518]: 2025-10-02 12:21:52.868 2 DEBUG nova.virt.libvirt.imagebackend [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Selected location: {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/45a729a3-bfb9-4ba4-a275-e4201ada93ed/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  2 08:21:52 np0005466030 nova_compute[230518]: 2025-10-02 12:21:52.869 2 DEBUG nova.storage.rbd_utils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] cloning images/45a729a3-bfb9-4ba4-a275-e4201ada93ed@snap to None/7beacac0-65ce-4e15-a73c-9b50a50f968e_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:21:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:53 np0005466030 nova_compute[230518]: 2025-10-02 12:21:53.345 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "4c7d010a3d79a57ee44b8c9ce07268c80a0ae9f5" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:21:53 np0005466030 nova_compute[230518]: 2025-10-02 12:21:53.485 2 DEBUG nova.objects.instance [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'migration_context' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:21:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:53.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:53.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:53 np0005466030 nova_compute[230518]: 2025-10-02 12:21:53.559 2 DEBUG nova.storage.rbd_utils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] flattening vms/7beacac0-65ce-4e15-a73c-9b50a50f968e_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:21:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:55.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:55.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:57 np0005466030 nova_compute[230518]: 2025-10-02 12:21:57.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:21:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:57.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:21:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:57.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:57 np0005466030 nova_compute[230518]: 2025-10-02 12:21:57.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:21:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:21:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:21:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:59.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:21:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:21:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:21:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:59.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.171 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Image rbd:vms/7beacac0-65ce-4e15-a73c-9b50a50f968e_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.172 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.173 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Ensure instance console log exists: /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.173 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.174 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.175 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.177 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:21:10Z,direct_url=<?>,disk_format='raw',id=45a729a3-bfb9-4ba4-a275-e4201ada93ed,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1498353913-shelved',owner='aaf2805394aa4c4cb7977f6433aabf56',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:21:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.183 2 WARNING nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.189 2 DEBUG nova.virt.libvirt.host [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.190 2 DEBUG nova.virt.libvirt.host [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.193 2 DEBUG nova.virt.libvirt.host [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.194 2 DEBUG nova.virt.libvirt.host [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.196 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.196 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:21:10Z,direct_url=<?>,disk_format='raw',id=45a729a3-bfb9-4ba4-a275-e4201ada93ed,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1498353913-shelved',owner='aaf2805394aa4c4cb7977f6433aabf56',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:21:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.197 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.197 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.198 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.198 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.199 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.199 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.200 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.201 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.201 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.202 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.202 2 DEBUG nova.objects.instance [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.224 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:22:00 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4012375347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.670 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.702 2 DEBUG nova.storage.rbd_utils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:00 np0005466030 nova_compute[230518]: 2025-10-02 12:22:00.706 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:22:01 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3786116264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:22:01 np0005466030 podman[250344]: 2025-10-02 12:22:01.369185026 +0000 UTC m=+0.069651026 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:22:01 np0005466030 nova_compute[230518]: 2025-10-02 12:22:01.439 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.733s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:01 np0005466030 nova_compute[230518]: 2025-10-02 12:22:01.441 2 DEBUG nova.objects.instance [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'pci_devices' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:22:01 np0005466030 nova_compute[230518]: 2025-10-02 12:22:01.462 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:22:01 np0005466030 nova_compute[230518]:  <uuid>7beacac0-65ce-4e15-a73c-9b50a50f968e</uuid>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:  <name>instance-0000002b</name>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <nova:name>tempest-UnshelveToHostMultiNodesTest-server-1498353913</nova:name>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:22:00</nova:creationTime>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:22:01 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:        <nova:user uuid="93167a5206ba42b28aa96a676d3edb6d">tempest-UnshelveToHostMultiNodesTest-2076784560-project-member</nova:user>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:        <nova:project uuid="aaf2805394aa4c4cb7977f6433aabf56">tempest-UnshelveToHostMultiNodesTest-2076784560</nova:project>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="45a729a3-bfb9-4ba4-a275-e4201ada93ed"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <nova:ports/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <entry name="serial">7beacac0-65ce-4e15-a73c-9b50a50f968e</entry>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <entry name="uuid">7beacac0-65ce-4e15-a73c-9b50a50f968e</entry>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/7beacac0-65ce-4e15-a73c-9b50a50f968e_disk">
Oct  2 08:22:01 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:22:01 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config">
Oct  2 08:22:01 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:22:01 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/console.log" append="off"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <input type="keyboard" bus="usb"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:22:01 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:22:01 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:22:01 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:22:01 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:22:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:01.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:22:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:01.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:22:01 np0005466030 nova_compute[230518]: 2025-10-02 12:22:01.564 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:22:01 np0005466030 nova_compute[230518]: 2025-10-02 12:22:01.565 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:22:01 np0005466030 nova_compute[230518]: 2025-10-02 12:22:01.565 2 INFO nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Using config drive#033[00m
Oct  2 08:22:01 np0005466030 nova_compute[230518]: 2025-10-02 12:22:01.595 2 DEBUG nova.storage.rbd_utils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:01 np0005466030 nova_compute[230518]: 2025-10-02 12:22:01.647 2 DEBUG nova.objects.instance [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:22:01 np0005466030 nova_compute[230518]: 2025-10-02 12:22:01.726 2 DEBUG nova.objects.instance [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'keypairs' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:22:02 np0005466030 nova_compute[230518]: 2025-10-02 12:22:02.047 2 INFO nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Creating config drive at /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config#033[00m
Oct  2 08:22:02 np0005466030 nova_compute[230518]: 2025-10-02 12:22:02.053 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp3c4kxln execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:02 np0005466030 nova_compute[230518]: 2025-10-02 12:22:02.187 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp3c4kxln" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:02 np0005466030 nova_compute[230518]: 2025-10-02 12:22:02.214 2 DEBUG nova.storage.rbd_utils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:02 np0005466030 nova_compute[230518]: 2025-10-02 12:22:02.218 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:02 np0005466030 nova_compute[230518]: 2025-10-02 12:22:02.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:02 np0005466030 nova_compute[230518]: 2025-10-02 12:22:02.602 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:02 np0005466030 nova_compute[230518]: 2025-10-02 12:22:02.603 2 INFO nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Deleting local config drive /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config because it was imported into RBD.#033[00m
Oct  2 08:22:02 np0005466030 systemd-machined[188247]: New machine qemu-26-instance-0000002b.
Oct  2 08:22:02 np0005466030 systemd[1]: Started Virtual Machine qemu-26-instance-0000002b.
Oct  2 08:22:02 np0005466030 nova_compute[230518]: 2025-10-02 12:22:02.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:02 np0005466030 podman[250430]: 2025-10-02 12:22:02.783692303 +0000 UTC m=+0.110930146 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:22:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:03.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:03.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:03 np0005466030 nova_compute[230518]: 2025-10-02 12:22:03.623 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407723.6224747, 7beacac0-65ce-4e15-a73c-9b50a50f968e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:22:03 np0005466030 nova_compute[230518]: 2025-10-02 12:22:03.624 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:22:03 np0005466030 nova_compute[230518]: 2025-10-02 12:22:03.627 2 DEBUG nova.compute.manager [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:22:03 np0005466030 nova_compute[230518]: 2025-10-02 12:22:03.627 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:22:03 np0005466030 nova_compute[230518]: 2025-10-02 12:22:03.630 2 INFO nova.virt.libvirt.driver [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance spawned successfully.#033[00m
Oct  2 08:22:03 np0005466030 nova_compute[230518]: 2025-10-02 12:22:03.701 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:03 np0005466030 nova_compute[230518]: 2025-10-02 12:22:03.704 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:22:03 np0005466030 nova_compute[230518]: 2025-10-02 12:22:03.727 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:22:03 np0005466030 nova_compute[230518]: 2025-10-02 12:22:03.728 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407723.623446, 7beacac0-65ce-4e15-a73c-9b50a50f968e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:22:03 np0005466030 nova_compute[230518]: 2025-10-02 12:22:03.728 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] VM Started (Lifecycle Event)#033[00m
Oct  2 08:22:03 np0005466030 nova_compute[230518]: 2025-10-02 12:22:03.791 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:03 np0005466030 nova_compute[230518]: 2025-10-02 12:22:03.795 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:22:03 np0005466030 nova_compute[230518]: 2025-10-02 12:22:03.822 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:22:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e191 e191: 3 total, 3 up, 3 in
Oct  2 08:22:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:05.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:05.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:06 np0005466030 nova_compute[230518]: 2025-10-02 12:22:06.706 2 DEBUG nova.compute.manager [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:06 np0005466030 nova_compute[230518]: 2025-10-02 12:22:06.925 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:06 np0005466030 nova_compute[230518]: 2025-10-02 12:22:06.925 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:06 np0005466030 nova_compute[230518]: 2025-10-02 12:22:06.937 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 19.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:07 np0005466030 nova_compute[230518]: 2025-10-02 12:22:07.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:07 np0005466030 nova_compute[230518]: 2025-10-02 12:22:07.321 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:22:07 np0005466030 nova_compute[230518]: 2025-10-02 12:22:07.498 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:07 np0005466030 nova_compute[230518]: 2025-10-02 12:22:07.499 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:07 np0005466030 nova_compute[230518]: 2025-10-02 12:22:07.507 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:22:07 np0005466030 nova_compute[230518]: 2025-10-02 12:22:07.507 2 INFO nova.compute.claims [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:22:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:22:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:07.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:22:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:22:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:07.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:22:07 np0005466030 nova_compute[230518]: 2025-10-02 12:22:07.708 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:07 np0005466030 nova_compute[230518]: 2025-10-02 12:22:07.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e192 e192: 3 total, 3 up, 3 in
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.044 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquiring lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.044 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.044 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquiring lock "7beacac0-65ce-4e15-a73c-9b50a50f968e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.045 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.045 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.046 2 INFO nova.compute.manager [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Terminating instance#033[00m
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.047 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquiring lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.047 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquired lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.047 2 DEBUG nova.network.neutron [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:22:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:22:08 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3645296222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.262 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.270 2 DEBUG nova.compute.provider_tree [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.287 2 DEBUG nova.scheduler.client.report [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.310 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.311 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.372 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.372 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.437 2 INFO nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.578 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.636 2 INFO nova.virt.block_device [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Booting with volume 0de00a00-9b68-498b-8bd0-88556bd22393 at /dev/vda
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.774 2 DEBUG os_brick.utils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.775 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.792 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.793 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[3a9cfa27-6e38-4d79-bb05-9d5a1f7dac68]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.795 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.802 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.803 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[3abb8055-d07b-49d8-b7cd-b2b43925f7e1]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.805 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.812 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.813 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[79752506-19e1-486a-988a-7e9fad5f365e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.814 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[7b50cb7e-9304-4e2f-8233-1209f83edb0d]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.815 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.847 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.850 2 DEBUG os_brick.initiator.connectors.lightos [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.851 2 DEBUG os_brick.initiator.connectors.lightos [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.851 2 DEBUG os_brick.initiator.connectors.lightos [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.851 2 DEBUG os_brick.utils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.852 2 DEBUG nova.virt.block_device [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating existing volume attachment record: a17f6c4e-3b3e-4e2a-bc65-6e16cb06b24f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct  2 08:22:08 np0005466030 nova_compute[230518]: 2025-10-02 12:22:08.994 2 DEBUG nova.network.neutron [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:22:09 np0005466030 nova_compute[230518]: 2025-10-02 12:22:09.448 2 DEBUG nova.network.neutron [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:22:09 np0005466030 nova_compute[230518]: 2025-10-02 12:22:09.476 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Releasing lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:22:09 np0005466030 nova_compute[230518]: 2025-10-02 12:22:09.477 2 DEBUG nova.compute.manager [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 08:22:09 np0005466030 nova_compute[230518]: 2025-10-02 12:22:09.511 2 DEBUG nova.policy [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b978e493dbdc419e864471708c90b0b4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:22:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:09.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:09.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:09 np0005466030 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct  2 08:22:09 np0005466030 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000002b.scope: Consumed 7.088s CPU time.
Oct  2 08:22:09 np0005466030 systemd-machined[188247]: Machine qemu-26-instance-0000002b terminated.
Oct  2 08:22:09 np0005466030 nova_compute[230518]: 2025-10-02 12:22:09.706 2 INFO nova.virt.libvirt.driver [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance destroyed successfully.
Oct  2 08:22:09 np0005466030 nova_compute[230518]: 2025-10-02 12:22:09.706 2 DEBUG nova.objects.instance [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lazy-loading 'resources' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.093 2 INFO nova.virt.block_device [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Booting with volume b93e6b37-6df0-4c49-81d5-526e5c68b542 at /dev/vdb
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.247 2 DEBUG os_brick.utils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.249 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.259 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.259 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[18f57f6f-9d5a-41a7-a838-fa7c85e970d2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.261 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.270 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.271 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e96118-ab7d-4fe5-a76c-e2a60f6a85a0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.272 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.280 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.280 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[27dcb7bb-48ef-4a79-bf76-d30c6a612b08]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.281 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a32cb5-38c4-4477-b60b-cded3170f55b]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.282 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.312 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.314 2 DEBUG os_brick.initiator.connectors.lightos [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.314 2 DEBUG os_brick.initiator.connectors.lightos [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.314 2 DEBUG os_brick.initiator.connectors.lightos [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.314 2 DEBUG os_brick.utils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.315 2 DEBUG nova.virt.block_device [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating existing volume attachment record: eac04b36-4077-49c3-86b1-942e2b6eeb26 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct  2 08:22:10 np0005466030 nova_compute[230518]: 2025-10-02 12:22:10.774 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully created port: 6a13b8d9-269d-4176-b4c7-693a5e26e74b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:22:10 np0005466030 podman[250565]: 2025-10-02 12:22:10.815189058 +0000 UTC m=+0.065026309 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid)
Oct  2 08:22:10 np0005466030 podman[250566]: 2025-10-02 12:22:10.821107345 +0000 UTC m=+0.070879575 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3)
Oct  2 08:22:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:22:10 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3078669458' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.284 2 INFO nova.virt.block_device [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Booting with volume f3584061-a34e-4c55-a201-ea9e5f60b3e5 at /dev/vdc
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.402 2 DEBUG os_brick.utils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.403 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.406 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully created port: b84676b0-d376-4ced-99fb-08e677046d6f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.415 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.416 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[f8789545-b0f4-4230-9837-add9bbe2caa3]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.417 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.424 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.424 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[53a222f6-c599-489f-93f3-46f3286d61ac]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.426 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.439 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.439 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[0d2f4b23-ee17-4689-ad40-a98e694fa618]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.441 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[a64a4f47-03ef-46ed-ba20-5978245e0a1d]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.441 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.467 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.471 2 DEBUG os_brick.initiator.connectors.lightos [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.471 2 DEBUG os_brick.initiator.connectors.lightos [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.471 2 DEBUG os_brick.initiator.connectors.lightos [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.471 2 DEBUG os_brick.utils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct  2 08:22:11 np0005466030 nova_compute[230518]: 2025-10-02 12:22:11.472 2 DEBUG nova.virt.block_device [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating existing volume attachment record: 54598788-e706-49d2-9e91-e968eea915b1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct  2 08:22:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:11.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:11.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e193 e193: 3 total, 3 up, 3 in
Oct  2 08:22:12 np0005466030 nova_compute[230518]: 2025-10-02 12:22:12.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:22:12 np0005466030 nova_compute[230518]: 2025-10-02 12:22:12.374 2 INFO nova.virt.libvirt.driver [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Deleting instance files /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e_del
Oct  2 08:22:12 np0005466030 nova_compute[230518]: 2025-10-02 12:22:12.375 2 INFO nova.virt.libvirt.driver [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Deletion of /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e_del complete
Oct  2 08:22:12 np0005466030 nova_compute[230518]: 2025-10-02 12:22:12.651 2 INFO nova.compute.manager [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Took 3.17 seconds to destroy the instance on the hypervisor.
Oct  2 08:22:12 np0005466030 nova_compute[230518]: 2025-10-02 12:22:12.652 2 DEBUG oslo.service.loopingcall [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:22:12 np0005466030 nova_compute[230518]: 2025-10-02 12:22:12.653 2 DEBUG nova.compute.manager [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:22:12 np0005466030 nova_compute[230518]: 2025-10-02 12:22:12.653 2 DEBUG nova.network.neutron [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:22:12 np0005466030 nova_compute[230518]: 2025-10-02 12:22:12.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:22:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e194 e194: 3 total, 3 up, 3 in
Oct  2 08:22:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:13.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:22:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:13.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:22:13 np0005466030 nova_compute[230518]: 2025-10-02 12:22:13.736 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully created port: c9731d13-4315-4bdc-9d24-a91ce1d8d427 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:22:13 np0005466030 nova_compute[230518]: 2025-10-02 12:22:13.992 2 DEBUG nova.network.neutron [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:22:14 np0005466030 nova_compute[230518]: 2025-10-02 12:22:14.101 2 DEBUG nova.network.neutron [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:22:14 np0005466030 nova_compute[230518]: 2025-10-02 12:22:14.204 2 INFO nova.compute.manager [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Took 1.55 seconds to deallocate network for instance.
Oct  2 08:22:14 np0005466030 nova_compute[230518]: 2025-10-02 12:22:14.329 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:22:14 np0005466030 nova_compute[230518]: 2025-10-02 12:22:14.330 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:22:14 np0005466030 nova_compute[230518]: 2025-10-02 12:22:14.351 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:22:14 np0005466030 nova_compute[230518]: 2025-10-02 12:22:14.352 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:22:14 np0005466030 nova_compute[230518]: 2025-10-02 12:22:14.352 2 INFO nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Creating image(s)
Oct  2 08:22:14 np0005466030 nova_compute[230518]: 2025-10-02 12:22:14.353 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct  2 08:22:14 np0005466030 nova_compute[230518]: 2025-10-02 12:22:14.353 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Ensure instance console log exists: /var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:22:14 np0005466030 nova_compute[230518]: 2025-10-02 12:22:14.353 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:22:14 np0005466030 nova_compute[230518]: 2025-10-02 12:22:14.353 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:22:14 np0005466030 nova_compute[230518]: 2025-10-02 12:22:14.354 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:22:14 np0005466030 nova_compute[230518]: 2025-10-02 12:22:14.413 2 DEBUG oslo_concurrency.processutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:22:14 np0005466030 nova_compute[230518]: 2025-10-02 12:22:14.647 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully created port: 2ef879b2-3519-40b6-8207-d24b0e1a39de _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:22:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:22:14 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1369805820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:22:14 np0005466030 nova_compute[230518]: 2025-10-02 12:22:14.850 2 DEBUG oslo_concurrency.processutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:22:14 np0005466030 nova_compute[230518]: 2025-10-02 12:22:14.859 2 DEBUG nova.compute.provider_tree [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:22:14 np0005466030 nova_compute[230518]: 2025-10-02 12:22:14.885 2 DEBUG nova.scheduler.client.report [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:22:14 np0005466030 nova_compute[230518]: 2025-10-02 12:22:14.924 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:22:14 np0005466030 nova_compute[230518]: 2025-10-02 12:22:14.961 2 INFO nova.scheduler.client.report [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Deleted allocations for instance 7beacac0-65ce-4e15-a73c-9b50a50f968e
Oct  2 08:22:15 np0005466030 nova_compute[230518]: 2025-10-02 12:22:15.088 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:22:15 np0005466030 nova_compute[230518]: 2025-10-02 12:22:15.299 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully created port: af15c204-50a0-4b32-a3a7-46c9b925ec87 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:22:15 np0005466030 nova_compute[230518]: 2025-10-02 12:22:15.390 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "3cc914dc-40b0-4808-aff8-bd8e0c6789b1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:22:15 np0005466030 nova_compute[230518]: 2025-10-02 12:22:15.391 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "3cc914dc-40b0-4808-aff8-bd8e0c6789b1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:22:15 np0005466030 nova_compute[230518]: 2025-10-02 12:22:15.391 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "3cc914dc-40b0-4808-aff8-bd8e0c6789b1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:22:15 np0005466030 nova_compute[230518]: 2025-10-02 12:22:15.391 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "3cc914dc-40b0-4808-aff8-bd8e0c6789b1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:22:15 np0005466030 nova_compute[230518]: 2025-10-02 12:22:15.391 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "3cc914dc-40b0-4808-aff8-bd8e0c6789b1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:22:15 np0005466030 nova_compute[230518]: 2025-10-02 12:22:15.392 2 INFO nova.compute.manager [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Terminating instance
Oct  2 08:22:15 np0005466030 nova_compute[230518]: 2025-10-02 12:22:15.393 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "refresh_cache-3cc914dc-40b0-4808-aff8-bd8e0c6789b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:22:15 np0005466030 nova_compute[230518]: 2025-10-02 12:22:15.393 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquired lock "refresh_cache-3cc914dc-40b0-4808-aff8-bd8e0c6789b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:22:15 np0005466030 nova_compute[230518]: 2025-10-02 12:22:15.393 2 DEBUG nova.network.neutron [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:22:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:15.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:15.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:15 np0005466030 nova_compute[230518]: 2025-10-02 12:22:15.602 2 DEBUG nova.network.neutron [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:22:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e195 e195: 3 total, 3 up, 3 in
Oct  2 08:22:16 np0005466030 nova_compute[230518]: 2025-10-02 12:22:16.619 2 DEBUG nova.network.neutron [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:22:16 np0005466030 nova_compute[230518]: 2025-10-02 12:22:16.639 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Releasing lock "refresh_cache-3cc914dc-40b0-4808-aff8-bd8e0c6789b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:22:16 np0005466030 nova_compute[230518]: 2025-10-02 12:22:16.641 2 DEBUG nova.compute.manager [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 08:22:16 np0005466030 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Oct  2 08:22:16 np0005466030 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d0000002d.scope: Consumed 13.814s CPU time.
Oct  2 08:22:16 np0005466030 systemd-machined[188247]: Machine qemu-25-instance-0000002d terminated.
Oct  2 08:22:16 np0005466030 nova_compute[230518]: 2025-10-02 12:22:16.864 2 INFO nova.virt.libvirt.driver [-] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Instance destroyed successfully.
Oct  2 08:22:16 np0005466030 nova_compute[230518]: 2025-10-02 12:22:16.865 2 DEBUG nova.objects.instance [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lazy-loading 'resources' on Instance uuid 3cc914dc-40b0-4808-aff8-bd8e0c6789b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:22:17 np0005466030 nova_compute[230518]: 2025-10-02 12:22:17.060 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully updated port: 6a13b8d9-269d-4176-b4c7-693a5e26e74b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 08:22:17 np0005466030 nova_compute[230518]: 2025-10-02 12:22:17.161 2 DEBUG nova.compute.manager [req-1d8accdd-cf0c-4e34-bdab-7fd49cf1a071 req-915da2e8-0c06-490c-a59f-750f2637a2bc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-changed-6a13b8d9-269d-4176-b4c7-693a5e26e74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:22:17 np0005466030 nova_compute[230518]: 2025-10-02 12:22:17.162 2 DEBUG nova.compute.manager [req-1d8accdd-cf0c-4e34-bdab-7fd49cf1a071 req-915da2e8-0c06-490c-a59f-750f2637a2bc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing instance network info cache due to event network-changed-6a13b8d9-269d-4176-b4c7-693a5e26e74b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:22:17 np0005466030 nova_compute[230518]: 2025-10-02 12:22:17.162 2 DEBUG oslo_concurrency.lockutils [req-1d8accdd-cf0c-4e34-bdab-7fd49cf1a071 req-915da2e8-0c06-490c-a59f-750f2637a2bc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:22:17 np0005466030 nova_compute[230518]: 2025-10-02 12:22:17.162 2 DEBUG oslo_concurrency.lockutils [req-1d8accdd-cf0c-4e34-bdab-7fd49cf1a071 req-915da2e8-0c06-490c-a59f-750f2637a2bc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:22:17 np0005466030 nova_compute[230518]: 2025-10-02 12:22:17.163 2 DEBUG nova.network.neutron [req-1d8accdd-cf0c-4e34-bdab-7fd49cf1a071 req-915da2e8-0c06-490c-a59f-750f2637a2bc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing network info cache for port 6a13b8d9-269d-4176-b4c7-693a5e26e74b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:22:17 np0005466030 nova_compute[230518]: 2025-10-02 12:22:17.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:22:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:17.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:22:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:17.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:22:17 np0005466030 nova_compute[230518]: 2025-10-02 12:22:17.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:22:18 np0005466030 nova_compute[230518]: 2025-10-02 12:22:18.030 2 DEBUG nova.network.neutron [req-1d8accdd-cf0c-4e34-bdab-7fd49cf1a071 req-915da2e8-0c06-490c-a59f-750f2637a2bc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:22:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:18 np0005466030 nova_compute[230518]: 2025-10-02 12:22:18.333 2 INFO nova.virt.libvirt.driver [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Deleting instance files /var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1_del
Oct  2 08:22:18 np0005466030 nova_compute[230518]: 2025-10-02 12:22:18.334 2 INFO nova.virt.libvirt.driver [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Deletion of /var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1_del complete
Oct  2 08:22:18 np0005466030 nova_compute[230518]: 2025-10-02 12:22:18.395 2 INFO nova.compute.manager [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Took 1.75 seconds to destroy the instance on the hypervisor.
Oct  2 08:22:18 np0005466030 nova_compute[230518]: 2025-10-02 12:22:18.396 2 DEBUG oslo.service.loopingcall [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:22:18 np0005466030 nova_compute[230518]: 2025-10-02 12:22:18.396 2 DEBUG nova.compute.manager [-] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:22:18 np0005466030 nova_compute[230518]: 2025-10-02 12:22:18.396 2 DEBUG nova.network.neutron [-] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:22:18 np0005466030 nova_compute[230518]: 2025-10-02 12:22:18.548 2 DEBUG nova.network.neutron [req-1d8accdd-cf0c-4e34-bdab-7fd49cf1a071 req-915da2e8-0c06-490c-a59f-750f2637a2bc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:22:18 np0005466030 nova_compute[230518]: 2025-10-02 12:22:18.567 2 DEBUG oslo_concurrency.lockutils [req-1d8accdd-cf0c-4e34-bdab-7fd49cf1a071 req-915da2e8-0c06-490c-a59f-750f2637a2bc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:22:18 np0005466030 nova_compute[230518]: 2025-10-02 12:22:18.610 2 DEBUG nova.network.neutron [-] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:22:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:18.614 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:22:18 np0005466030 nova_compute[230518]: 2025-10-02 12:22:18.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:22:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:18.616 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:22:18 np0005466030 nova_compute[230518]: 2025-10-02 12:22:18.646 2 DEBUG nova.network.neutron [-] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:22:18 np0005466030 nova_compute[230518]: 2025-10-02 12:22:18.662 2 INFO nova.compute.manager [-] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Took 0.27 seconds to deallocate network for instance.
Oct  2 08:22:18 np0005466030 nova_compute[230518]: 2025-10-02 12:22:18.712 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:22:18 np0005466030 nova_compute[230518]: 2025-10-02 12:22:18.713 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:22:18 np0005466030 nova_compute[230518]: 2025-10-02 12:22:18.785 2 DEBUG oslo_concurrency.processutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:22:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e196 e196: 3 total, 3 up, 3 in
Oct  2 08:22:18 np0005466030 nova_compute[230518]: 2025-10-02 12:22:18.963 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully updated port: 25721468-4447-4fb7-97f7-e805e64f0267 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:22:19 np0005466030 nova_compute[230518]: 2025-10-02 12:22:19.251 2 DEBUG nova.compute.manager [req-e8c79863-45e7-4e4b-81e6-0c9b12fdf57c req-792f9b16-e901-41b7-8771-25ba56ad6434 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-changed-25721468-4447-4fb7-97f7-e805e64f0267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:19 np0005466030 nova_compute[230518]: 2025-10-02 12:22:19.252 2 DEBUG nova.compute.manager [req-e8c79863-45e7-4e4b-81e6-0c9b12fdf57c req-792f9b16-e901-41b7-8771-25ba56ad6434 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing instance network info cache due to event network-changed-25721468-4447-4fb7-97f7-e805e64f0267. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:22:19 np0005466030 nova_compute[230518]: 2025-10-02 12:22:19.252 2 DEBUG oslo_concurrency.lockutils [req-e8c79863-45e7-4e4b-81e6-0c9b12fdf57c req-792f9b16-e901-41b7-8771-25ba56ad6434 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:22:19 np0005466030 nova_compute[230518]: 2025-10-02 12:22:19.253 2 DEBUG oslo_concurrency.lockutils [req-e8c79863-45e7-4e4b-81e6-0c9b12fdf57c req-792f9b16-e901-41b7-8771-25ba56ad6434 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:22:19 np0005466030 nova_compute[230518]: 2025-10-02 12:22:19.253 2 DEBUG nova.network.neutron [req-e8c79863-45e7-4e4b-81e6-0c9b12fdf57c req-792f9b16-e901-41b7-8771-25ba56ad6434 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing network info cache for port 25721468-4447-4fb7-97f7-e805e64f0267 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:22:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:22:19 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1670408849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:22:19 np0005466030 nova_compute[230518]: 2025-10-02 12:22:19.444 2 DEBUG oslo_concurrency.processutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:19 np0005466030 nova_compute[230518]: 2025-10-02 12:22:19.450 2 DEBUG nova.compute.provider_tree [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:22:19 np0005466030 nova_compute[230518]: 2025-10-02 12:22:19.457 2 DEBUG nova.network.neutron [req-e8c79863-45e7-4e4b-81e6-0c9b12fdf57c req-792f9b16-e901-41b7-8771-25ba56ad6434 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:22:19 np0005466030 nova_compute[230518]: 2025-10-02 12:22:19.466 2 DEBUG nova.scheduler.client.report [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:22:19 np0005466030 nova_compute[230518]: 2025-10-02 12:22:19.489 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:19 np0005466030 nova_compute[230518]: 2025-10-02 12:22:19.523 2 INFO nova.scheduler.client.report [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Deleted allocations for instance 3cc914dc-40b0-4808-aff8-bd8e0c6789b1#033[00m
Oct  2 08:22:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:19.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:22:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:19.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:22:19 np0005466030 nova_compute[230518]: 2025-10-02 12:22:19.594 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "3cc914dc-40b0-4808-aff8-bd8e0c6789b1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:19 np0005466030 nova_compute[230518]: 2025-10-02 12:22:19.850 2 DEBUG nova.network.neutron [req-e8c79863-45e7-4e4b-81e6-0c9b12fdf57c req-792f9b16-e901-41b7-8771-25ba56ad6434 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:22:19 np0005466030 nova_compute[230518]: 2025-10-02 12:22:19.867 2 DEBUG oslo_concurrency.lockutils [req-e8c79863-45e7-4e4b-81e6-0c9b12fdf57c req-792f9b16-e901-41b7-8771-25ba56ad6434 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:22:20 np0005466030 nova_compute[230518]: 2025-10-02 12:22:20.669 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully updated port: 8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:22:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e197 e197: 3 total, 3 up, 3 in
Oct  2 08:22:21 np0005466030 nova_compute[230518]: 2025-10-02 12:22:21.497 2 DEBUG nova.compute.manager [req-1bdebf66-b10d-4016-93e9-0bc36edfd0ec req-675cd533-75b1-4396-a398-51a514286f4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-changed-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:21 np0005466030 nova_compute[230518]: 2025-10-02 12:22:21.497 2 DEBUG nova.compute.manager [req-1bdebf66-b10d-4016-93e9-0bc36edfd0ec req-675cd533-75b1-4396-a398-51a514286f4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing instance network info cache due to event network-changed-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:22:21 np0005466030 nova_compute[230518]: 2025-10-02 12:22:21.498 2 DEBUG oslo_concurrency.lockutils [req-1bdebf66-b10d-4016-93e9-0bc36edfd0ec req-675cd533-75b1-4396-a398-51a514286f4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:22:21 np0005466030 nova_compute[230518]: 2025-10-02 12:22:21.498 2 DEBUG oslo_concurrency.lockutils [req-1bdebf66-b10d-4016-93e9-0bc36edfd0ec req-675cd533-75b1-4396-a398-51a514286f4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:22:21 np0005466030 nova_compute[230518]: 2025-10-02 12:22:21.499 2 DEBUG nova.network.neutron [req-1bdebf66-b10d-4016-93e9-0bc36edfd0ec req-675cd533-75b1-4396-a398-51a514286f4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing network info cache for port 8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:22:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:22:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:21.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:22:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:21.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:21 np0005466030 nova_compute[230518]: 2025-10-02 12:22:21.721 2 DEBUG nova.network.neutron [req-1bdebf66-b10d-4016-93e9-0bc36edfd0ec req-675cd533-75b1-4396-a398-51a514286f4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:22:21 np0005466030 nova_compute[230518]: 2025-10-02 12:22:21.836 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully updated port: b84676b0-d376-4ced-99fb-08e677046d6f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:22:22 np0005466030 nova_compute[230518]: 2025-10-02 12:22:22.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:22 np0005466030 nova_compute[230518]: 2025-10-02 12:22:22.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:22 np0005466030 nova_compute[230518]: 2025-10-02 12:22:22.870 2 DEBUG nova.network.neutron [req-1bdebf66-b10d-4016-93e9-0bc36edfd0ec req-675cd533-75b1-4396-a398-51a514286f4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:22:22 np0005466030 nova_compute[230518]: 2025-10-02 12:22:22.892 2 DEBUG oslo_concurrency.lockutils [req-1bdebf66-b10d-4016-93e9-0bc36edfd0ec req-675cd533-75b1-4396-a398-51a514286f4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:22:23 np0005466030 nova_compute[230518]: 2025-10-02 12:22:23.076 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully updated port: c9731d13-4315-4bdc-9d24-a91ce1d8d427 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:22:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:22:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:23.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:22:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:22:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:23.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:22:23 np0005466030 nova_compute[230518]: 2025-10-02 12:22:23.679 2 DEBUG nova.compute.manager [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-changed-b84676b0-d376-4ced-99fb-08e677046d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:23 np0005466030 nova_compute[230518]: 2025-10-02 12:22:23.680 2 DEBUG nova.compute.manager [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing instance network info cache due to event network-changed-b84676b0-d376-4ced-99fb-08e677046d6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:22:23 np0005466030 nova_compute[230518]: 2025-10-02 12:22:23.680 2 DEBUG oslo_concurrency.lockutils [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:22:23 np0005466030 nova_compute[230518]: 2025-10-02 12:22:23.680 2 DEBUG oslo_concurrency.lockutils [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:22:23 np0005466030 nova_compute[230518]: 2025-10-02 12:22:23.680 2 DEBUG nova.network.neutron [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing network info cache for port b84676b0-d376-4ced-99fb-08e677046d6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:22:24 np0005466030 nova_compute[230518]: 2025-10-02 12:22:24.039 2 DEBUG nova.network.neutron [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:22:24 np0005466030 nova_compute[230518]: 2025-10-02 12:22:24.704 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407729.6994617, 7beacac0-65ce-4e15-a73c-9b50a50f968e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:22:24 np0005466030 nova_compute[230518]: 2025-10-02 12:22:24.704 2 INFO nova.compute.manager [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:22:24 np0005466030 nova_compute[230518]: 2025-10-02 12:22:24.745 2 DEBUG nova.compute.manager [None req-0a3308f2-0ef4-4810-aef3-c57faf98c7d6 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:24 np0005466030 nova_compute[230518]: 2025-10-02 12:22:24.903 2 DEBUG nova.network.neutron [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:22:24 np0005466030 nova_compute[230518]: 2025-10-02 12:22:24.945 2 DEBUG oslo_concurrency.lockutils [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:22:24 np0005466030 nova_compute[230518]: 2025-10-02 12:22:24.945 2 DEBUG nova.compute.manager [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-changed-c9731d13-4315-4bdc-9d24-a91ce1d8d427 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:24 np0005466030 nova_compute[230518]: 2025-10-02 12:22:24.945 2 DEBUG nova.compute.manager [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing instance network info cache due to event network-changed-c9731d13-4315-4bdc-9d24-a91ce1d8d427. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:22:24 np0005466030 nova_compute[230518]: 2025-10-02 12:22:24.946 2 DEBUG oslo_concurrency.lockutils [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:22:24 np0005466030 nova_compute[230518]: 2025-10-02 12:22:24.946 2 DEBUG oslo_concurrency.lockutils [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:22:24 np0005466030 nova_compute[230518]: 2025-10-02 12:22:24.946 2 DEBUG nova.network.neutron [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing network info cache for port c9731d13-4315-4bdc-9d24-a91ce1d8d427 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:22:25 np0005466030 nova_compute[230518]: 2025-10-02 12:22:25.283 2 DEBUG nova.network.neutron [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:22:25 np0005466030 nova_compute[230518]: 2025-10-02 12:22:25.354 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully updated port: 2ef879b2-3519-40b6-8207-d24b0e1a39de _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:22:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:25.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:25.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:25 np0005466030 nova_compute[230518]: 2025-10-02 12:22:25.803 2 DEBUG nova.compute.manager [req-d536b2a8-f19e-474d-8b29-84497299674e req-b1d1abb7-57e0-4bbd-85f1-707e3139d19a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-changed-2ef879b2-3519-40b6-8207-d24b0e1a39de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:25 np0005466030 nova_compute[230518]: 2025-10-02 12:22:25.803 2 DEBUG nova.compute.manager [req-d536b2a8-f19e-474d-8b29-84497299674e req-b1d1abb7-57e0-4bbd-85f1-707e3139d19a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing instance network info cache due to event network-changed-2ef879b2-3519-40b6-8207-d24b0e1a39de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:22:25 np0005466030 nova_compute[230518]: 2025-10-02 12:22:25.803 2 DEBUG oslo_concurrency.lockutils [req-d536b2a8-f19e-474d-8b29-84497299674e req-b1d1abb7-57e0-4bbd-85f1-707e3139d19a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:22:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e198 e198: 3 total, 3 up, 3 in
Oct  2 08:22:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:25.920 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:25.921 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:25.921 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:26 np0005466030 nova_compute[230518]: 2025-10-02 12:22:26.213 2 DEBUG nova.network.neutron [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:22:26 np0005466030 nova_compute[230518]: 2025-10-02 12:22:26.242 2 DEBUG oslo_concurrency.lockutils [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:22:26 np0005466030 nova_compute[230518]: 2025-10-02 12:22:26.243 2 DEBUG oslo_concurrency.lockutils [req-d536b2a8-f19e-474d-8b29-84497299674e req-b1d1abb7-57e0-4bbd-85f1-707e3139d19a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:22:26 np0005466030 nova_compute[230518]: 2025-10-02 12:22:26.243 2 DEBUG nova.network.neutron [req-d536b2a8-f19e-474d-8b29-84497299674e req-b1d1abb7-57e0-4bbd-85f1-707e3139d19a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing network info cache for port 2ef879b2-3519-40b6-8207-d24b0e1a39de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:22:26 np0005466030 nova_compute[230518]: 2025-10-02 12:22:26.466 2 DEBUG nova.network.neutron [req-d536b2a8-f19e-474d-8b29-84497299674e req-b1d1abb7-57e0-4bbd-85f1-707e3139d19a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:22:27 np0005466030 nova_compute[230518]: 2025-10-02 12:22:27.023 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully updated port: af15c204-50a0-4b32-a3a7-46c9b925ec87 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:22:27 np0005466030 nova_compute[230518]: 2025-10-02 12:22:27.075 2 DEBUG nova.network.neutron [req-d536b2a8-f19e-474d-8b29-84497299674e req-b1d1abb7-57e0-4bbd-85f1-707e3139d19a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:22:27 np0005466030 nova_compute[230518]: 2025-10-02 12:22:27.088 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:22:27 np0005466030 nova_compute[230518]: 2025-10-02 12:22:27.119 2 DEBUG oslo_concurrency.lockutils [req-d536b2a8-f19e-474d-8b29-84497299674e req-b1d1abb7-57e0-4bbd-85f1-707e3139d19a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:22:27 np0005466030 nova_compute[230518]: 2025-10-02 12:22:27.120 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquired lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:22:27 np0005466030 nova_compute[230518]: 2025-10-02 12:22:27.120 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:22:27 np0005466030 nova_compute[230518]: 2025-10-02 12:22:27.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:27 np0005466030 nova_compute[230518]: 2025-10-02 12:22:27.390 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:22:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:22:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:27.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:22:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:27.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:27.618 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:27 np0005466030 nova_compute[230518]: 2025-10-02 12:22:27.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:27 np0005466030 nova_compute[230518]: 2025-10-02 12:22:27.955 2 DEBUG nova.compute.manager [req-3b768920-f1a4-4fa0-bcb7-cd2dcbb11852 req-94cb6435-42af-40db-ab3d-76403659aaab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-changed-af15c204-50a0-4b32-a3a7-46c9b925ec87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:27 np0005466030 nova_compute[230518]: 2025-10-02 12:22:27.956 2 DEBUG nova.compute.manager [req-3b768920-f1a4-4fa0-bcb7-cd2dcbb11852 req-94cb6435-42af-40db-ab3d-76403659aaab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing instance network info cache due to event network-changed-af15c204-50a0-4b32-a3a7-46c9b925ec87. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:22:27 np0005466030 nova_compute[230518]: 2025-10-02 12:22:27.956 2 DEBUG oslo_concurrency.lockutils [req-3b768920-f1a4-4fa0-bcb7-cd2dcbb11852 req-94cb6435-42af-40db-ab3d-76403659aaab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:22:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:29.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:29.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:22:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:31.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:22:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:31.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:31 np0005466030 podman[250681]: 2025-10-02 12:22:31.803038988 +0000 UTC m=+0.054256800 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:22:31 np0005466030 nova_compute[230518]: 2025-10-02 12:22:31.862 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407736.8599455, 3cc914dc-40b0-4808-aff8-bd8e0c6789b1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:22:31 np0005466030 nova_compute[230518]: 2025-10-02 12:22:31.862 2 INFO nova.compute.manager [-] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:22:31 np0005466030 nova_compute[230518]: 2025-10-02 12:22:31.895 2 DEBUG nova.compute.manager [None req-6f07d140-2de2-473c-9294-6e75e2936381 - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:32 np0005466030 nova_compute[230518]: 2025-10-02 12:22:32.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:32 np0005466030 nova_compute[230518]: 2025-10-02 12:22:32.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:33.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:22:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:33.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:22:33 np0005466030 podman[250700]: 2025-10-02 12:22:33.841178045 +0000 UTC m=+0.097671128 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:22:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:22:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:35.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:22:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:35.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:37 np0005466030 nova_compute[230518]: 2025-10-02 12:22:37.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:37.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:22:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:37.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:22:37 np0005466030 nova_compute[230518]: 2025-10-02 12:22:37.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:39 np0005466030 nova_compute[230518]: 2025-10-02 12:22:39.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:39 np0005466030 nova_compute[230518]: 2025-10-02 12:22:39.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:22:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:39.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:22:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:39.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:40 np0005466030 nova_compute[230518]: 2025-10-02 12:22:40.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:40 np0005466030 podman[250751]: 2025-10-02 12:22:40.998414213 +0000 UTC m=+0.068316064 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 08:22:41 np0005466030 podman[250752]: 2025-10-02 12:22:41.00180905 +0000 UTC m=+0.073567709 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct  2 08:22:41 np0005466030 nova_compute[230518]: 2025-10-02 12:22:41.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:41 np0005466030 nova_compute[230518]: 2025-10-02 12:22:41.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:41 np0005466030 nova_compute[230518]: 2025-10-02 12:22:41.098 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:41 np0005466030 nova_compute[230518]: 2025-10-02 12:22:41.099 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:41 np0005466030 nova_compute[230518]: 2025-10-02 12:22:41.099 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:41 np0005466030 nova_compute[230518]: 2025-10-02 12:22:41.099 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:22:41 np0005466030 nova_compute[230518]: 2025-10-02 12:22:41.100 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:22:41 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4094779463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:22:41 np0005466030 nova_compute[230518]: 2025-10-02 12:22:41.558 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:22:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:41.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:22:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:22:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:41.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:22:41 np0005466030 nova_compute[230518]: 2025-10-02 12:22:41.727 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:22:41 np0005466030 nova_compute[230518]: 2025-10-02 12:22:41.728 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4703MB free_disk=20.94662857055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:22:41 np0005466030 nova_compute[230518]: 2025-10-02 12:22:41.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:41 np0005466030 nova_compute[230518]: 2025-10-02 12:22:41.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:41 np0005466030 nova_compute[230518]: 2025-10-02 12:22:41.849 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance bebdf690-5f58-4227-95e0-add2eae14645 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:22:41 np0005466030 nova_compute[230518]: 2025-10-02 12:22:41.850 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:22:41 np0005466030 nova_compute[230518]: 2025-10-02 12:22:41.850 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:22:41 np0005466030 nova_compute[230518]: 2025-10-02 12:22:41.961 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:42 np0005466030 nova_compute[230518]: 2025-10-02 12:22:42.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:22:42 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3141264522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:22:42 np0005466030 nova_compute[230518]: 2025-10-02 12:22:42.397 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:42 np0005466030 nova_compute[230518]: 2025-10-02 12:22:42.402 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:22:42 np0005466030 nova_compute[230518]: 2025-10-02 12:22:42.435 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:22:42 np0005466030 nova_compute[230518]: 2025-10-02 12:22:42.462 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:22:42 np0005466030 nova_compute[230518]: 2025-10-02 12:22:42.463 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:22:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:22:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:22:42 np0005466030 nova_compute[230518]: 2025-10-02 12:22:42.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:43 np0005466030 nova_compute[230518]: 2025-10-02 12:22:43.464 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:43 np0005466030 nova_compute[230518]: 2025-10-02 12:22:43.464 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:22:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:43.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:22:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:43.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.076 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.076 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.124 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [{"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.166 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Releasing lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.167 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance network_info: |[{"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.169 2 DEBUG oslo_concurrency.lockutils [req-3b768920-f1a4-4fa0-bcb7-cd2dcbb11852 req-94cb6435-42af-40db-ab3d-76403659aaab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.169 2 DEBUG nova.network.neutron [req-3b768920-f1a4-4fa0-bcb7-cd2dcbb11852 req-94cb6435-42af-40db-ab3d-76403659aaab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing network info cache for port af15c204-50a0-4b32-a3a7-46c9b925ec87 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.187 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Start _get_guest_xml network_info=[{"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk', 'boot_index': '2'}, '/dev/vdc': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk', 'boot_index': '3'}, 
'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T1
Oct  2 08:22:45 np0005466030 nova_compute[230518]: in_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'delete_on_termination': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0de00a00-9b68-498b-8bd0-88556bd22393', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0de00a00-9b68-498b-8bd0-88556bd22393', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'bebdf690-5f58-4227-95e0-add2eae14645', 'attached_at': '', 'detached_at': '', 'volume_id': '0de00a00-9b68-498b-8bd0-88556bd22393', 'serial': '0de00a00-9b68-498b-8bd0-88556bd22393'}, 'boot_index': 0, 'attachment_id': 'a17f6c4e-3b3e-4e2a-bc65-6e16cb06b24f', 'guest_format': None, 'volume_type': None}, {'mount_device': '/dev/vdb', 'delete_on_termination': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b93e6b37-6df0-4c49-81d5-526e5c68b542', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b93e6b37-6df0-4c49-81d5-526e5c68b542', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 
'bebdf690-5f58-4227-95e0-add2eae14645', 'attached_at': '', 'detached_at': '', 'volume_id': 'b93e6b37-6df0-4c49-81d5-526e5c68b542', 'serial': 'b93e6b37-6df0-4c49-81d5-526e5c68b542'}, 'boot_index': 1, 'attachment_id': 'eac04b36-4077-49c3-86b1-942e2b6eeb26', 'guest_format': None, 'volume_type': None}, {'mount_device': '/dev/vdc', 'delete_on_termination': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f3584061-a34e-4c55-a201-ea9e5f60b3e5', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f3584061-a34e-4c55-a201-ea9e5f60b3e5', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'bebdf690-5f58-4227-95e0-add2eae14645', 'attached_at': '', 'detached_at': '', 'volume_id': 'f3584061-a34e-4c55-a201-ea9e5f60b3e5', 'serial': 'f3584061-a34e-4c55-a201-ea9e5f60b3e5'}, 'boot_index': 2, 'attachment_id': '54598788-e706-49d2-9e91-e968eea915b1', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.193 2 WARNING nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.200 2 DEBUG nova.virt.libvirt.host [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.201 2 DEBUG nova.virt.libvirt.host [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.209 2 DEBUG nova.virt.libvirt.host [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.210 2 DEBUG nova.virt.libvirt.host [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.211 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.211 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.211 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.212 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.212 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.212 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.212 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.212 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.213 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.213 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.213 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.213 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.249 2 DEBUG nova.storage.rbd_utils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] rbd image bebdf690-5f58-4227-95e0-add2eae14645_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.254 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:45 np0005466030 rsyslogd[1006]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 12:22:45.187 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct  2 08:22:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:45.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:45.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:22:45 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2272921810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.681 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.802 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.803 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.804 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:a0:be,bridge_name='br-int',has_traffic_filtering=True,id=6a13b8d9-269d-4176-b4c7-693a5e26e74b,network=Network(bce86765-c9ec-46bc-a7a3-317bd0b94198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a13b8d9-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.805 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.806 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.807 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:de:0f,bridge_name='br-int',has_traffic_filtering=True,id=25721468-4447-4fb7-97f7-e805e64f0267,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25721468-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.807 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.808 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.808 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:64:8d,bridge_name='br-int',has_traffic_filtering=True,id=8d3881e4-99fe-4bc5-b5ab-5b3f06be6000,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8d3881e4-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.809 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.810 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.811 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:6e:55,bridge_name='br-int',has_traffic_filtering=True,id=b84676b0-d376-4ced-99fb-08e677046d6f,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb84676b0-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.812 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',imag
e_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.812 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.813 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:e7:f1,bridge_name='br-int',has_traffic_filtering=True,id=c9731d13-4315-4bdc-9d24-a91ce1d8d427,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9731d13-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.814 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',imag
e_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.814 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.815 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:b6:82,bridge_name='br-int',has_traffic_filtering=True,id=2ef879b2-3519-40b6-8207-d24b0e1a39de,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef879b2-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.816 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',imag
e_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.816 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.817 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:42,bridge_name='br-int',has_traffic_filtering=True,id=af15c204-50a0-4b32-a3a7-46c9b925ec87,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf15c204-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.818 2 DEBUG nova.objects.instance [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lazy-loading 'pci_devices' on Instance uuid bebdf690-5f58-4227-95e0-add2eae14645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.840 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:  <uuid>bebdf690-5f58-4227-95e0-add2eae14645</uuid>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:  <name>instance-00000030</name>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <nova:name>tempest-device-tagging-server-1257127232</nova:name>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:22:45</nova:creationTime>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <nova:user uuid="b978e493dbdc419e864471708c90b0b4">tempest-TaggedBootDevicesTest-1955030099-project-member</nova:user>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <nova:project uuid="dcab4f3b7c604f47befdd0a52db26eea">tempest-TaggedBootDevicesTest-1955030099</nova:project>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <nova:port uuid="6a13b8d9-269d-4176-b4c7-693a5e26e74b">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <nova:port uuid="25721468-4447-4fb7-97f7-e805e64f0267">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.1.1.243" ipVersion="4"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <nova:port uuid="8d3881e4-99fe-4bc5-b5ab-5b3f06be6000">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.1.1.231" ipVersion="4"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <nova:port uuid="b84676b0-d376-4ced-99fb-08e677046d6f">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.1.1.189" ipVersion="4"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <nova:port uuid="c9731d13-4315-4bdc-9d24-a91ce1d8d427">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.1.1.217" ipVersion="4"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <nova:port uuid="2ef879b2-3519-40b6-8207-d24b0e1a39de">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.2.2.100" ipVersion="4"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <nova:port uuid="af15c204-50a0-4b32-a3a7-46c9b925ec87">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.2.2.200" ipVersion="4"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <entry name="serial">bebdf690-5f58-4227-95e0-add2eae14645</entry>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <entry name="uuid">bebdf690-5f58-4227-95e0-add2eae14645</entry>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/bebdf690-5f58-4227-95e0-add2eae14645_disk.config">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="volumes/volume-0de00a00-9b68-498b-8bd0-88556bd22393">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <serial>0de00a00-9b68-498b-8bd0-88556bd22393</serial>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="volumes/volume-b93e6b37-6df0-4c49-81d5-526e5c68b542">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <target dev="vdb" bus="virtio"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <serial>b93e6b37-6df0-4c49-81d5-526e5c68b542</serial>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="volumes/volume-f3584061-a34e-4c55-a201-ea9e5f60b3e5">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <target dev="vdc" bus="virtio"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <serial>f3584061-a34e-4c55-a201-ea9e5f60b3e5</serial>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:fb:a0:be"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <target dev="tap6a13b8d9-26"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:c5:de:0f"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <target dev="tap25721468-44"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:1e:64:8d"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <target dev="tap8d3881e4-99"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:7b:6e:55"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <target dev="tapb84676b0-d3"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:b6:e7:f1"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <target dev="tapc9731d13-43"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:28:b6:82"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <target dev="tap2ef879b2-35"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:a3:ce:42"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <target dev="tapaf15c204-50"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645/console.log" append="off"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:22:45 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:22:45 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:22:45 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:22:45 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.843 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Preparing to wait for external event network-vif-plugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.843 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.843 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.844 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.844 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Preparing to wait for external event network-vif-plugged-25721468-4447-4fb7-97f7-e805e64f0267 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.845 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.845 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.845 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.846 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Preparing to wait for external event network-vif-plugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.846 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.846 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.847 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.847 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Preparing to wait for external event network-vif-plugged-b84676b0-d376-4ced-99fb-08e677046d6f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.847 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.848 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.848 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.848 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Preparing to wait for external event network-vif-plugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.848 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.849 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.849 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.849 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Preparing to wait for external event network-vif-plugged-2ef879b2-3519-40b6-8207-d24b0e1a39de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.849 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.850 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.850 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.850 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Preparing to wait for external event network-vif-plugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.851 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.851 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.851 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.853 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='vi
rtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.854 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.855 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:a0:be,bridge_name='br-int',has_traffic_filtering=True,id=6a13b8d9-269d-4176-b4c7-693a5e26e74b,network=Network(bce86765-c9ec-46bc-a7a3-317bd0b94198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a13b8d9-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.855 2 DEBUG os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:a0:be,bridge_name='br-int',has_traffic_filtering=True,id=6a13b8d9-269d-4176-b4c7-693a5e26e74b,network=Network(bce86765-c9ec-46bc-a7a3-317bd0b94198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a13b8d9-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.857 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.858 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.862 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a13b8d9-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.863 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6a13b8d9-26, col_values=(('external_ids', {'iface-id': '6a13b8d9-269d-4176-b4c7-693a5e26e74b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:a0:be', 'vm-uuid': 'bebdf690-5f58-4227-95e0-add2eae14645'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 NetworkManager[44960]: <info>  [1759407765.8667] manager: (tap6a13b8d9-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.873 2 INFO os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:a0:be,bridge_name='br-int',has_traffic_filtering=True,id=6a13b8d9-269d-4176-b4c7-693a5e26e74b,network=Network(bce86765-c9ec-46bc-a7a3-317bd0b94198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a13b8d9-26')#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.874 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='vi
rtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.874 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.875 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:de:0f,bridge_name='br-int',has_traffic_filtering=True,id=25721468-4447-4fb7-97f7-e805e64f0267,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25721468-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.875 2 DEBUG os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:de:0f,bridge_name='br-int',has_traffic_filtering=True,id=25721468-4447-4fb7-97f7-e805e64f0267,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25721468-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.876 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.876 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.880 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25721468-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.881 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25721468-44, col_values=(('external_ids', {'iface-id': '25721468-4447-4fb7-97f7-e805e64f0267', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:de:0f', 'vm-uuid': 'bebdf690-5f58-4227-95e0-add2eae14645'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 NetworkManager[44960]: <info>  [1759407765.8835] manager: (tap25721468-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.890 2 INFO os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:de:0f,bridge_name='br-int',has_traffic_filtering=True,id=25721468-4447-4fb7-97f7-e805e64f0267,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25721468-44')#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.891 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='vi
rtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.892 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.893 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:64:8d,bridge_name='br-int',has_traffic_filtering=True,id=8d3881e4-99fe-4bc5-b5ab-5b3f06be6000,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8d3881e4-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.894 2 DEBUG os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:64:8d,bridge_name='br-int',has_traffic_filtering=True,id=8d3881e4-99fe-4bc5-b5ab-5b3f06be6000,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8d3881e4-99') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.895 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.895 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.898 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d3881e4-99, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.898 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d3881e4-99, col_values=(('external_ids', {'iface-id': '8d3881e4-99fe-4bc5-b5ab-5b3f06be6000', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:64:8d', 'vm-uuid': 'bebdf690-5f58-4227-95e0-add2eae14645'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 NetworkManager[44960]: <info>  [1759407765.9008] manager: (tap8d3881e4-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.911 2 INFO os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:64:8d,bridge_name='br-int',has_traffic_filtering=True,id=8d3881e4-99fe-4bc5-b5ab-5b3f06be6000,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8d3881e4-99')#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.911 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='vi
rtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.912 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.912 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:6e:55,bridge_name='br-int',has_traffic_filtering=True,id=b84676b0-d376-4ced-99fb-08e677046d6f,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb84676b0-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.913 2 DEBUG os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:6e:55,bridge_name='br-int',has_traffic_filtering=True,id=b84676b0-d376-4ced-99fb-08e677046d6f,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb84676b0-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.913 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.914 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb84676b0-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb84676b0-d3, col_values=(('external_ids', {'iface-id': 'b84676b0-d376-4ced-99fb-08e677046d6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:6e:55', 'vm-uuid': 'bebdf690-5f58-4227-95e0-add2eae14645'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 NetworkManager[44960]: <info>  [1759407765.9185] manager: (tapb84676b0-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.934 2 INFO os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:6e:55,bridge_name='br-int',has_traffic_filtering=True,id=b84676b0-d376-4ced-99fb-08e677046d6f,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb84676b0-d3')#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.935 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='vi
rtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.935 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.936 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:e7:f1,bridge_name='br-int',has_traffic_filtering=True,id=c9731d13-4315-4bdc-9d24-a91ce1d8d427,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9731d13-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.936 2 DEBUG os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:e7:f1,bridge_name='br-int',has_traffic_filtering=True,id=c9731d13-4315-4bdc-9d24-a91ce1d8d427,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9731d13-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.936 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.937 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.939 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9731d13-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.939 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc9731d13-43, col_values=(('external_ids', {'iface-id': 'c9731d13-4315-4bdc-9d24-a91ce1d8d427', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b6:e7:f1', 'vm-uuid': 'bebdf690-5f58-4227-95e0-add2eae14645'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 NetworkManager[44960]: <info>  [1759407765.9420] manager: (tapc9731d13-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.957 2 INFO os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:e7:f1,bridge_name='br-int',has_traffic_filtering=True,id=c9731d13-4315-4bdc-9d24-a91ce1d8d427,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9731d13-43')#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.958 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.958 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.959 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:b6:82,bridge_name='br-int',has_traffic_filtering=True,id=2ef879b2-3519-40b6-8207-d24b0e1a39de,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef879b2-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.959 2 DEBUG os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:b6:82,bridge_name='br-int',has_traffic_filtering=True,id=2ef879b2-3519-40b6-8207-d24b0e1a39de,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef879b2-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.959 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.960 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.961 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ef879b2-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.961 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2ef879b2-35, col_values=(('external_ids', {'iface-id': '2ef879b2-3519-40b6-8207-d24b0e1a39de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:b6:82', 'vm-uuid': 'bebdf690-5f58-4227-95e0-add2eae14645'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 NetworkManager[44960]: <info>  [1759407765.9640] manager: (tap2ef879b2-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.981 2 INFO os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:b6:82,bridge_name='br-int',has_traffic_filtering=True,id=2ef879b2-3519-40b6-8207-d24b0e1a39de,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef879b2-35')#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.982 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.982 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.983 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:42,bridge_name='br-int',has_traffic_filtering=True,id=af15c204-50a0-4b32-a3a7-46c9b925ec87,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf15c204-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.983 2 DEBUG os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:42,bridge_name='br-int',has_traffic_filtering=True,id=af15c204-50a0-4b32-a3a7-46c9b925ec87,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf15c204-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.984 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.984 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.986 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaf15c204-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.986 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaf15c204-50, col_values=(('external_ids', {'iface-id': 'af15c204-50a0-4b32-a3a7-46c9b925ec87', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:ce:42', 'vm-uuid': 'bebdf690-5f58-4227-95e0-add2eae14645'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:45 np0005466030 NetworkManager[44960]: <info>  [1759407765.9881] manager: (tapaf15c204-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:45 np0005466030 nova_compute[230518]: 2025-10-02 12:22:45.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:22:46 np0005466030 nova_compute[230518]: 2025-10-02 12:22:46.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:46 np0005466030 nova_compute[230518]: 2025-10-02 12:22:46.005 2 INFO os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:42,bridge_name='br-int',has_traffic_filtering=True,id=af15c204-50a0-4b32-a3a7-46c9b925ec87,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf15c204-50')#033[00m
Oct  2 08:22:46 np0005466030 nova_compute[230518]: 2025-10-02 12:22:46.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:46 np0005466030 nova_compute[230518]: 2025-10-02 12:22:46.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:22:46 np0005466030 nova_compute[230518]: 2025-10-02 12:22:46.223 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:22:46 np0005466030 nova_compute[230518]: 2025-10-02 12:22:46.224 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:22:46 np0005466030 nova_compute[230518]: 2025-10-02 12:22:46.224 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] No VIF found with MAC fa:16:3e:fb:a0:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:22:46 np0005466030 nova_compute[230518]: 2025-10-02 12:22:46.225 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] No VIF found with MAC fa:16:3e:b6:e7:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:22:46 np0005466030 nova_compute[230518]: 2025-10-02 12:22:46.226 2 INFO nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Using config drive#033[00m
Oct  2 08:22:46 np0005466030 nova_compute[230518]: 2025-10-02 12:22:46.314 2 DEBUG nova.storage.rbd_utils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] rbd image bebdf690-5f58-4227-95e0-add2eae14645_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:46 np0005466030 nova_compute[230518]: 2025-10-02 12:22:46.907 2 INFO nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Creating config drive at /var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645/disk.config#033[00m
Oct  2 08:22:46 np0005466030 nova_compute[230518]: 2025-10-02 12:22:46.917 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbm0il_va execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:47 np0005466030 nova_compute[230518]: 2025-10-02 12:22:47.071 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbm0il_va" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:47 np0005466030 nova_compute[230518]: 2025-10-02 12:22:47.196 2 DEBUG nova.storage.rbd_utils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] rbd image bebdf690-5f58-4227-95e0-add2eae14645_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:22:47 np0005466030 nova_compute[230518]: 2025-10-02 12:22:47.200 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645/disk.config bebdf690-5f58-4227-95e0-add2eae14645_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:22:47 np0005466030 nova_compute[230518]: 2025-10-02 12:22:47.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:47.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:47.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.190 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645/disk.config bebdf690-5f58-4227-95e0-add2eae14645_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.990s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.191 2 INFO nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Deleting local config drive /var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645/disk.config because it was imported into RBD.#033[00m
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.2727] manager: (tap6a13b8d9-26): new Tun device (/org/freedesktop/NetworkManager/Devices/90)
Oct  2 08:22:48 np0005466030 kernel: tap6a13b8d9-26: entered promiscuous mode
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.2918] manager: (tap25721468-44): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Oct  2 08:22:48 np0005466030 kernel: tap25721468-44: entered promiscuous mode
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00189|binding|INFO|Claiming lport 25721468-4447-4fb7-97f7-e805e64f0267 for this chassis.
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00190|binding|INFO|25721468-4447-4fb7-97f7-e805e64f0267: Claiming fa:16:3e:c5:de:0f 10.1.1.243
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00191|binding|INFO|Claiming lport 6a13b8d9-269d-4176-b4c7-693a5e26e74b for this chassis.
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00192|binding|INFO|6a13b8d9-269d-4176-b4c7-693a5e26e74b: Claiming fa:16:3e:fb:a0:be 10.100.0.9
Oct  2 08:22:48 np0005466030 systemd-udevd[251094]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:22:48 np0005466030 systemd-udevd[251097]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.3145] manager: (tap8d3881e4-99): new Tun device (/org/freedesktop/NetworkManager/Devices/92)
Oct  2 08:22:48 np0005466030 systemd-udevd[251099]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.3224] device (tap6a13b8d9-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.3242] device (tap6a13b8d9-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.321 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:de:0f 10.1.1.243'], port_security=['fa:16:3e:c5:de:0f 10.1.1.243'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-1274683935', 'neutron:cidrs': '10.1.1.243/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aed857d-6573-41ca-b0a5-fcab18195955', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-1274683935', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '2', 'neutron:security_group_ids': '517909f5-5ad1-4dd2-b684-855fd2b0ba7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=081069ba-be0d-48ee-adb7-b26de4dcbd97, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=25721468-4447-4fb7-97f7-e805e64f0267) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.325 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:a0:be 10.100.0.9'], port_security=['fa:16:3e:fb:a0:be 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bce86765-c9ec-46bc-a7a3-317bd0b94198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09a116df-d45e-4936-a295-e45094ee631c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6a13b8d9-269d-4176-b4c7-693a5e26e74b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.328 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 25721468-4447-4fb7-97f7-e805e64f0267 in datapath 9aed857d-6573-41ca-b0a5-fcab18195955 bound to our chassis#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.330 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9aed857d-6573-41ca-b0a5-fcab18195955#033[00m
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.3316] device (tap25721468-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.3328] device (tap25721468-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.3359] manager: (tapb84676b0-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.346 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2e659348-d1e5-4d2e-8a32-eba1e39e5d37]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.347 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9aed857d-61 in ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.350 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9aed857d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.350 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[93267881-7786-41cc-b272-f4703f0fc8bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.351 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[663f4d12-5ac2-416f-ba2b-d637d6060202]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.3587] manager: (tapc9731d13-43): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.368 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[920c22b4-6be2-4bf8-91ba-1266fe54b0cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.3841] manager: (tap2ef879b2-35): new Tun device (/org/freedesktop/NetworkManager/Devices/95)
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.400 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e55ff10f-fc8f-4d61-b461-569d8e01ccc3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.4109] manager: (tapaf15c204-50): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Oct  2 08:22:48 np0005466030 systemd-udevd[251104]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.424 2 DEBUG nova.network.neutron [req-3b768920-f1a4-4fa0-bcb7-cd2dcbb11852 req-94cb6435-42af-40db-ab3d-76403659aaab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updated VIF entry in instance network info cache for port af15c204-50a0-4b32-a3a7-46c9b925ec87. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.425 2 DEBUG nova.network.neutron [req-3b768920-f1a4-4fa0-bcb7-cd2dcbb11852 req-94cb6435-42af-40db-ab3d-76403659aaab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [{"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:22:48 np0005466030 kernel: tap8d3881e4-99: entered promiscuous mode
Oct  2 08:22:48 np0005466030 kernel: tap2ef879b2-35: entered promiscuous mode
Oct  2 08:22:48 np0005466030 kernel: tapb84676b0-d3: entered promiscuous mode
Oct  2 08:22:48 np0005466030 kernel: tapaf15c204-50: entered promiscuous mode
Oct  2 08:22:48 np0005466030 kernel: tapc9731d13-43: entered promiscuous mode
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.4349] device (tap8d3881e4-99): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.4371] device (tap2ef879b2-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.4384] device (tapb84676b0-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.4395] device (tapaf15c204-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00193|binding|INFO|Claiming lport 8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 for this chassis.
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00194|binding|INFO|8d3881e4-99fe-4bc5-b5ab-5b3f06be6000: Claiming fa:16:3e:1e:64:8d 10.1.1.231
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00195|binding|INFO|Claiming lport 2ef879b2-3519-40b6-8207-d24b0e1a39de for this chassis.
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00196|binding|INFO|2ef879b2-3519-40b6-8207-d24b0e1a39de: Claiming fa:16:3e:28:b6:82 10.2.2.100
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00197|binding|INFO|Claiming lport af15c204-50a0-4b32-a3a7-46c9b925ec87 for this chassis.
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00198|binding|INFO|af15c204-50a0-4b32-a3a7-46c9b925ec87: Claiming fa:16:3e:a3:ce:42 10.2.2.200
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00199|binding|INFO|Claiming lport c9731d13-4315-4bdc-9d24-a91ce1d8d427 for this chassis.
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00200|binding|INFO|c9731d13-4315-4bdc-9d24-a91ce1d8d427: Claiming fa:16:3e:b6:e7:f1 10.1.1.217
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00201|binding|INFO|Claiming lport b84676b0-d376-4ced-99fb-08e677046d6f for this chassis.
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00202|binding|INFO|b84676b0-d376-4ced-99fb-08e677046d6f: Claiming fa:16:3e:7b:6e:55 10.1.1.189
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.4481] device (tapc9731d13-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.4495] device (tap8d3881e4-99): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.4509] device (tap2ef879b2-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.449 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff66ed2-da94-4aa5-a618-9b28c33aa3d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.453 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:ce:42 10.2.2.200'], port_security=['fa:16:3e:a3:ce:42 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16f75dae-02da-4559-9be9-2b702ece41dd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f833f0c-43e5-49bc-a978-920c53d86b7d, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=af15c204-50a0-4b32-a3a7-46c9b925ec87) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.455 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:64:8d 10.1.1.231'], port_security=['fa:16:3e:1e:64:8d 10.1.1.231'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-148297662', 'neutron:cidrs': '10.1.1.231/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aed857d-6573-41ca-b0a5-fcab18195955', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-148297662', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '2', 'neutron:security_group_ids': '517909f5-5ad1-4dd2-b684-855fd2b0ba7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=081069ba-be0d-48ee-adb7-b26de4dcbd97, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8d3881e4-99fe-4bc5-b5ab-5b3f06be6000) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.456 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:6e:55 10.1.1.189'], port_security=['fa:16:3e:7b:6e:55 10.1.1.189'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.189/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aed857d-6573-41ca-b0a5-fcab18195955', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=081069ba-be0d-48ee-adb7-b26de4dcbd97, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=b84676b0-d376-4ced-99fb-08e677046d6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.4577] device (tapb84676b0-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.456 2 DEBUG oslo_concurrency.lockutils [req-3b768920-f1a4-4fa0-bcb7-cd2dcbb11852 req-94cb6435-42af-40db-ab3d-76403659aaab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.4582] device (tapaf15c204-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.4586] device (tapc9731d13-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00203|binding|INFO|Setting lport 6a13b8d9-269d-4176-b4c7-693a5e26e74b ovn-installed in OVS
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00204|binding|INFO|Setting lport 6a13b8d9-269d-4176-b4c7-693a5e26e74b up in Southbound
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00205|binding|INFO|Setting lport 25721468-4447-4fb7-97f7-e805e64f0267 ovn-installed in OVS
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00206|binding|INFO|Setting lport 25721468-4447-4fb7-97f7-e805e64f0267 up in Southbound
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.457 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:e7:f1 10.1.1.217'], port_security=['fa:16:3e:b6:e7:f1 10.1.1.217'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.217/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aed857d-6573-41ca-b0a5-fcab18195955', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=081069ba-be0d-48ee-adb7-b26de4dcbd97, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=c9731d13-4315-4bdc-9d24-a91ce1d8d427) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.459 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:b6:82 10.2.2.100'], port_security=['fa:16:3e:28:b6:82 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16f75dae-02da-4559-9be9-2b702ece41dd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f833f0c-43e5-49bc-a978-920c53d86b7d, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=2ef879b2-3519-40b6-8207-d24b0e1a39de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.462 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fadcfe2f-5413-437d-89d3-83c8f5409367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:48 np0005466030 systemd-machined[188247]: New machine qemu-27-instance-00000030.
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.4640] manager: (tap9aed857d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/97)
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.489 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2b34e25b-d875-4504-bd61-68551d96c559]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.491 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed48aa7-c237-4eb6-8146-73ffaa77c691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.5078] device (tap9aed857d-60): carrier: link connected
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.511 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ed5e5a-1e79-4b15-8fcd-a250ca4807f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.527 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[41bda245-eb35-4416-b414-086d5625261b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9aed857d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:10:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563206, 'reachable_time': 19770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251148, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:48 np0005466030 systemd[1]: Started Virtual Machine qemu-27-instance-00000030.
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.545 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dadc8db1-bda3-4456-aa2c-c0c5f3b0759e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2c:1039'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563206, 'tstamp': 563206}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251149, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.564 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c8dc04eb-88a3-4614-b1b8-7463a2f4ea42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9aed857d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:10:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563206, 'reachable_time': 19770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251151, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.587 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2d826962-0d7e-45ab-8c52-aca7b23d4f5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00207|binding|INFO|Setting lport 8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 ovn-installed in OVS
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00208|binding|INFO|Setting lport 8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 up in Southbound
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00209|binding|INFO|Setting lport 2ef879b2-3519-40b6-8207-d24b0e1a39de ovn-installed in OVS
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00210|binding|INFO|Setting lport 2ef879b2-3519-40b6-8207-d24b0e1a39de up in Southbound
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00211|binding|INFO|Setting lport b84676b0-d376-4ced-99fb-08e677046d6f ovn-installed in OVS
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00212|binding|INFO|Setting lport b84676b0-d376-4ced-99fb-08e677046d6f up in Southbound
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00213|binding|INFO|Setting lport af15c204-50a0-4b32-a3a7-46c9b925ec87 ovn-installed in OVS
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00214|binding|INFO|Setting lport af15c204-50a0-4b32-a3a7-46c9b925ec87 up in Southbound
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00215|binding|INFO|Setting lport c9731d13-4315-4bdc-9d24-a91ce1d8d427 ovn-installed in OVS
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00216|binding|INFO|Setting lport c9731d13-4315-4bdc-9d24-a91ce1d8d427 up in Southbound
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.647 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[750cd72c-baa1-4000-89be-fbddc98b7948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.648 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9aed857d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.648 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.648 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9aed857d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:48 np0005466030 kernel: tap9aed857d-60: entered promiscuous mode
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.652 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9aed857d-60, col_values=(('external_ids', {'iface-id': 'ead9b5d4-ba41-48d9-8938-c4b0a2b5cd08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:48Z|00217|binding|INFO|Releasing lport ead9b5d4-ba41-48d9-8938-c4b0a2b5cd08 from this chassis (sb_readonly=0)
Oct  2 08:22:48 np0005466030 NetworkManager[44960]: <info>  [1759407768.6548] manager: (tap9aed857d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.672 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9aed857d-6573-41ca-b0a5-fcab18195955.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9aed857d-6573-41ca-b0a5-fcab18195955.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.673 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[76337bd5-c2de-43a8-aa06-28da18354bee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.674 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-9aed857d-6573-41ca-b0a5-fcab18195955
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/9aed857d-6573-41ca-b0a5-fcab18195955.pid.haproxy
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 9aed857d-6573-41ca-b0a5-fcab18195955
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:22:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.674 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'env', 'PROCESS_TAG=haproxy-9aed857d-6573-41ca-b0a5-fcab18195955', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9aed857d-6573-41ca-b0a5-fcab18195955.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.825 2 DEBUG nova.compute.manager [req-3191e38b-40a1-4a28-8504-00d49043b570 req-d238769c-1e84-49f1-9109-1acf76c3211b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.825 2 DEBUG oslo_concurrency.lockutils [req-3191e38b-40a1-4a28-8504-00d49043b570 req-d238769c-1e84-49f1-9109-1acf76c3211b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.826 2 DEBUG oslo_concurrency.lockutils [req-3191e38b-40a1-4a28-8504-00d49043b570 req-d238769c-1e84-49f1-9109-1acf76c3211b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.826 2 DEBUG oslo_concurrency.lockutils [req-3191e38b-40a1-4a28-8504-00d49043b570 req-d238769c-1e84-49f1-9109-1acf76c3211b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:48 np0005466030 nova_compute[230518]: 2025-10-02 12:22:48.826 2 DEBUG nova.compute.manager [req-3191e38b-40a1-4a28-8504-00d49043b570 req-d238769c-1e84-49f1-9109-1acf76c3211b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Processing event network-vif-plugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:22:49 np0005466030 nova_compute[230518]: 2025-10-02 12:22:49.032 2 DEBUG nova.compute.manager [req-c17f7d65-94ff-4e0a-bbe9-be29ff8b239f req-6268c53a-4c16-4705-86ec-710381bc3e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-25721468-4447-4fb7-97f7-e805e64f0267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:49 np0005466030 nova_compute[230518]: 2025-10-02 12:22:49.034 2 DEBUG oslo_concurrency.lockutils [req-c17f7d65-94ff-4e0a-bbe9-be29ff8b239f req-6268c53a-4c16-4705-86ec-710381bc3e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:49 np0005466030 nova_compute[230518]: 2025-10-02 12:22:49.035 2 DEBUG oslo_concurrency.lockutils [req-c17f7d65-94ff-4e0a-bbe9-be29ff8b239f req-6268c53a-4c16-4705-86ec-710381bc3e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:49 np0005466030 nova_compute[230518]: 2025-10-02 12:22:49.035 2 DEBUG oslo_concurrency.lockutils [req-c17f7d65-94ff-4e0a-bbe9-be29ff8b239f req-6268c53a-4c16-4705-86ec-710381bc3e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:49 np0005466030 nova_compute[230518]: 2025-10-02 12:22:49.036 2 DEBUG nova.compute.manager [req-c17f7d65-94ff-4e0a-bbe9-be29ff8b239f req-6268c53a-4c16-4705-86ec-710381bc3e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Processing event network-vif-plugged-25721468-4447-4fb7-97f7-e805e64f0267 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:22:49 np0005466030 podman[251188]: 2025-10-02 12:22:49.041930535 +0000 UTC m=+0.026554388 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:22:49 np0005466030 podman[251188]: 2025-10-02 12:22:49.322138654 +0000 UTC m=+0.306762487 container create f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:22:49 np0005466030 systemd[1]: Started libpod-conmon-f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5.scope.
Oct  2 08:22:49 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:22:49 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac5e0d3911095a7aec742e67ce07e80aafbfc58c9f078a790a45d6aff0e44e5e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:22:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:49.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:49.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:49 np0005466030 podman[251188]: 2025-10-02 12:22:49.608002981 +0000 UTC m=+0.592626854 container init f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:22:49 np0005466030 podman[251188]: 2025-10-02 12:22:49.615220629 +0000 UTC m=+0.599844472 container start f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:22:49 np0005466030 neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955[251264]: [NOTICE]   (251291) : New worker (251293) forked
Oct  2 08:22:49 np0005466030 neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955[251264]: [NOTICE]   (251291) : Loading success.
Oct  2 08:22:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.813 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6a13b8d9-269d-4176-b4c7-693a5e26e74b in datapath bce86765-c9ec-46bc-a7a3-317bd0b94198 unbound from our chassis#033[00m
Oct  2 08:22:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.818 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bce86765-c9ec-46bc-a7a3-317bd0b94198#033[00m
Oct  2 08:22:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.833 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[52af74a4-2f5a-4367-9058-68e8d9a4882d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.834 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbce86765-c1 in ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:22:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.836 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbce86765-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:22:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.836 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a33b4014-dc0b-4c2c-8164-b9cec8caa60a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.837 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c65f82fb-ad81-4136-a96d-48a641f5031d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.853 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc293fb-025a-4f24-baa4-3e09ef48c28e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.887 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0be9e24e-874d-46de-afc1-1c246b2f740a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.921 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2075ae40-ab66-4032-8825-900bd68b9897]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.929 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d7ea78-e94d-41b0-9cda-9419c2ec63bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:49 np0005466030 NetworkManager[44960]: <info>  [1759407769.9308] manager: (tapbce86765-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Oct  2 08:22:49 np0005466030 systemd-udevd[251139]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:22:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.966 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5f639628-f9c6-4fde-8444-ad0362bf6bab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.969 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[96a38da3-9e9c-4d60-ba7e-d18558e4c319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:49 np0005466030 NetworkManager[44960]: <info>  [1759407769.9882] device (tapbce86765-c0): carrier: link connected
Oct  2 08:22:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.995 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[add7973e-db1a-4ad7-9198-4719cc23718e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.015 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[669c0999-f357-444d-8c96-95707722914f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbce86765-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:67:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563354, 'reachable_time': 20734, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251312, 'error': None, 'target': 'ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.030 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3720f308-83b8-41c8-9e5b-46b5f6c62997]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:67f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563354, 'tstamp': 563354}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251313, 'error': None, 'target': 'ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.047 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[06ad60cf-4a0a-4104-b811-1043d9f6d59e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbce86765-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:67:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563354, 'reachable_time': 20734, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251314, 'error': None, 'target': 'ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.077 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d518ca64-70ae-47a6-adb4-3f7330d2860a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.124 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d5673b62-9133-48c5-bfd7-565028ec65fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.126 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbce86765-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.127 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.128 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbce86765-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:50 np0005466030 nova_compute[230518]: 2025-10-02 12:22:50.130 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407770.130037, bebdf690-5f58-4227-95e0-add2eae14645 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:22:50 np0005466030 NetworkManager[44960]: <info>  [1759407770.1327] manager: (tapbce86765-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Oct  2 08:22:50 np0005466030 kernel: tapbce86765-c0: entered promiscuous mode
Oct  2 08:22:50 np0005466030 nova_compute[230518]: 2025-10-02 12:22:50.132 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] VM Started (Lifecycle Event)#033[00m
Oct  2 08:22:50 np0005466030 nova_compute[230518]: 2025-10-02 12:22:50.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.136 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbce86765-c0, col_values=(('external_ids', {'iface-id': 'c8e6c95f-23ce-48d2-baf5-eb1177e1a7ad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:50 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:50Z|00218|binding|INFO|Releasing lport c8e6c95f-23ce-48d2-baf5-eb1177e1a7ad from this chassis (sb_readonly=0)
Oct  2 08:22:50 np0005466030 nova_compute[230518]: 2025-10-02 12:22:50.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.152 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bce86765-c9ec-46bc-a7a3-317bd0b94198.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bce86765-c9ec-46bc-a7a3-317bd0b94198.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.153 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[662616de-103a-41ba-814c-c7a92e7d94a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.153 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-bce86765-c9ec-46bc-a7a3-317bd0b94198
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/bce86765-c9ec-46bc-a7a3-317bd0b94198.pid.haproxy
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID bce86765-c9ec-46bc-a7a3-317bd0b94198
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:22:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.154 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198', 'env', 'PROCESS_TAG=haproxy-bce86765-c9ec-46bc-a7a3-317bd0b94198', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bce86765-c9ec-46bc-a7a3-317bd0b94198.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:22:50 np0005466030 nova_compute[230518]: 2025-10-02 12:22:50.171 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:50 np0005466030 nova_compute[230518]: 2025-10-02 12:22:50.175 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407770.1308172, bebdf690-5f58-4227-95e0-add2eae14645 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:22:50 np0005466030 nova_compute[230518]: 2025-10-02 12:22:50.175 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:22:50 np0005466030 nova_compute[230518]: 2025-10-02 12:22:50.201 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:50 np0005466030 nova_compute[230518]: 2025-10-02 12:22:50.204 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:22:50 np0005466030 nova_compute[230518]: 2025-10-02 12:22:50.228 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:22:50 np0005466030 podman[251347]: 2025-10-02 12:22:50.528661038 +0000 UTC m=+0.029688956 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:22:50 np0005466030 podman[251347]: 2025-10-02 12:22:50.911933804 +0000 UTC m=+0.412961702 container create def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:22:50 np0005466030 nova_compute[230518]: 2025-10-02 12:22:50.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.050 2 DEBUG nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.050 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.051 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.051 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.051 2 DEBUG nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No event matching network-vif-plugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b in dict_keys([('network-vif-plugged', '8d3881e4-99fe-4bc5-b5ab-5b3f06be6000'), ('network-vif-plugged', 'b84676b0-d376-4ced-99fb-08e677046d6f'), ('network-vif-plugged', 'c9731d13-4315-4bdc-9d24-a91ce1d8d427'), ('network-vif-plugged', '2ef879b2-3519-40b6-8207-d24b0e1a39de'), ('network-vif-plugged', 'af15c204-50a0-4b32-a3a7-46c9b925ec87')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.051 2 WARNING nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.052 2 DEBUG nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-b84676b0-d376-4ced-99fb-08e677046d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.052 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.052 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.052 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.052 2 DEBUG nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Processing event network-vif-plugged-b84676b0-d376-4ced-99fb-08e677046d6f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.053 2 DEBUG nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-b84676b0-d376-4ced-99fb-08e677046d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.053 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.053 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.053 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.053 2 DEBUG nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No event matching network-vif-plugged-b84676b0-d376-4ced-99fb-08e677046d6f in dict_keys([('network-vif-plugged', '8d3881e4-99fe-4bc5-b5ab-5b3f06be6000'), ('network-vif-plugged', 'c9731d13-4315-4bdc-9d24-a91ce1d8d427'), ('network-vif-plugged', '2ef879b2-3519-40b6-8207-d24b0e1a39de'), ('network-vif-plugged', 'af15c204-50a0-4b32-a3a7-46c9b925ec87')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.054 2 WARNING nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-b84676b0-d376-4ced-99fb-08e677046d6f for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.054 2 DEBUG nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.055 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.055 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.056 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.056 2 DEBUG nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Processing event network-vif-plugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:22:51 np0005466030 systemd[1]: Started libpod-conmon-def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806.scope.
Oct  2 08:22:51 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:22:51 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f364814aae9bbb0a3b489cff6eafe4e2d92e7ede9b143c8dfd14625b67cb54df/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:22:51 np0005466030 podman[251347]: 2025-10-02 12:22:51.231379139 +0000 UTC m=+0.732407097 container init def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:22:51 np0005466030 podman[251347]: 2025-10-02 12:22:51.242880961 +0000 UTC m=+0.743908889 container start def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:22:51 np0005466030 neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198[251362]: [NOTICE]   (251366) : New worker (251368) forked
Oct  2 08:22:51 np0005466030 neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198[251362]: [NOTICE]   (251366) : Loading success.
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.411 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-25721468-4447-4fb7-97f7-e805e64f0267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.412 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.412 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.413 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.413 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No event matching network-vif-plugged-25721468-4447-4fb7-97f7-e805e64f0267 in dict_keys([('network-vif-plugged', '8d3881e4-99fe-4bc5-b5ab-5b3f06be6000'), ('network-vif-plugged', '2ef879b2-3519-40b6-8207-d24b0e1a39de'), ('network-vif-plugged', 'af15c204-50a0-4b32-a3a7-46c9b925ec87')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.414 2 WARNING nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-25721468-4447-4fb7-97f7-e805e64f0267 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.414 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.415 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.415 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.416 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.416 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Processing event network-vif-plugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.417 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.417 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.418 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.418 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.419 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No event matching network-vif-plugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 in dict_keys([('network-vif-plugged', '8d3881e4-99fe-4bc5-b5ab-5b3f06be6000'), ('network-vif-plugged', '2ef879b2-3519-40b6-8207-d24b0e1a39de')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.419 2 WARNING nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.419 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-2ef879b2-3519-40b6-8207-d24b0e1a39de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.421 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.422 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.423 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.423 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Processing event network-vif-plugged-2ef879b2-3519-40b6-8207-d24b0e1a39de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.423 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-2ef879b2-3519-40b6-8207-d24b0e1a39de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.424 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.424 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.425 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.425 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No event matching network-vif-plugged-2ef879b2-3519-40b6-8207-d24b0e1a39de in dict_keys([('network-vif-plugged', '8d3881e4-99fe-4bc5-b5ab-5b3f06be6000')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.426 2 WARNING nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-2ef879b2-3519-40b6-8207-d24b0e1a39de for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.450 138374 INFO neutron.agent.ovn.metadata.agent [-] Port af15c204-50a0-4b32-a3a7-46c9b925ec87 in datapath 16f75dae-02da-4559-9be9-2b702ece41dd unbound from our chassis#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.454 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16f75dae-02da-4559-9be9-2b702ece41dd#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.467 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f495a1-7316-400b-8e39-dc4b6e7fab12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.468 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap16f75dae-01 in ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.470 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap16f75dae-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.470 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8f286956-1951-4e87-97d1-e800adff2fc5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.471 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6620ddb5-b474-4175-87b5-ff2fb4111f1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.487 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[c66ec3e1-3c29-4d16-8256-c49d54e5efff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.504 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[105baab9-fec3-4951-bd68-a3168d9f5147]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.535 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2cab047e-f797-4f03-92ca-95fd5b21ab06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:51 np0005466030 NetworkManager[44960]: <info>  [1759407771.5501] manager: (tap16f75dae-00): new Veth device (/org/freedesktop/NetworkManager/Devices/101)
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.551 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f23e9c20-9b05-4398-a026-ff75043d47d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:51 np0005466030 systemd-udevd[251384]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.591 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0ad40a-b254-4b29-9a89-7af48e94fff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:51.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:22:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:51 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:51.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.595 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5f6a5e-0773-4b9a-9a09-5a5f246f4ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:51 np0005466030 NetworkManager[44960]: <info>  [1759407771.6355] device (tap16f75dae-00): carrier: link connected
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.646 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e7bf522c-42b3-4ca4-ad25-e8d754fa4d74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.667 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[16500158-b548-4858-8181-6b8e5b7f3ed4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16f75dae-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:24:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563519, 'reachable_time': 33127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251403, 'error': None, 'target': 'ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.687 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[695ea8e3-fb0c-439d-aa3c-272335dc9dec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe38:24fa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563519, 'tstamp': 563519}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251404, 'error': None, 'target': 'ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.704 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8083c0be-f101-401b-9ec7-a0a2d284bff7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16f75dae-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:24:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563519, 'reachable_time': 33127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251405, 'error': None, 'target': 'ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.747 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2ef4ad-97c0-4e24-9503-d7eec77fcb51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.811 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[980c8bfd-eaa7-4c85-bd2b-226659680475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.812 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16f75dae-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.812 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.813 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16f75dae-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:51 np0005466030 kernel: tap16f75dae-00: entered promiscuous mode
Oct  2 08:22:51 np0005466030 NetworkManager[44960]: <info>  [1759407771.8176] manager: (tap16f75dae-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.819 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16f75dae-00, col_values=(('external_ids', {'iface-id': '9d789382-766d-4e5f-a412-599c2cbcba28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:51 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:51Z|00219|binding|INFO|Releasing lport 9d789382-766d-4e5f-a412-599c2cbcba28 from this chassis (sb_readonly=0)
Oct  2 08:22:51 np0005466030 nova_compute[230518]: 2025-10-02 12:22:51.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.851 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/16f75dae-02da-4559-9be9-2b702ece41dd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/16f75dae-02da-4559-9be9-2b702ece41dd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.853 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4cfe67d9-d08e-490b-8091-7858d352b4aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.853 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-16f75dae-02da-4559-9be9-2b702ece41dd
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/16f75dae-02da-4559-9be9-2b702ece41dd.pid.haproxy
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 16f75dae-02da-4559-9be9-2b702ece41dd
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.854 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd', 'env', 'PROCESS_TAG=haproxy-16f75dae-02da-4559-9be9-2b702ece41dd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/16f75dae-02da-4559-9be9-2b702ece41dd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:22:52 np0005466030 nova_compute[230518]: 2025-10-02 12:22:52.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:52 np0005466030 podman[251438]: 2025-10-02 12:22:52.172948436 +0000 UTC m=+0.023195022 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:22:52 np0005466030 podman[251438]: 2025-10-02 12:22:52.575671185 +0000 UTC m=+0.425917721 container create 7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:22:52 np0005466030 systemd[1]: Started libpod-conmon-7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422.scope.
Oct  2 08:22:52 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:22:52 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b15385a9739d6fbf955e02628978d3ec9c263267cf4ee58ffc8102e691a426c5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:22:53 np0005466030 podman[251438]: 2025-10-02 12:22:53.037221618 +0000 UTC m=+0.887468174 container init 7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:22:53 np0005466030 podman[251438]: 2025-10-02 12:22:53.047895834 +0000 UTC m=+0.898142370 container start 7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:22:53 np0005466030 neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd[251454]: [NOTICE]   (251458) : New worker (251460) forked
Oct  2 08:22:53 np0005466030 neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd[251454]: [NOTICE]   (251458) : Loading success.
Oct  2 08:22:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.211 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 in datapath 9aed857d-6573-41ca-b0a5-fcab18195955 unbound from our chassis#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.215 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9aed857d-6573-41ca-b0a5-fcab18195955#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.229 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[921303fa-f63b-4652-8b62-5d688600e391]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.244 2 DEBUG nova.compute.manager [req-5d0db901-eec1-4654-beb0-17d30aa69de5 req-64e43953-d879-4dce-be09-90a52e0ca143 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.244 2 DEBUG oslo_concurrency.lockutils [req-5d0db901-eec1-4654-beb0-17d30aa69de5 req-64e43953-d879-4dce-be09-90a52e0ca143 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.245 2 DEBUG oslo_concurrency.lockutils [req-5d0db901-eec1-4654-beb0-17d30aa69de5 req-64e43953-d879-4dce-be09-90a52e0ca143 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.245 2 DEBUG oslo_concurrency.lockutils [req-5d0db901-eec1-4654-beb0-17d30aa69de5 req-64e43953-d879-4dce-be09-90a52e0ca143 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.245 2 DEBUG nova.compute.manager [req-5d0db901-eec1-4654-beb0-17d30aa69de5 req-64e43953-d879-4dce-be09-90a52e0ca143 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No event matching network-vif-plugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 in dict_keys([('network-vif-plugged', '8d3881e4-99fe-4bc5-b5ab-5b3f06be6000')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.245 2 WARNING nova.compute.manager [req-5d0db901-eec1-4654-beb0-17d30aa69de5 req-64e43953-d879-4dce-be09-90a52e0ca143 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.258 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8479871a-f4b2-4763-ad50-a52882ea6071]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.261 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d820c79a-7ff5-411e-952b-4375cbf8fe13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.293 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e7266416-9502-45f5-8185-e38fcba8a4a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.307 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a806bb5e-2d6c-471f-b50c-713eea89383a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9aed857d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:10:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563206, 'reachable_time': 19770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251474, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.323 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[38353cca-f950-4b98-a05a-b8bfb1e2ccb7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9aed857d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563217, 'tstamp': 563217}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251475, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap9aed857d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563220, 'tstamp': 563220}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251475, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.324 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9aed857d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.327 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9aed857d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.328 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.328 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9aed857d-60, col_values=(('external_ids', {'iface-id': 'ead9b5d4-ba41-48d9-8938-c4b0a2b5cd08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.328 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.329 138374 INFO neutron.agent.ovn.metadata.agent [-] Port b84676b0-d376-4ced-99fb-08e677046d6f in datapath 9aed857d-6573-41ca-b0a5-fcab18195955 unbound from our chassis#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.331 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9aed857d-6573-41ca-b0a5-fcab18195955#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.343 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3fce06de-5ef0-488b-a50e-c0743736cfcf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.367 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8707686e-db5c-41f0-a041-3587d21f4130]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.370 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[bde50039-f2e3-44c8-90b1-cc61dde18bec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.395 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4dd1cd59-aed6-435b-a194-6418f395bbff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.418 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f8b50e-f3fe-4908-bec4-157a1588cef7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9aed857d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:10:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563206, 'reachable_time': 19770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251481, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.438 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9bcefc5b-0b1b-4a0f-9d1f-f6ebb1a14871]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9aed857d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563217, 'tstamp': 563217}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251482, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap9aed857d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563220, 'tstamp': 563220}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251482, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.439 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9aed857d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.442 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9aed857d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.443 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.443 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9aed857d-60, col_values=(('external_ids', {'iface-id': 'ead9b5d4-ba41-48d9-8938-c4b0a2b5cd08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.444 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.445 138374 INFO neutron.agent.ovn.metadata.agent [-] Port c9731d13-4315-4bdc-9d24-a91ce1d8d427 in datapath 9aed857d-6573-41ca-b0a5-fcab18195955 unbound from our chassis#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.447 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9aed857d-6573-41ca-b0a5-fcab18195955#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.464 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b0271e82-f070-4485-b9eb-b9d5c87071ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.497 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[28f6d8d1-ca58-468e-8811-fa60f6651a7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.499 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d09c2342-a1be-4403-8ed9-e8bde131cd7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.525 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1c6046-a05f-4240-b2db-7a592c16ecf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.542 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d9ccbd-0f4d-49b4-ab11-e5629b221fc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9aed857d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:10:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563206, 'reachable_time': 19770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251488, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.558 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c95162d5-b7e1-49fa-a4fd-f6d43ab91979]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9aed857d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563217, 'tstamp': 563217}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251489, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap9aed857d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563220, 'tstamp': 563220}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251489, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.560 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9aed857d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.563 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9aed857d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.563 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.563 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9aed857d-60, col_values=(('external_ids', {'iface-id': 'ead9b5d4-ba41-48d9-8938-c4b0a2b5cd08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.564 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.564 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 2ef879b2-3519-40b6-8207-d24b0e1a39de in datapath 16f75dae-02da-4559-9be9-2b702ece41dd unbound from our chassis#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.566 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16f75dae-02da-4559-9be9-2b702ece41dd#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.579 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[257ad0e8-72ef-4b93-becb-6c50fd51a9f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:22:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:53.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:22:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:53.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.610 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa46478-f44d-44e2-88b1-3e949348dab6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.613 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[87ff4ac0-7251-431f-acf0-da69218e5cd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.635 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[3e8c1448-1f05-4855-acc9-428b89002c83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.655 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2d707c88-2f92-4486-9de2-83e09135b318]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16f75dae-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:24:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563519, 'reachable_time': 33127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251495, 'error': None, 'target': 'ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.669 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9097f2aa-5563-4d59-abfe-266997914f7e]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.2.2.2'], ['IFA_LOCAL', '10.2.2.2'], ['IFA_BROADCAST', '10.2.2.255'], ['IFA_LABEL', 'tap16f75dae-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563533, 'tstamp': 563533}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251496, 'error': None, 'target': 'ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap16f75dae-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563536, 'tstamp': 563536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251496, 'error': None, 'target': 'ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.670 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16f75dae-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.673 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16f75dae-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.673 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.674 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16f75dae-00, col_values=(('external_ids', {'iface-id': '9d789382-766d-4e5f-a412-599c2cbcba28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:22:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.674 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.722 2 DEBUG nova.compute.manager [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.722 2 DEBUG oslo_concurrency.lockutils [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.723 2 DEBUG oslo_concurrency.lockutils [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.723 2 DEBUG oslo_concurrency.lockutils [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.724 2 DEBUG nova.compute.manager [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Processing event network-vif-plugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.724 2 DEBUG nova.compute.manager [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.725 2 DEBUG oslo_concurrency.lockutils [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.725 2 DEBUG oslo_concurrency.lockutils [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.725 2 DEBUG oslo_concurrency.lockutils [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.726 2 DEBUG nova.compute.manager [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-plugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.726 2 WARNING nova.compute.manager [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.727 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.732 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407773.7325814, bebdf690-5f58-4227-95e0-add2eae14645 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.733 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.735 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.740 2 INFO nova.virt.libvirt.driver [-] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance spawned successfully.#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.741 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.770 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.776 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.782 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.782 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.783 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.783 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.784 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.784 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.824 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.870 2 INFO nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Took 39.52 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.870 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.951 2 INFO nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Took 46.48 seconds to build instance.#033[00m
Oct  2 08:22:53 np0005466030 nova_compute[230518]: 2025-10-02 12:22:53.983 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 47.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:22:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e199 e199: 3 total, 3 up, 3 in
Oct  2 08:22:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:22:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:55.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:55 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:55.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:55 np0005466030 nova_compute[230518]: 2025-10-02 12:22:55.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:57 np0005466030 nova_compute[230518]: 2025-10-02 12:22:57.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:22:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:57.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:22:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:22:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:57 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:57.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:57 np0005466030 nova_compute[230518]: 2025-10-02 12:22:57.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:57 np0005466030 NetworkManager[44960]: <info>  [1759407777.6461] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Oct  2 08:22:57 np0005466030 NetworkManager[44960]: <info>  [1759407777.6474] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Oct  2 08:22:57 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:22:57 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:22:57 np0005466030 nova_compute[230518]: 2025-10-02 12:22:57.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:57 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:57Z|00220|binding|INFO|Releasing lport c8e6c95f-23ce-48d2-baf5-eb1177e1a7ad from this chassis (sb_readonly=0)
Oct  2 08:22:57 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:57Z|00221|binding|INFO|Releasing lport ead9b5d4-ba41-48d9-8938-c4b0a2b5cd08 from this chassis (sb_readonly=0)
Oct  2 08:22:57 np0005466030 ovn_controller[129257]: 2025-10-02T12:22:57Z|00222|binding|INFO|Releasing lport 9d789382-766d-4e5f-a412-599c2cbcba28 from this chassis (sb_readonly=0)
Oct  2 08:22:57 np0005466030 nova_compute[230518]: 2025-10-02 12:22:57.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:22:58 np0005466030 nova_compute[230518]: 2025-10-02 12:22:58.042 2 DEBUG nova.compute.manager [req-3fb53e43-b026-46aa-92bd-d3595dee78b7 req-cc4b1b18-9b7c-4b9c-88ec-faba4433e6d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-changed-6a13b8d9-269d-4176-b4c7-693a5e26e74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:22:58 np0005466030 nova_compute[230518]: 2025-10-02 12:22:58.042 2 DEBUG nova.compute.manager [req-3fb53e43-b026-46aa-92bd-d3595dee78b7 req-cc4b1b18-9b7c-4b9c-88ec-faba4433e6d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing instance network info cache due to event network-changed-6a13b8d9-269d-4176-b4c7-693a5e26e74b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:22:58 np0005466030 nova_compute[230518]: 2025-10-02 12:22:58.043 2 DEBUG oslo_concurrency.lockutils [req-3fb53e43-b026-46aa-92bd-d3595dee78b7 req-cc4b1b18-9b7c-4b9c-88ec-faba4433e6d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:22:58 np0005466030 nova_compute[230518]: 2025-10-02 12:22:58.044 2 DEBUG oslo_concurrency.lockutils [req-3fb53e43-b026-46aa-92bd-d3595dee78b7 req-cc4b1b18-9b7c-4b9c-88ec-faba4433e6d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:22:58 np0005466030 nova_compute[230518]: 2025-10-02 12:22:58.044 2 DEBUG nova.network.neutron [req-3fb53e43-b026-46aa-92bd-d3595dee78b7 req-cc4b1b18-9b7c-4b9c-88ec-faba4433e6d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing network info cache for port 6a13b8d9-269d-4176-b4c7-693a5e26e74b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:22:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:22:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:22:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:22:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:59.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:22:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:22:59 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:59.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:00 np0005466030 nova_compute[230518]: 2025-10-02 12:23:00.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:01 np0005466030 nova_compute[230518]: 2025-10-02 12:23:01.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:01 np0005466030 nova_compute[230518]: 2025-10-02 12:23:01.217 2 DEBUG nova.network.neutron [req-3fb53e43-b026-46aa-92bd-d3595dee78b7 req-cc4b1b18-9b7c-4b9c-88ec-faba4433e6d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updated VIF entry in instance network info cache for port 6a13b8d9-269d-4176-b4c7-693a5e26e74b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:23:01 np0005466030 nova_compute[230518]: 2025-10-02 12:23:01.218 2 DEBUG nova.network.neutron [req-3fb53e43-b026-46aa-92bd-d3595dee78b7 req-cc4b1b18-9b7c-4b9c-88ec-faba4433e6d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [{"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:23:01 np0005466030 nova_compute[230518]: 2025-10-02 12:23:01.249 2 DEBUG oslo_concurrency.lockutils [req-3fb53e43-b026-46aa-92bd-d3595dee78b7 req-cc4b1b18-9b7c-4b9c-88ec-faba4433e6d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:23:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:01.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:23:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:01.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:23:02 np0005466030 nova_compute[230518]: 2025-10-02 12:23:02.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:02 np0005466030 podman[251548]: 2025-10-02 12:23:02.83550454 +0000 UTC m=+0.080575569 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:23:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e200 e200: 3 total, 3 up, 3 in
Oct  2 08:23:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:03.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:03.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e201 e201: 3 total, 3 up, 3 in
Oct  2 08:23:04 np0005466030 podman[251570]: 2025-10-02 12:23:04.823868849 +0000 UTC m=+0.078801844 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:23:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:23:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1848899412' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:23:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:23:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1848899412' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:23:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:05.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:23:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:05.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:23:05 np0005466030 nova_compute[230518]: 2025-10-02 12:23:05.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:07 np0005466030 nova_compute[230518]: 2025-10-02 12:23:07.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:07.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:23:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:07.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:23:07 np0005466030 nova_compute[230518]: 2025-10-02 12:23:07.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:09.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:23:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:09.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:23:10 np0005466030 nova_compute[230518]: 2025-10-02 12:23:10.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:11 np0005466030 nova_compute[230518]: 2025-10-02 12:23:10.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e202 e202: 3 total, 3 up, 3 in
Oct  2 08:23:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:11.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:11.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:11 np0005466030 podman[251597]: 2025-10-02 12:23:11.830017773 +0000 UTC m=+0.069910931 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:23:11 np0005466030 podman[251596]: 2025-10-02 12:23:11.837378725 +0000 UTC m=+0.083113327 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:23:12 np0005466030 nova_compute[230518]: 2025-10-02 12:23:12.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:13 np0005466030 nova_compute[230518]: 2025-10-02 12:23:13.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:13.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:23:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:13.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:23:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:15.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:23:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:15.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:23:16 np0005466030 nova_compute[230518]: 2025-10-02 12:23:16.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:17 np0005466030 nova_compute[230518]: 2025-10-02 12:23:17.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:17 np0005466030 nova_compute[230518]: 2025-10-02 12:23:17.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:17Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b6:e7:f1 10.1.1.217
Oct  2 08:23:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:17Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b6:e7:f1 10.1.1.217
Oct  2 08:23:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:17.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:17.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:18Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:64:8d 10.1.1.231
Oct  2 08:23:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:18Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:64:8d 10.1.1.231
Oct  2 08:23:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:18Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a3:ce:42 10.2.2.200
Oct  2 08:23:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:18Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a3:ce:42 10.2.2.200
Oct  2 08:23:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:18Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:b6:82 10.2.2.100
Oct  2 08:23:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:18Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:b6:82 10.2.2.100
Oct  2 08:23:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:18Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c5:de:0f 10.1.1.243
Oct  2 08:23:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:18Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c5:de:0f 10.1.1.243
Oct  2 08:23:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:18Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:a0:be 10.100.0.9
Oct  2 08:23:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:18Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:a0:be 10.100.0.9
Oct  2 08:23:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:18Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7b:6e:55 10.1.1.189
Oct  2 08:23:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:18Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:6e:55 10.1.1.189
Oct  2 08:23:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:19.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:19.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:21 np0005466030 nova_compute[230518]: 2025-10-02 12:23:21.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:23:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:21.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:23:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:23:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:21.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:23:21 np0005466030 nova_compute[230518]: 2025-10-02 12:23:21.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:22 np0005466030 nova_compute[230518]: 2025-10-02 12:23:22.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:22.105 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:22.106 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:23:22 np0005466030 nova_compute[230518]: 2025-10-02 12:23:22.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:23.109 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:23:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:23.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:23:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:23.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e203 e203: 3 total, 3 up, 3 in
Oct  2 08:23:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:23:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:25.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:23:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:25.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:25.921 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:25.922 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:25.924 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:26 np0005466030 nova_compute[230518]: 2025-10-02 12:23:26.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:27 np0005466030 nova_compute[230518]: 2025-10-02 12:23:27.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:27.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:27.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:28 np0005466030 nova_compute[230518]: 2025-10-02 12:23:28.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:29 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:29Z|00223|memory|INFO|peak resident set size grew 51% in last 1664.4 seconds, from 16384 kB to 24748 kB
Oct  2 08:23:29 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:29Z|00224|memory|INFO|idl-cells-OVN_Southbound:11061 idl-cells-Open_vSwitch:1326 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:2 lflow-cache-entries-cache-expr:393 lflow-cache-entries-cache-matches:299 lflow-cache-size-KB:1618 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:784 ofctrl_installed_flow_usage-KB:571 ofctrl_sb_flow_ref_usage-KB:290
Oct  2 08:23:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:29.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:29.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:31 np0005466030 nova_compute[230518]: 2025-10-02 12:23:31.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:31.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:31.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:32 np0005466030 nova_compute[230518]: 2025-10-02 12:23:32.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:33.351 138528 DEBUG eventlet.wsgi.server [-] (138528) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Oct  2 08:23:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:33.352 138528 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015
Oct  2 08:23:33 np0005466030 ovn_metadata_agent[138369]: Accept: */*#015
Oct  2 08:23:33 np0005466030 ovn_metadata_agent[138369]: Connection: close#015
Oct  2 08:23:33 np0005466030 ovn_metadata_agent[138369]: Content-Type: text/plain#015
Oct  2 08:23:33 np0005466030 ovn_metadata_agent[138369]: Host: 169.254.169.254#015
Oct  2 08:23:33 np0005466030 ovn_metadata_agent[138369]: User-Agent: curl/7.84.0#015
Oct  2 08:23:33 np0005466030 ovn_metadata_agent[138369]: X-Forwarded-For: 10.100.0.9#015
Oct  2 08:23:33 np0005466030 ovn_metadata_agent[138369]: X-Ovn-Network-Id: bce86765-c9ec-46bc-a7a3-317bd0b94198 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Oct  2 08:23:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:23:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:33.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:23:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:33.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:33 np0005466030 podman[251637]: 2025-10-02 12:23:33.799143971 +0000 UTC m=+0.054081153 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 08:23:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e204 e204: 3 total, 3 up, 3 in
Oct  2 08:23:34 np0005466030 haproxy-metadata-proxy-bce86765-c9ec-46bc-a7a3-317bd0b94198[251368]: 10.100.0.9:38238 [02/Oct/2025:12:23:33.349] listener listener/metadata 0/0/0/1341/1341 200 2534 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Oct  2 08:23:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:34.691 138528 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Oct  2 08:23:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:34.691 138528 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 2550 time: 1.3390536#033[00m
Oct  2 08:23:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:35.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:35.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:35 np0005466030 podman[251656]: 2025-10-02 12:23:35.899480892 +0000 UTC m=+0.136337132 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.100 2 DEBUG oslo_concurrency.lockutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.100 2 DEBUG oslo_concurrency.lockutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.101 2 DEBUG oslo_concurrency.lockutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.101 2 DEBUG oslo_concurrency.lockutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.101 2 DEBUG oslo_concurrency.lockutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.103 2 INFO nova.compute.manager [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Terminating instance#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.104 2 DEBUG nova.compute.manager [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:23:36 np0005466030 kernel: tap6a13b8d9-26 (unregistering): left promiscuous mode
Oct  2 08:23:36 np0005466030 NetworkManager[44960]: <info>  [1759407816.1810] device (tap6a13b8d9-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00225|binding|INFO|Releasing lport 6a13b8d9-269d-4176-b4c7-693a5e26e74b from this chassis (sb_readonly=0)
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00226|binding|INFO|Setting lport 6a13b8d9-269d-4176-b4c7-693a5e26e74b down in Southbound
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00227|binding|INFO|Removing iface tap6a13b8d9-26 ovn-installed in OVS
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.208 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:a0:be 10.100.0.9'], port_security=['fa:16:3e:fb:a0:be 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bce86765-c9ec-46bc-a7a3-317bd0b94198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09a116df-d45e-4936-a295-e45094ee631c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6a13b8d9-269d-4176-b4c7-693a5e26e74b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.209 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6a13b8d9-269d-4176-b4c7-693a5e26e74b in datapath bce86765-c9ec-46bc-a7a3-317bd0b94198 unbound from our chassis#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.212 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bce86765-c9ec-46bc-a7a3-317bd0b94198, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.214 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[367daa65-91fa-4f0a-b6c7-67fb11f2ed7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.214 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198 namespace which is not needed anymore#033[00m
Oct  2 08:23:36 np0005466030 kernel: tap25721468-44 (unregistering): left promiscuous mode
Oct  2 08:23:36 np0005466030 NetworkManager[44960]: <info>  [1759407816.2258] device (tap25721468-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00228|binding|INFO|Releasing lport 25721468-4447-4fb7-97f7-e805e64f0267 from this chassis (sb_readonly=0)
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00229|binding|INFO|Setting lport 25721468-4447-4fb7-97f7-e805e64f0267 down in Southbound
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00230|binding|INFO|Removing iface tap25721468-44 ovn-installed in OVS
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.242 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:de:0f 10.1.1.243'], port_security=['fa:16:3e:c5:de:0f 10.1.1.243'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-1274683935', 'neutron:cidrs': '10.1.1.243/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aed857d-6573-41ca-b0a5-fcab18195955', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-1274683935', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '4', 'neutron:security_group_ids': '517909f5-5ad1-4dd2-b684-855fd2b0ba7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=081069ba-be0d-48ee-adb7-b26de4dcbd97, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=25721468-4447-4fb7-97f7-e805e64f0267) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 kernel: tap8d3881e4-99 (unregistering): left promiscuous mode
Oct  2 08:23:36 np0005466030 NetworkManager[44960]: <info>  [1759407816.2658] device (tap8d3881e4-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00231|binding|INFO|Releasing lport 8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 from this chassis (sb_readonly=0)
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00232|binding|INFO|Setting lport 8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 down in Southbound
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00233|binding|INFO|Removing iface tap8d3881e4-99 ovn-installed in OVS
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.288 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:64:8d 10.1.1.231'], port_security=['fa:16:3e:1e:64:8d 10.1.1.231'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-148297662', 'neutron:cidrs': '10.1.1.231/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aed857d-6573-41ca-b0a5-fcab18195955', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-148297662', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '4', 'neutron:security_group_ids': '517909f5-5ad1-4dd2-b684-855fd2b0ba7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=081069ba-be0d-48ee-adb7-b26de4dcbd97, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8d3881e4-99fe-4bc5-b5ab-5b3f06be6000) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 kernel: tapb84676b0-d3 (unregistering): left promiscuous mode
Oct  2 08:23:36 np0005466030 NetworkManager[44960]: <info>  [1759407816.3111] device (tapb84676b0-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00234|binding|INFO|Releasing lport b84676b0-d376-4ced-99fb-08e677046d6f from this chassis (sb_readonly=0)
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00235|binding|INFO|Setting lport b84676b0-d376-4ced-99fb-08e677046d6f down in Southbound
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00236|binding|INFO|Removing iface tapb84676b0-d3 ovn-installed in OVS
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 kernel: tapc9731d13-43 (unregistering): left promiscuous mode
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.347 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:6e:55 10.1.1.189'], port_security=['fa:16:3e:7b:6e:55 10.1.1.189'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.189/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aed857d-6573-41ca-b0a5-fcab18195955', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=081069ba-be0d-48ee-adb7-b26de4dcbd97, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=b84676b0-d376-4ced-99fb-08e677046d6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:36 np0005466030 NetworkManager[44960]: <info>  [1759407816.3498] device (tapc9731d13-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198[251362]: [NOTICE]   (251366) : haproxy version is 2.8.14-c23fe91
Oct  2 08:23:36 np0005466030 neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198[251362]: [NOTICE]   (251366) : path to executable is /usr/sbin/haproxy
Oct  2 08:23:36 np0005466030 neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198[251362]: [WARNING]  (251366) : Exiting Master process...
Oct  2 08:23:36 np0005466030 neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198[251362]: [ALERT]    (251366) : Current worker (251368) exited with code 143 (Terminated)
Oct  2 08:23:36 np0005466030 neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198[251362]: [WARNING]  (251366) : All workers exited. Exiting... (0)
Oct  2 08:23:36 np0005466030 systemd[1]: libpod-def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806.scope: Deactivated successfully.
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00237|binding|INFO|Releasing lport c9731d13-4315-4bdc-9d24-a91ce1d8d427 from this chassis (sb_readonly=0)
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00238|binding|INFO|Setting lport c9731d13-4315-4bdc-9d24-a91ce1d8d427 down in Southbound
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00239|binding|INFO|Removing iface tapc9731d13-43 ovn-installed in OVS
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 podman[251713]: 2025-10-02 12:23:36.370813452 +0000 UTC m=+0.057916923 container died def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.374 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:e7:f1 10.1.1.217'], port_security=['fa:16:3e:b6:e7:f1 10.1.1.217'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.217/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aed857d-6573-41ca-b0a5-fcab18195955', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=081069ba-be0d-48ee-adb7-b26de4dcbd97, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=c9731d13-4315-4bdc-9d24-a91ce1d8d427) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:36 np0005466030 kernel: tap2ef879b2-35 (unregistering): left promiscuous mode
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 NetworkManager[44960]: <info>  [1759407816.3884] device (tap2ef879b2-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:23:36 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806-userdata-shm.mount: Deactivated successfully.
Oct  2 08:23:36 np0005466030 systemd[1]: var-lib-containers-storage-overlay-f364814aae9bbb0a3b489cff6eafe4e2d92e7ede9b143c8dfd14625b67cb54df-merged.mount: Deactivated successfully.
Oct  2 08:23:36 np0005466030 kernel: tapaf15c204-50 (unregistering): left promiscuous mode
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00240|binding|INFO|Releasing lport 2ef879b2-3519-40b6-8207-d24b0e1a39de from this chassis (sb_readonly=0)
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00241|binding|INFO|Setting lport 2ef879b2-3519-40b6-8207-d24b0e1a39de down in Southbound
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 NetworkManager[44960]: <info>  [1759407816.4250] device (tapaf15c204-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:23:36 np0005466030 podman[251713]: 2025-10-02 12:23:36.424979956 +0000 UTC m=+0.112083427 container cleanup def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00242|binding|INFO|Removing iface tap2ef879b2-35 ovn-installed in OVS
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.433 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:b6:82 10.2.2.100'], port_security=['fa:16:3e:28:b6:82 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16f75dae-02da-4559-9be9-2b702ece41dd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f833f0c-43e5-49bc-a978-920c53d86b7d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=2ef879b2-3519-40b6-8207-d24b0e1a39de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:36 np0005466030 systemd[1]: libpod-conmon-def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806.scope: Deactivated successfully.
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00243|binding|INFO|Releasing lport af15c204-50a0-4b32-a3a7-46c9b925ec87 from this chassis (sb_readonly=0)
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00244|binding|INFO|Setting lport af15c204-50a0-4b32-a3a7-46c9b925ec87 down in Southbound
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:36Z|00245|binding|INFO|Removing iface tapaf15c204-50 ovn-installed in OVS
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.474 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:ce:42 10.2.2.200'], port_security=['fa:16:3e:a3:ce:42 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16f75dae-02da-4559-9be9-2b702ece41dd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f833f0c-43e5-49bc-a978-920c53d86b7d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=af15c204-50a0-4b32-a3a7-46c9b925ec87) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:36 np0005466030 podman[251771]: 2025-10-02 12:23:36.488489454 +0000 UTC m=+0.041519897 container remove def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000030.scope: Deactivated successfully.
Oct  2 08:23:36 np0005466030 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000030.scope: Consumed 18.110s CPU time.
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.500 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[93b14681-bfeb-4b4a-8812-7b7bea59eca4]: (4, ('Thu Oct  2 12:23:36 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198 (def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806)\ndef4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806\nThu Oct  2 12:23:36 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198 (def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806)\ndef4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 systemd-machined[188247]: Machine qemu-27-instance-00000030 terminated.
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.502 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb7485b-5fc9-436a-923a-f0e01276f6f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.503 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbce86765-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 kernel: tapbce86765-c0: left promiscuous mode
Oct  2 08:23:36 np0005466030 NetworkManager[44960]: <info>  [1759407816.5207] manager: (tap6a13b8d9-26): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.540 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ef39c3ab-377c-4b7f-8951-5302501e1bad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 NetworkManager[44960]: <info>  [1759407816.5449] manager: (tap25721468-44): new Tun device (/org/freedesktop/NetworkManager/Devices/106)
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.566 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b3517e4a-9394-4663-9623-5b8b25715f77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.567 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ba10954a-3536-4fa0-b44a-8af75851c0a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.585 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2f928997-6db0-4080-a8aa-ec405b75a966]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563347, 'reachable_time': 31014, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251840, 'error': None, 'target': 'ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 systemd[1]: run-netns-ovnmeta\x2dbce86765\x2dc9ec\x2d46bc\x2da7a3\x2d317bd0b94198.mount: Deactivated successfully.
Oct  2 08:23:36 np0005466030 NetworkManager[44960]: <info>  [1759407816.5935] manager: (tapc9731d13-43): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.592 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.592 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[2ce2011b-5d78-44cb-b05d-823566e449c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.593 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 25721468-4447-4fb7-97f7-e805e64f0267 in datapath 9aed857d-6573-41ca-b0a5-fcab18195955 unbound from our chassis#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.595 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9aed857d-6573-41ca-b0a5-fcab18195955, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.595 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d6630a3a-dbdc-41c1-aaee-9046d8fe582b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.596 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955 namespace which is not needed anymore#033[00m
Oct  2 08:23:36 np0005466030 NetworkManager[44960]: <info>  [1759407816.5998] manager: (tap2ef879b2-35): new Tun device (/org/freedesktop/NetworkManager/Devices/108)
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.638 2 INFO nova.virt.libvirt.driver [-] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance destroyed successfully.#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.639 2 DEBUG nova.objects.instance [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lazy-loading 'resources' on Instance uuid bebdf690-5f58-4227-95e0-add2eae14645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.659 2 DEBUG nova.virt.libvirt.vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus=
'usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.660 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.661 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fb:a0:be,bridge_name='br-int',has_traffic_filtering=True,id=6a13b8d9-269d-4176-b4c7-693a5e26e74b,network=Network(bce86765-c9ec-46bc-a7a3-317bd0b94198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a13b8d9-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.661 2 DEBUG os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:a0:be,bridge_name='br-int',has_traffic_filtering=True,id=6a13b8d9-269d-4176-b4c7-693a5e26e74b,network=Network(bce86765-c9ec-46bc-a7a3-317bd0b94198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a13b8d9-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.663 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a13b8d9-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.684 2 INFO os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:a0:be,bridge_name='br-int',has_traffic_filtering=True,id=6a13b8d9-269d-4176-b4c7-693a5e26e74b,network=Network(bce86765-c9ec-46bc-a7a3-317bd0b94198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a13b8d9-26')#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.684 2 DEBUG nova.virt.libvirt.vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus=
'usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.685 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.685 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:de:0f,bridge_name='br-int',has_traffic_filtering=True,id=25721468-4447-4fb7-97f7-e805e64f0267,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25721468-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.686 2 DEBUG os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:de:0f,bridge_name='br-int',has_traffic_filtering=True,id=25721468-4447-4fb7-97f7-e805e64f0267,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25721468-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25721468-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.703 2 INFO os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:de:0f,bridge_name='br-int',has_traffic_filtering=True,id=25721468-4447-4fb7-97f7-e805e64f0267,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25721468-44')#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.704 2 DEBUG nova.virt.libvirt.vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus=
'usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.704 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.705 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:64:8d,bridge_name='br-int',has_traffic_filtering=True,id=8d3881e4-99fe-4bc5-b5ab-5b3f06be6000,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8d3881e4-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.705 2 DEBUG os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:64:8d,bridge_name='br-int',has_traffic_filtering=True,id=8d3881e4-99fe-4bc5-b5ab-5b3f06be6000,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8d3881e4-99') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.707 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d3881e4-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955[251264]: [NOTICE]   (251291) : haproxy version is 2.8.14-c23fe91
Oct  2 08:23:36 np0005466030 neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955[251264]: [NOTICE]   (251291) : path to executable is /usr/sbin/haproxy
Oct  2 08:23:36 np0005466030 neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955[251264]: [WARNING]  (251291) : Exiting Master process...
Oct  2 08:23:36 np0005466030 neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955[251264]: [WARNING]  (251291) : Exiting Master process...
Oct  2 08:23:36 np0005466030 neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955[251264]: [ALERT]    (251291) : Current worker (251293) exited with code 143 (Terminated)
Oct  2 08:23:36 np0005466030 neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955[251264]: [WARNING]  (251291) : All workers exited. Exiting... (0)
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.720 2 INFO os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:64:8d,bridge_name='br-int',has_traffic_filtering=True,id=8d3881e4-99fe-4bc5-b5ab-5b3f06be6000,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8d3881e4-99')#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.721 2 DEBUG nova.virt.libvirt.vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus=
'usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.721 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:36 np0005466030 systemd[1]: libpod-f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5.scope: Deactivated successfully.
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.722 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:6e:55,bridge_name='br-int',has_traffic_filtering=True,id=b84676b0-d376-4ced-99fb-08e677046d6f,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb84676b0-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.722 2 DEBUG os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:6e:55,bridge_name='br-int',has_traffic_filtering=True,id=b84676b0-d376-4ced-99fb-08e677046d6f,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb84676b0-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.724 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb84676b0-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:23:36 np0005466030 podman[251911]: 2025-10-02 12:23:36.730061746 +0000 UTC m=+0.044416129 container died f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.736 2 INFO os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:6e:55,bridge_name='br-int',has_traffic_filtering=True,id=b84676b0-d376-4ced-99fb-08e677046d6f,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb84676b0-d3')#033[00m
Oct  2 08:23:36 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.737 2 DEBUG nova.virt.libvirt.vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus=
'usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:23:36 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.737 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.738 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:e7:f1,bridge_name='br-int',has_traffic_filtering=True,id=c9731d13-4315-4bdc-9d24-a91ce1d8d427,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9731d13-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.738 2 DEBUG os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:e7:f1,bridge_name='br-int',has_traffic_filtering=True,id=c9731d13-4315-4bdc-9d24-a91ce1d8d427,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9731d13-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9731d13-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.750 2 INFO os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:e7:f1,bridge_name='br-int',has_traffic_filtering=True,id=c9731d13-4315-4bdc-9d24-a91ce1d8d427,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9731d13-43')#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.751 2 DEBUG nova.virt.libvirt.vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus=
'usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.752 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.752 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:b6:82,bridge_name='br-int',has_traffic_filtering=True,id=2ef879b2-3519-40b6-8207-d24b0e1a39de,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef879b2-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.753 2 DEBUG os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:b6:82,bridge_name='br-int',has_traffic_filtering=True,id=2ef879b2-3519-40b6-8207-d24b0e1a39de,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef879b2-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.754 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ef879b2-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:36 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5-userdata-shm.mount: Deactivated successfully.
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 systemd[1]: var-lib-containers-storage-overlay-ac5e0d3911095a7aec742e67ce07e80aafbfc58c9f078a790a45d6aff0e44e5e-merged.mount: Deactivated successfully.
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.761 2 INFO os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:b6:82,bridge_name='br-int',has_traffic_filtering=True,id=2ef879b2-3519-40b6-8207-d24b0e1a39de,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef879b2-35')#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.762 2 DEBUG nova.virt.libvirt.vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus=
'usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.763 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.763 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:42,bridge_name='br-int',has_traffic_filtering=True,id=af15c204-50a0-4b32-a3a7-46c9b925ec87,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf15c204-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.764 2 DEBUG os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:42,bridge_name='br-int',has_traffic_filtering=True,id=af15c204-50a0-4b32-a3a7-46c9b925ec87,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf15c204-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.766 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaf15c204-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:36 np0005466030 podman[251911]: 2025-10-02 12:23:36.76705072 +0000 UTC m=+0.081405123 container cleanup f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.774 2 INFO os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:42,bridge_name='br-int',has_traffic_filtering=True,id=af15c204-50a0-4b32-a3a7-46c9b925ec87,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf15c204-50')#033[00m
Oct  2 08:23:36 np0005466030 systemd[1]: libpod-conmon-f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5.scope: Deactivated successfully.
Oct  2 08:23:36 np0005466030 podman[251963]: 2025-10-02 12:23:36.82901031 +0000 UTC m=+0.039431422 container remove f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.834 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e7763710-4581-4f22-9ca7-cadc486be2cf]: (4, ('Thu Oct  2 12:23:36 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955 (f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5)\nf28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5\nThu Oct  2 12:23:36 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955 (f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5)\nf28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.836 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[788c2925-63fa-4558-805f-1d22ca4ea717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.837 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9aed857d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:36 np0005466030 kernel: tap9aed857d-60: left promiscuous mode
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.842 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[83cbc486-d5a1-42c0-9346-998b1889a75b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 nova_compute[230518]: 2025-10-02 12:23:36.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.873 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b08b4816-b18a-4a5c-892a-637cc7315dae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.875 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[226aa36a-c1db-4e37-b29b-03dd8744b4f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.889 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6300970f-afe3-4edd-bd0d-b3944452dd29]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563199, 'reachable_time': 15530, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251994, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.891 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.891 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[a7536fd9-ce21-47c7-9ed7-baed2d7180c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.891 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 in datapath 9aed857d-6573-41ca-b0a5-fcab18195955 unbound from our chassis#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.893 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9aed857d-6573-41ca-b0a5-fcab18195955, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.893 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fb8f3f75-510e-4443-8207-f892e88a4ee5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.894 138374 INFO neutron.agent.ovn.metadata.agent [-] Port b84676b0-d376-4ced-99fb-08e677046d6f in datapath 9aed857d-6573-41ca-b0a5-fcab18195955 unbound from our chassis#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.895 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9aed857d-6573-41ca-b0a5-fcab18195955, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.896 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1c2f14f8-9ecf-424b-9da8-bf2135185678]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.896 138374 INFO neutron.agent.ovn.metadata.agent [-] Port c9731d13-4315-4bdc-9d24-a91ce1d8d427 in datapath 9aed857d-6573-41ca-b0a5-fcab18195955 unbound from our chassis#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.898 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9aed857d-6573-41ca-b0a5-fcab18195955, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.898 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2c6c8c8b-06cf-40e1-89d4-9dbf5e81096a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.898 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 2ef879b2-3519-40b6-8207-d24b0e1a39de in datapath 16f75dae-02da-4559-9be9-2b702ece41dd unbound from our chassis#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.900 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 16f75dae-02da-4559-9be9-2b702ece41dd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.900 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[123fdf91-247a-4703-aa30-f52b5e85f0f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.901 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd namespace which is not needed anymore#033[00m
Oct  2 08:23:37 np0005466030 neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd[251454]: [NOTICE]   (251458) : haproxy version is 2.8.14-c23fe91
Oct  2 08:23:37 np0005466030 neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd[251454]: [NOTICE]   (251458) : path to executable is /usr/sbin/haproxy
Oct  2 08:23:37 np0005466030 neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd[251454]: [WARNING]  (251458) : Exiting Master process...
Oct  2 08:23:37 np0005466030 neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd[251454]: [ALERT]    (251458) : Current worker (251460) exited with code 143 (Terminated)
Oct  2 08:23:37 np0005466030 neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd[251454]: [WARNING]  (251458) : All workers exited. Exiting... (0)
Oct  2 08:23:37 np0005466030 systemd[1]: libpod-7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422.scope: Deactivated successfully.
Oct  2 08:23:37 np0005466030 podman[252013]: 2025-10-02 12:23:37.014549418 +0000 UTC m=+0.040933929 container died 7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:23:37 np0005466030 podman[252013]: 2025-10-02 12:23:37.046368499 +0000 UTC m=+0.072753010 container cleanup 7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:23:37 np0005466030 systemd[1]: libpod-conmon-7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422.scope: Deactivated successfully.
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.072 2 DEBUG nova.compute.manager [req-43b26edd-4c22-44b1-81e1-1ef9d9e84d93 req-0d20bbbd-1a6b-427a-bc5d-40767b321cca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.073 2 DEBUG oslo_concurrency.lockutils [req-43b26edd-4c22-44b1-81e1-1ef9d9e84d93 req-0d20bbbd-1a6b-427a-bc5d-40767b321cca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.073 2 DEBUG oslo_concurrency.lockutils [req-43b26edd-4c22-44b1-81e1-1ef9d9e84d93 req-0d20bbbd-1a6b-427a-bc5d-40767b321cca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.073 2 DEBUG oslo_concurrency.lockutils [req-43b26edd-4c22-44b1-81e1-1ef9d9e84d93 req-0d20bbbd-1a6b-427a-bc5d-40767b321cca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.073 2 DEBUG nova.compute.manager [req-43b26edd-4c22-44b1-81e1-1ef9d9e84d93 req-0d20bbbd-1a6b-427a-bc5d-40767b321cca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-unplugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.073 2 DEBUG nova.compute.manager [req-43b26edd-4c22-44b1-81e1-1ef9d9e84d93 req-0d20bbbd-1a6b-427a-bc5d-40767b321cca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:23:37 np0005466030 podman[252044]: 2025-10-02 12:23:37.099680116 +0000 UTC m=+0.035569060 container remove 7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:23:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.108 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[956f3f3d-a901-43fd-84e0-36cb7b41b071]: (4, ('Thu Oct  2 12:23:36 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd (7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422)\n7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422\nThu Oct  2 12:23:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd (7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422)\n7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.110 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d86a73fd-d575-43d1-bd7e-564fe89d1b6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.111 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16f75dae-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:37 np0005466030 kernel: tap16f75dae-00: left promiscuous mode
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.116 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0507c919-7f88-443e-bb62-33e2f2940356]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.136 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[88192782-79aa-4e19-8765-d9fa9fcedba3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.137 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cd05c9d1-c800-40dd-877e-3f9db24a136b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.158 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2cdb00a4-79e1-4d05-9432-cd95ec9d7c97]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563508, 'reachable_time': 18093, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252059, 'error': None, 'target': 'ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.160 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:23:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.160 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[fd11c1eb-7de8-4145-bddf-de8f83b9f13a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.161 138374 INFO neutron.agent.ovn.metadata.agent [-] Port af15c204-50a0-4b32-a3a7-46c9b925ec87 in datapath 16f75dae-02da-4559-9be9-2b702ece41dd unbound from our chassis#033[00m
Oct  2 08:23:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.163 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 16f75dae-02da-4559-9be9-2b702ece41dd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:23:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.164 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff275c0-198f-4d03-920f-8bd89e3e3134]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.278 2 DEBUG nova.compute.manager [req-9542265a-29d0-4027-9900-6f98b2727580 req-4ce50bbd-59e0-48ff-b088-9edb4af234ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-25721468-4447-4fb7-97f7-e805e64f0267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.278 2 DEBUG oslo_concurrency.lockutils [req-9542265a-29d0-4027-9900-6f98b2727580 req-4ce50bbd-59e0-48ff-b088-9edb4af234ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.278 2 DEBUG oslo_concurrency.lockutils [req-9542265a-29d0-4027-9900-6f98b2727580 req-4ce50bbd-59e0-48ff-b088-9edb4af234ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.279 2 DEBUG oslo_concurrency.lockutils [req-9542265a-29d0-4027-9900-6f98b2727580 req-4ce50bbd-59e0-48ff-b088-9edb4af234ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.279 2 DEBUG nova.compute.manager [req-9542265a-29d0-4027-9900-6f98b2727580 req-4ce50bbd-59e0-48ff-b088-9edb4af234ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-unplugged-25721468-4447-4fb7-97f7-e805e64f0267 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.279 2 DEBUG nova.compute.manager [req-9542265a-29d0-4027-9900-6f98b2727580 req-4ce50bbd-59e0-48ff-b088-9edb4af234ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-25721468-4447-4fb7-97f7-e805e64f0267 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:23:37 np0005466030 systemd[1]: var-lib-containers-storage-overlay-b15385a9739d6fbf955e02628978d3ec9c263267cf4ee58ffc8102e691a426c5-merged.mount: Deactivated successfully.
Oct  2 08:23:37 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422-userdata-shm.mount: Deactivated successfully.
Oct  2 08:23:37 np0005466030 systemd[1]: run-netns-ovnmeta\x2d16f75dae\x2d02da\x2d4559\x2d9be9\x2d2b702ece41dd.mount: Deactivated successfully.
Oct  2 08:23:37 np0005466030 systemd[1]: run-netns-ovnmeta\x2d9aed857d\x2d6573\x2d41ca\x2db0a5\x2dfcab18195955.mount: Deactivated successfully.
Oct  2 08:23:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:23:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:37.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:23:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:23:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:37.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.856 2 INFO nova.virt.libvirt.driver [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Deleting instance files /var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645_del#033[00m
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.857 2 INFO nova.virt.libvirt.driver [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Deletion of /var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645_del complete#033[00m
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.990 2 INFO nova.compute.manager [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Took 1.89 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.991 2 DEBUG oslo.service.loopingcall [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.991 2 DEBUG nova.compute.manager [-] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:23:37 np0005466030 nova_compute[230518]: 2025-10-02 12:23:37.992 2 DEBUG nova.network.neutron [-] [instance: bebdf690-5f58-4227-95e0-add2eae14645] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:23:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.226 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.227 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.227 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.227 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.227 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-plugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.228 2 WARNING nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.228 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-b84676b0-d376-4ced-99fb-08e677046d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.228 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.228 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.228 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.229 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-unplugged-b84676b0-d376-4ced-99fb-08e677046d6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.229 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-b84676b0-d376-4ced-99fb-08e677046d6f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.229 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-b84676b0-d376-4ced-99fb-08e677046d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.229 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.229 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.230 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.230 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-plugged-b84676b0-d376-4ced-99fb-08e677046d6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.230 2 WARNING nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-b84676b0-d376-4ced-99fb-08e677046d6f for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.230 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.230 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.230 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.231 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.231 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-unplugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.231 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.231 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.231 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.232 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.232 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.232 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-plugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.232 2 WARNING nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.445 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-25721468-4447-4fb7-97f7-e805e64f0267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.445 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.446 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.446 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.447 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-plugged-25721468-4447-4fb7-97f7-e805e64f0267 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.447 2 WARNING nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-25721468-4447-4fb7-97f7-e805e64f0267 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.448 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.448 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.449 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.449 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.449 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-unplugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.450 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.450 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.451 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.451 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.452 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.452 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-plugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.452 2 WARNING nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.453 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-2ef879b2-3519-40b6-8207-d24b0e1a39de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.453 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.454 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.454 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.454 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-unplugged-2ef879b2-3519-40b6-8207-d24b0e1a39de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:39 np0005466030 nova_compute[230518]: 2025-10-02 12:23:39.455 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-2ef879b2-3519-40b6-8207-d24b0e1a39de for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:23:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:39.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:39.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:40 np0005466030 nova_compute[230518]: 2025-10-02 12:23:40.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:40 np0005466030 nova_compute[230518]: 2025-10-02 12:23:40.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:41 np0005466030 nova_compute[230518]: 2025-10-02 12:23:41.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:41 np0005466030 nova_compute[230518]: 2025-10-02 12:23:41.264 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:41 np0005466030 nova_compute[230518]: 2025-10-02 12:23:41.265 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:41 np0005466030 nova_compute[230518]: 2025-10-02 12:23:41.265 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:41 np0005466030 nova_compute[230518]: 2025-10-02 12:23:41.265 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:23:41 np0005466030 nova_compute[230518]: 2025-10-02 12:23:41.266 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:41.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:41.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:23:41 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/550051481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:23:41 np0005466030 nova_compute[230518]: 2025-10-02 12:23:41.721 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:41 np0005466030 nova_compute[230518]: 2025-10-02 12:23:41.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:41 np0005466030 nova_compute[230518]: 2025-10-02 12:23:41.877 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:23:41 np0005466030 nova_compute[230518]: 2025-10-02 12:23:41.878 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4685MB free_disk=20.942630767822266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:23:41 np0005466030 nova_compute[230518]: 2025-10-02 12:23:41.878 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:41 np0005466030 nova_compute[230518]: 2025-10-02 12:23:41.878 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.036 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance bebdf690-5f58-4227-95e0-add2eae14645 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.037 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.037 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.087 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.211 2 DEBUG nova.compute.manager [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-2ef879b2-3519-40b6-8207-d24b0e1a39de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.212 2 DEBUG oslo_concurrency.lockutils [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.212 2 DEBUG oslo_concurrency.lockutils [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.212 2 DEBUG oslo_concurrency.lockutils [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.212 2 DEBUG nova.compute.manager [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-plugged-2ef879b2-3519-40b6-8207-d24b0e1a39de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.213 2 WARNING nova.compute.manager [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-2ef879b2-3519-40b6-8207-d24b0e1a39de for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.213 2 DEBUG nova.compute.manager [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.213 2 DEBUG oslo_concurrency.lockutils [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.213 2 DEBUG oslo_concurrency.lockutils [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.213 2 DEBUG oslo_concurrency.lockutils [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.214 2 DEBUG nova.compute.manager [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-unplugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.214 2 DEBUG nova.compute.manager [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.214 2 DEBUG nova.compute.manager [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.214 2 DEBUG oslo_concurrency.lockutils [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.214 2 DEBUG oslo_concurrency.lockutils [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.215 2 DEBUG oslo_concurrency.lockutils [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.215 2 DEBUG nova.compute.manager [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-plugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.215 2 WARNING nova.compute.manager [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e205 e205: 3 total, 3 up, 3 in
Oct  2 08:23:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:23:42 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3492679320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.515 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.521 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.592 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.727 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:23:42 np0005466030 nova_compute[230518]: 2025-10-02 12:23:42.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:42 np0005466030 podman[252105]: 2025-10-02 12:23:42.837680168 +0000 UTC m=+0.077809409 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:23:42 np0005466030 podman[252106]: 2025-10-02 12:23:42.838194174 +0000 UTC m=+0.079236504 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 08:23:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:23:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:43.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:23:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:23:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:43.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:23:44 np0005466030 nova_compute[230518]: 2025-10-02 12:23:44.339 2 DEBUG nova.compute.manager [req-3dde3b70-988a-4ca1-a3a6-396c815c9a10 req-2a792338-cabc-4b32-a1db-ade24eaeeb27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-deleted-2ef879b2-3519-40b6-8207-d24b0e1a39de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:44 np0005466030 nova_compute[230518]: 2025-10-02 12:23:44.340 2 INFO nova.compute.manager [req-3dde3b70-988a-4ca1-a3a6-396c815c9a10 req-2a792338-cabc-4b32-a1db-ade24eaeeb27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Neutron deleted interface 2ef879b2-3519-40b6-8207-d24b0e1a39de; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:23:44 np0005466030 nova_compute[230518]: 2025-10-02 12:23:44.340 2 DEBUG nova.network.neutron [req-3dde3b70-988a-4ca1-a3a6-396c815c9a10 req-2a792338-cabc-4b32-a1db-ade24eaeeb27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [{"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:23:44 np0005466030 nova_compute[230518]: 2025-10-02 12:23:44.395 2 DEBUG nova.compute.manager [req-3dde3b70-988a-4ca1-a3a6-396c815c9a10 req-2a792338-cabc-4b32-a1db-ade24eaeeb27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Detach interface failed, port_id=2ef879b2-3519-40b6-8207-d24b0e1a39de, reason: Instance bebdf690-5f58-4227-95e0-add2eae14645 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:23:44 np0005466030 nova_compute[230518]: 2025-10-02 12:23:44.729 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:44 np0005466030 nova_compute[230518]: 2025-10-02 12:23:44.757 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:44 np0005466030 nova_compute[230518]: 2025-10-02 12:23:44.758 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:44 np0005466030 nova_compute[230518]: 2025-10-02 12:23:44.759 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:23:45 np0005466030 nova_compute[230518]: 2025-10-02 12:23:45.109 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "c1597192-3527-4620-a21f-0e71c9c1c09d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:45 np0005466030 nova_compute[230518]: 2025-10-02 12:23:45.110 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:45 np0005466030 nova_compute[230518]: 2025-10-02 12:23:45.149 2 DEBUG nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:23:45 np0005466030 nova_compute[230518]: 2025-10-02 12:23:45.270 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:45 np0005466030 nova_compute[230518]: 2025-10-02 12:23:45.271 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:45 np0005466030 nova_compute[230518]: 2025-10-02 12:23:45.278 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:23:45 np0005466030 nova_compute[230518]: 2025-10-02 12:23:45.278 2 INFO nova.compute.claims [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:23:45 np0005466030 nova_compute[230518]: 2025-10-02 12:23:45.475 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:23:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:45.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:23:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:45.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:23:45 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3521193034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:23:45 np0005466030 nova_compute[230518]: 2025-10-02 12:23:45.927 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:45 np0005466030 nova_compute[230518]: 2025-10-02 12:23:45.932 2 DEBUG nova.compute.provider_tree [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:23:45 np0005466030 nova_compute[230518]: 2025-10-02 12:23:45.959 2 DEBUG nova.scheduler.client.report [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:23:45 np0005466030 nova_compute[230518]: 2025-10-02 12:23:45.987 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:45 np0005466030 nova_compute[230518]: 2025-10-02 12:23:45.987 2 DEBUG nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.064 2 DEBUG nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.064 2 DEBUG nova.network.neutron [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.102 2 INFO nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.137 2 DEBUG nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.253 2 DEBUG nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.254 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.255 2 INFO nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Creating image(s)#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.289 2 DEBUG nova.storage.rbd_utils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image c1597192-3527-4620-a21f-0e71c9c1c09d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.325 2 DEBUG nova.storage.rbd_utils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image c1597192-3527-4620-a21f-0e71c9c1c09d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.357 2 DEBUG nova.storage.rbd_utils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image c1597192-3527-4620-a21f-0e71c9c1c09d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.361 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.442 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.443 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.443 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.444 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.473 2 DEBUG nova.storage.rbd_utils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image c1597192-3527-4620-a21f-0e71c9c1c09d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.477 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c1597192-3527-4620-a21f-0e71c9c1c09d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.509 2 DEBUG nova.compute.manager [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-deleted-6a13b8d9-269d-4176-b4c7-693a5e26e74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.509 2 INFO nova.compute.manager [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Neutron deleted interface 6a13b8d9-269d-4176-b4c7-693a5e26e74b; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.509 2 DEBUG nova.network.neutron [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [{"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.571 2 DEBUG nova.compute.manager [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Detach interface failed, port_id=6a13b8d9-269d-4176-b4c7-693a5e26e74b, reason: Instance bebdf690-5f58-4227-95e0-add2eae14645 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.571 2 DEBUG nova.compute.manager [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-deleted-c9731d13-4315-4bdc-9d24-a91ce1d8d427 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.571 2 INFO nova.compute.manager [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Neutron deleted interface c9731d13-4315-4bdc-9d24-a91ce1d8d427; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.572 2 DEBUG nova.network.neutron [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [{"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.595 2 DEBUG nova.compute.manager [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Detach interface failed, port_id=c9731d13-4315-4bdc-9d24-a91ce1d8d427, reason: Instance bebdf690-5f58-4227-95e0-add2eae14645 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.595 2 DEBUG nova.compute.manager [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-deleted-b84676b0-d376-4ced-99fb-08e677046d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.596 2 INFO nova.compute.manager [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Neutron deleted interface b84676b0-d376-4ced-99fb-08e677046d6f; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.596 2 DEBUG nova.network.neutron [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [{"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.632 2 DEBUG nova.compute.manager [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Detach interface failed, port_id=b84676b0-d376-4ced-99fb-08e677046d6f, reason: Instance bebdf690-5f58-4227-95e0-add2eae14645 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.751 2 DEBUG nova.policy [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'afacfeac9efc4e6fbb83ebe4fe9a8f38', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:23:46 np0005466030 nova_compute[230518]: 2025-10-02 12:23:46.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:47 np0005466030 nova_compute[230518]: 2025-10-02 12:23:47.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:47 np0005466030 nova_compute[230518]: 2025-10-02 12:23:47.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:23:47 np0005466030 nova_compute[230518]: 2025-10-02 12:23:47.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:23:47 np0005466030 nova_compute[230518]: 2025-10-02 12:23:47.170 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:23:47 np0005466030 nova_compute[230518]: 2025-10-02 12:23:47.171 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 08:23:47 np0005466030 nova_compute[230518]: 2025-10-02 12:23:47.171 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:23:47 np0005466030 nova_compute[230518]: 2025-10-02 12:23:47.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:47 np0005466030 nova_compute[230518]: 2025-10-02 12:23:47.592 2 DEBUG nova.network.neutron [-] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:23:47 np0005466030 nova_compute[230518]: 2025-10-02 12:23:47.647 2 INFO nova.compute.manager [-] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Took 9.65 seconds to deallocate network for instance.#033[00m
Oct  2 08:23:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:47.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:23:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:47.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:23:47 np0005466030 nova_compute[230518]: 2025-10-02 12:23:47.970 2 DEBUG nova.network.neutron [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Successfully created port: a27d28bd-0aac-45b1-9b85-fa648038cccc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:23:48 np0005466030 nova_compute[230518]: 2025-10-02 12:23:48.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:23:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:48 np0005466030 nova_compute[230518]: 2025-10-02 12:23:48.589 2 INFO nova.compute.manager [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Took 0.94 seconds to detach 3 volumes for instance.#033[00m
Oct  2 08:23:48 np0005466030 nova_compute[230518]: 2025-10-02 12:23:48.609 2 DEBUG nova.compute.manager [req-a7073526-4c17-42ca-8995-d797c7f6a4b7 req-872e9e0e-1d08-46ad-a612-94b08c748be2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-deleted-af15c204-50a0-4b32-a3a7-46c9b925ec87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:48 np0005466030 nova_compute[230518]: 2025-10-02 12:23:48.650 2 DEBUG oslo_concurrency.lockutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:48 np0005466030 nova_compute[230518]: 2025-10-02 12:23:48.651 2 DEBUG oslo_concurrency.lockutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:48 np0005466030 nova_compute[230518]: 2025-10-02 12:23:48.687 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c1597192-3527-4620-a21f-0e71c9c1c09d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:48 np0005466030 nova_compute[230518]: 2025-10-02 12:23:48.800 2 DEBUG nova.storage.rbd_utils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] resizing rbd image c1597192-3527-4620-a21f-0e71c9c1c09d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:23:48 np0005466030 nova_compute[230518]: 2025-10-02 12:23:48.855 2 DEBUG oslo_concurrency.processutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:48 np0005466030 nova_compute[230518]: 2025-10-02 12:23:48.967 2 DEBUG nova.objects.instance [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'migration_context' on Instance uuid c1597192-3527-4620-a21f-0e71c9c1c09d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:23:48 np0005466030 nova_compute[230518]: 2025-10-02 12:23:48.990 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:23:48 np0005466030 nova_compute[230518]: 2025-10-02 12:23:48.991 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Ensure instance console log exists: /var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:23:48 np0005466030 nova_compute[230518]: 2025-10-02 12:23:48.991 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:48 np0005466030 nova_compute[230518]: 2025-10-02 12:23:48.992 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:48 np0005466030 nova_compute[230518]: 2025-10-02 12:23:48.993 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:49 np0005466030 nova_compute[230518]: 2025-10-02 12:23:49.294 2 DEBUG nova.network.neutron [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Successfully updated port: a27d28bd-0aac-45b1-9b85-fa648038cccc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:23:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:23:49 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/334952972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:23:49 np0005466030 nova_compute[230518]: 2025-10-02 12:23:49.314 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "refresh_cache-c1597192-3527-4620-a21f-0e71c9c1c09d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:23:49 np0005466030 nova_compute[230518]: 2025-10-02 12:23:49.314 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquired lock "refresh_cache-c1597192-3527-4620-a21f-0e71c9c1c09d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:23:49 np0005466030 nova_compute[230518]: 2025-10-02 12:23:49.314 2 DEBUG nova.network.neutron [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:23:49 np0005466030 nova_compute[230518]: 2025-10-02 12:23:49.317 2 DEBUG oslo_concurrency.processutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:49 np0005466030 nova_compute[230518]: 2025-10-02 12:23:49.322 2 DEBUG nova.compute.provider_tree [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:23:49 np0005466030 nova_compute[230518]: 2025-10-02 12:23:49.343 2 DEBUG nova.scheduler.client.report [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:23:49 np0005466030 nova_compute[230518]: 2025-10-02 12:23:49.369 2 DEBUG oslo_concurrency.lockutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:49 np0005466030 nova_compute[230518]: 2025-10-02 12:23:49.403 2 INFO nova.scheduler.client.report [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Deleted allocations for instance bebdf690-5f58-4227-95e0-add2eae14645#033[00m
Oct  2 08:23:49 np0005466030 nova_compute[230518]: 2025-10-02 12:23:49.460 2 DEBUG oslo_concurrency.lockutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:49 np0005466030 nova_compute[230518]: 2025-10-02 12:23:49.512 2 DEBUG nova.network.neutron [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:23:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:49.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:23:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:49.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.691 2 DEBUG nova.network.neutron [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Updating instance_info_cache with network_info: [{"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.769 2 DEBUG nova.compute.manager [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Received event network-changed-a27d28bd-0aac-45b1-9b85-fa648038cccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.769 2 DEBUG nova.compute.manager [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Refreshing instance network info cache due to event network-changed-a27d28bd-0aac-45b1-9b85-fa648038cccc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.770 2 DEBUG oslo_concurrency.lockutils [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c1597192-3527-4620-a21f-0e71c9c1c09d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.802 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Releasing lock "refresh_cache-c1597192-3527-4620-a21f-0e71c9c1c09d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.803 2 DEBUG nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Instance network_info: |[{"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.804 2 DEBUG oslo_concurrency.lockutils [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c1597192-3527-4620-a21f-0e71c9c1c09d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.805 2 DEBUG nova.network.neutron [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Refreshing network info cache for port a27d28bd-0aac-45b1-9b85-fa648038cccc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.810 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Start _get_guest_xml network_info=[{"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.815 2 WARNING nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.837 2 DEBUG nova.virt.libvirt.host [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.838 2 DEBUG nova.virt.libvirt.host [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.841 2 DEBUG nova.virt.libvirt.host [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.841 2 DEBUG nova.virt.libvirt.host [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.843 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.843 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.844 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.844 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.844 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.845 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.845 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.845 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.846 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.846 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.846 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.846 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:23:50 np0005466030 nova_compute[230518]: 2025-10-02 12:23:50.851 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:23:51 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1853136177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:23:51 np0005466030 nova_compute[230518]: 2025-10-02 12:23:51.637 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407816.6360064, bebdf690-5f58-4227-95e0-add2eae14645 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:23:51 np0005466030 nova_compute[230518]: 2025-10-02 12:23:51.638 2 INFO nova.compute.manager [-] [instance: bebdf690-5f58-4227-95e0-add2eae14645] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:23:51 np0005466030 nova_compute[230518]: 2025-10-02 12:23:51.661 2 DEBUG nova.compute.manager [None req-b177e91f-18f6-43b9-bf0d-e74fe8557074 - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:51.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:51.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:51 np0005466030 nova_compute[230518]: 2025-10-02 12:23:51.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.113 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.147 2 DEBUG nova.storage.rbd_utils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image c1597192-3527-4620-a21f-0e71c9c1c09d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.152 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:23:52 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3028286666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.673 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.675 2 DEBUG nova.virt.libvirt.vif [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:23:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-984879556',display_name='tempest-ImagesTestJSON-server-984879556',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-984879556',id=54,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-0xmwc9tk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:23:46Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=c1597192-3527-4620-a21f-0e71c9c1c09d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.675 2 DEBUG nova.network.os_vif_util [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.676 2 DEBUG nova.network.os_vif_util [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:14:6c,bridge_name='br-int',has_traffic_filtering=True,id=a27d28bd-0aac-45b1-9b85-fa648038cccc,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa27d28bd-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.677 2 DEBUG nova.objects.instance [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'pci_devices' on Instance uuid c1597192-3527-4620-a21f-0e71c9c1c09d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.696 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:23:52 np0005466030 nova_compute[230518]:  <uuid>c1597192-3527-4620-a21f-0e71c9c1c09d</uuid>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:  <name>instance-00000036</name>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <nova:name>tempest-ImagesTestJSON-server-984879556</nova:name>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:23:50</nova:creationTime>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:23:52 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:        <nova:user uuid="afacfeac9efc4e6fbb83ebe4fe9a8f38">tempest-ImagesTestJSON-1681256609-project-member</nova:user>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:        <nova:project uuid="d0ebb2827cb241e499606ce3a3c67d24">tempest-ImagesTestJSON-1681256609</nova:project>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:        <nova:port uuid="a27d28bd-0aac-45b1-9b85-fa648038cccc">
Oct  2 08:23:52 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <entry name="serial">c1597192-3527-4620-a21f-0e71c9c1c09d</entry>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <entry name="uuid">c1597192-3527-4620-a21f-0e71c9c1c09d</entry>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/c1597192-3527-4620-a21f-0e71c9c1c09d_disk">
Oct  2 08:23:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:23:52 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/c1597192-3527-4620-a21f-0e71c9c1c09d_disk.config">
Oct  2 08:23:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:23:52 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:bb:14:6c"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <target dev="tapa27d28bd-0a"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d/console.log" append="off"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:23:52 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:23:52 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:23:52 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:23:52 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.698 2 DEBUG nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Preparing to wait for external event network-vif-plugged-a27d28bd-0aac-45b1-9b85-fa648038cccc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.698 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.698 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.699 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.699 2 DEBUG nova.virt.libvirt.vif [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:23:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-984879556',display_name='tempest-ImagesTestJSON-server-984879556',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-984879556',id=54,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-0xmwc9tk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:23:46Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=c1597192-3527-4620-a21f-0e71c9c1c09d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.700 2 DEBUG nova.network.os_vif_util [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.700 2 DEBUG nova.network.os_vif_util [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:14:6c,bridge_name='br-int',has_traffic_filtering=True,id=a27d28bd-0aac-45b1-9b85-fa648038cccc,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa27d28bd-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.701 2 DEBUG os_vif [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:14:6c,bridge_name='br-int',has_traffic_filtering=True,id=a27d28bd-0aac-45b1-9b85-fa648038cccc,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa27d28bd-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa27d28bd-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa27d28bd-0a, col_values=(('external_ids', {'iface-id': 'a27d28bd-0aac-45b1-9b85-fa648038cccc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:14:6c', 'vm-uuid': 'c1597192-3527-4620-a21f-0e71c9c1c09d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:52 np0005466030 NetworkManager[44960]: <info>  [1759407832.7095] manager: (tapa27d28bd-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.714 2 INFO os_vif [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:14:6c,bridge_name='br-int',has_traffic_filtering=True,id=a27d28bd-0aac-45b1-9b85-fa648038cccc,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa27d28bd-0a')#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.886 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.886 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.887 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No VIF found with MAC fa:16:3e:bb:14:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.887 2 INFO nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Using config drive#033[00m
Oct  2 08:23:52 np0005466030 nova_compute[230518]: 2025-10-02 12:23:52.911 2 DEBUG nova.storage.rbd_utils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image c1597192-3527-4620-a21f-0e71c9c1c09d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:53 np0005466030 nova_compute[230518]: 2025-10-02 12:23:53.341 2 INFO nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Creating config drive at /var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d/disk.config#033[00m
Oct  2 08:23:53 np0005466030 nova_compute[230518]: 2025-10-02 12:23:53.350 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl2zi_8t_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:53 np0005466030 nova_compute[230518]: 2025-10-02 12:23:53.504 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl2zi_8t_" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:53 np0005466030 nova_compute[230518]: 2025-10-02 12:23:53.548 2 DEBUG nova.storage.rbd_utils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image c1597192-3527-4620-a21f-0e71c9c1c09d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:23:53 np0005466030 nova_compute[230518]: 2025-10-02 12:23:53.554 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d/disk.config c1597192-3527-4620-a21f-0e71c9c1c09d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:23:53 np0005466030 nova_compute[230518]: 2025-10-02 12:23:53.589 2 DEBUG nova.network.neutron [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Updated VIF entry in instance network info cache for port a27d28bd-0aac-45b1-9b85-fa648038cccc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:23:53 np0005466030 nova_compute[230518]: 2025-10-02 12:23:53.591 2 DEBUG nova.network.neutron [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Updating instance_info_cache with network_info: [{"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:23:53 np0005466030 nova_compute[230518]: 2025-10-02 12:23:53.622 2 DEBUG oslo_concurrency.lockutils [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c1597192-3527-4620-a21f-0e71c9c1c09d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:23:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:53.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:23:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:53.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:23:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:55.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:55.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:56 np0005466030 nova_compute[230518]: 2025-10-02 12:23:56.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:56 np0005466030 nova_compute[230518]: 2025-10-02 12:23:56.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:56 np0005466030 nova_compute[230518]: 2025-10-02 12:23:56.990 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d/disk.config c1597192-3527-4620-a21f-0e71c9c1c09d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:23:56 np0005466030 nova_compute[230518]: 2025-10-02 12:23:56.991 2 INFO nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Deleting local config drive /var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d/disk.config because it was imported into RBD.#033[00m
Oct  2 08:23:57 np0005466030 kernel: tapa27d28bd-0a: entered promiscuous mode
Oct  2 08:23:57 np0005466030 NetworkManager[44960]: <info>  [1759407837.0505] manager: (tapa27d28bd-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/110)
Oct  2 08:23:57 np0005466030 nova_compute[230518]: 2025-10-02 12:23:57.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:57 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:57Z|00246|binding|INFO|Claiming lport a27d28bd-0aac-45b1-9b85-fa648038cccc for this chassis.
Oct  2 08:23:57 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:57Z|00247|binding|INFO|a27d28bd-0aac-45b1-9b85-fa648038cccc: Claiming fa:16:3e:bb:14:6c 10.100.0.3
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.063 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:14:6c 10.100.0.3'], port_security=['fa:16:3e:bb:14:6c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c1597192-3527-4620-a21f-0e71c9c1c09d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a27d28bd-0aac-45b1-9b85-fa648038cccc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.065 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a27d28bd-0aac-45b1-9b85-fa648038cccc in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 bound to our chassis#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.067 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d68ff9e0-aff2-4eda-8590-74da7cfc5671#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.078 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0f0d16d9-c3ea-4be4-b57a-ff7d4f3eaa59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.079 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd68ff9e0-a1 in ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.081 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd68ff9e0-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.081 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8bebe24b-22f8-4f6a-b8c4-3fc725984841]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.082 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c32f6b4e-b039-41c8-87bd-55e303e652d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466030 systemd-udevd[252493]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.093 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1747ef6f-8d89-46d6-acfe-972345d7e263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466030 systemd-machined[188247]: New machine qemu-28-instance-00000036.
Oct  2 08:23:57 np0005466030 NetworkManager[44960]: <info>  [1759407837.0982] device (tapa27d28bd-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:23:57 np0005466030 NetworkManager[44960]: <info>  [1759407837.0995] device (tapa27d28bd-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:23:57 np0005466030 systemd[1]: Started Virtual Machine qemu-28-instance-00000036.
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.117 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[77915704-3108-4d50-a1e0-ac51498f3f1f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:57Z|00248|binding|INFO|Setting lport a27d28bd-0aac-45b1-9b85-fa648038cccc ovn-installed in OVS
Oct  2 08:23:57 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:57Z|00249|binding|INFO|Setting lport a27d28bd-0aac-45b1-9b85-fa648038cccc up in Southbound
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.142 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ac3866b2-8ff9-4ed0-98cf-e5b39af3645c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466030 nova_compute[230518]: 2025-10-02 12:23:57.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:57 np0005466030 nova_compute[230518]: 2025-10-02 12:23:57.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.161 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2a45c88a-692d-4d0f-b5cd-94f55ca37527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466030 NetworkManager[44960]: <info>  [1759407837.1622] manager: (tapd68ff9e0-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/111)
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.190 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[05cb5af5-21b7-4e51-a1bf-01f3840648c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.193 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f91c1c-0110-4862-9f41-326344915df9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466030 NetworkManager[44960]: <info>  [1759407837.2199] device (tapd68ff9e0-a0): carrier: link connected
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.225 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7c338eaa-193e-4722-b0d3-bf6523353ce4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.240 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[596d6cdd-55c2-4266-a88d-ee3c39b8fea2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd68ff9e0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570077, 'reachable_time': 26339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252526, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466030 nova_compute[230518]: 2025-10-02 12:23:57.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.257 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[01c2d87a-2a35-4be9-95a2-31ecd9eeeb6b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:d99e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570077, 'tstamp': 570077}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252527, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.273 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d8818ad9-451b-47b7-860f-9eaa1932db07]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd68ff9e0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570077, 'reachable_time': 26339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252528, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.302 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7ae80e06-fc08-4aa3-8955-1d7bc18e63a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.360 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ab6f3abf-c65c-437b-af8e-0453604b95b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.361 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd68ff9e0-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.362 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.362 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd68ff9e0-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:57 np0005466030 nova_compute[230518]: 2025-10-02 12:23:57.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:57 np0005466030 kernel: tapd68ff9e0-a0: entered promiscuous mode
Oct  2 08:23:57 np0005466030 NetworkManager[44960]: <info>  [1759407837.3643] manager: (tapd68ff9e0-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Oct  2 08:23:57 np0005466030 nova_compute[230518]: 2025-10-02 12:23:57.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.370 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd68ff9e0-a0, col_values=(('external_ids', {'iface-id': 'c0382cb4-7e26-44bc-8951-80e73f21067a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:23:57 np0005466030 nova_compute[230518]: 2025-10-02 12:23:57.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:57 np0005466030 ovn_controller[129257]: 2025-10-02T12:23:57Z|00250|binding|INFO|Releasing lport c0382cb4-7e26-44bc-8951-80e73f21067a from this chassis (sb_readonly=0)
Oct  2 08:23:57 np0005466030 nova_compute[230518]: 2025-10-02 12:23:57.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:57 np0005466030 nova_compute[230518]: 2025-10-02 12:23:57.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.385 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.386 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f27569d0-dd12-41e4-a54e-e08e510fdf19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.387 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:23:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.387 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'env', 'PROCESS_TAG=haproxy-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d68ff9e0-aff2-4eda-8590-74da7cfc5671.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:23:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:57.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:57.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:57 np0005466030 nova_compute[230518]: 2025-10-02 12:23:57.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:23:57 np0005466030 podman[252660]: 2025-10-02 12:23:57.776047747 +0000 UTC m=+0.080225255 container create 3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:23:57 np0005466030 systemd[1]: Started libpod-conmon-3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102.scope.
Oct  2 08:23:57 np0005466030 podman[252660]: 2025-10-02 12:23:57.736750381 +0000 UTC m=+0.040927879 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:23:57 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:23:57 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8253beedaf47a6624a8aab6d6b150b50161b15aea48a77330b21dba701fe6f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:23:57 np0005466030 podman[252660]: 2025-10-02 12:23:57.876035013 +0000 UTC m=+0.180212521 container init 3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:23:57 np0005466030 podman[252660]: 2025-10-02 12:23:57.88355317 +0000 UTC m=+0.187730658 container start 3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:23:57 np0005466030 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[252688]: [NOTICE]   (252695) : New worker (252697) forked
Oct  2 08:23:57 np0005466030 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[252688]: [NOTICE]   (252695) : Loading success.
Oct  2 08:23:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.150 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407839.1495578, c1597192-3527-4620-a21f-0e71c9c1c09d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.151 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] VM Started (Lifecycle Event)#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.201 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.210 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407839.150226, c1597192-3527-4620-a21f-0e71c9c1c09d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.211 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.354 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.359 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.426 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.580 2 DEBUG nova.compute.manager [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Received event network-vif-plugged-a27d28bd-0aac-45b1-9b85-fa648038cccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.581 2 DEBUG oslo_concurrency.lockutils [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.581 2 DEBUG oslo_concurrency.lockutils [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.582 2 DEBUG oslo_concurrency.lockutils [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.582 2 DEBUG nova.compute.manager [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Processing event network-vif-plugged-a27d28bd-0aac-45b1-9b85-fa648038cccc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.582 2 DEBUG nova.compute.manager [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Received event network-vif-plugged-a27d28bd-0aac-45b1-9b85-fa648038cccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.583 2 DEBUG oslo_concurrency.lockutils [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.583 2 DEBUG oslo_concurrency.lockutils [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.583 2 DEBUG oslo_concurrency.lockutils [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.584 2 DEBUG nova.compute.manager [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] No waiting events found dispatching network-vif-plugged-a27d28bd-0aac-45b1-9b85-fa648038cccc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.584 2 WARNING nova.compute.manager [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Received unexpected event network-vif-plugged-a27d28bd-0aac-45b1-9b85-fa648038cccc for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.585 2 DEBUG nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.589 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407839.5880425, c1597192-3527-4620-a21f-0e71c9c1c09d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.589 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.591 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.595 2 INFO nova.virt.libvirt.driver [-] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Instance spawned successfully.#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.596 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.625 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.629 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.630 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.630 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.630 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.631 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.631 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.635 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.663 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:23:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:59.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:23:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:23:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:59.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.702 2 INFO nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Took 13.45 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.703 2 DEBUG nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.788 2 INFO nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Took 14.54 seconds to build instance.#033[00m
Oct  2 08:23:59 np0005466030 nova_compute[230518]: 2025-10-02 12:23:59.826 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:24:00 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1269643199' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:24:00 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:24:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:24:00 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1269643199' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:24:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:24:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:24:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:24:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:24:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:01.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:01.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:01 np0005466030 nova_compute[230518]: 2025-10-02 12:24:01.844 2 DEBUG nova.compute.manager [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:24:01 np0005466030 nova_compute[230518]: 2025-10-02 12:24:01.924 2 INFO nova.compute.manager [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] instance snapshotting#033[00m
Oct  2 08:24:02 np0005466030 nova_compute[230518]: 2025-10-02 12:24:02.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:02 np0005466030 nova_compute[230518]: 2025-10-02 12:24:02.275 2 INFO nova.virt.libvirt.driver [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Beginning live snapshot process#033[00m
Oct  2 08:24:02 np0005466030 nova_compute[230518]: 2025-10-02 12:24:02.495 2 DEBUG nova.virt.libvirt.imagebackend [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:24:02 np0005466030 nova_compute[230518]: 2025-10-02 12:24:02.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:02 np0005466030 nova_compute[230518]: 2025-10-02 12:24:02.911 2 DEBUG nova.storage.rbd_utils [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] creating snapshot(de7f12910f6e4a0f8d9cc3e61ebe2d70) on rbd image(c1597192-3527-4620-a21f-0e71c9c1c09d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:24:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:03.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:03.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e206 e206: 3 total, 3 up, 3 in
Oct  2 08:24:04 np0005466030 podman[252816]: 2025-10-02 12:24:04.804118272 +0000 UTC m=+0.056583281 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  2 08:24:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:24:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2727269195' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:24:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:24:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2727269195' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:24:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:05.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:05.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:06 np0005466030 podman[252835]: 2025-10-02 12:24:06.824891838 +0000 UTC m=+0.078463029 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Oct  2 08:24:07 np0005466030 nova_compute[230518]: 2025-10-02 12:24:07.046 2 DEBUG nova.storage.rbd_utils [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] cloning vms/c1597192-3527-4620-a21f-0e71c9c1c09d_disk@de7f12910f6e4a0f8d9cc3e61ebe2d70 to images/cf745836-410d-40ce-8229-51a950b72ba1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct  2 08:24:07 np0005466030 nova_compute[230518]: 2025-10-02 12:24:07.171 2 DEBUG nova.storage.rbd_utils [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] flattening images/cf745836-410d-40ce-8229-51a950b72ba1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct  2 08:24:07 np0005466030 nova_compute[230518]: 2025-10-02 12:24:07.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:24:07 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3803416719' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:24:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:24:07 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3803416719' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:24:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:07.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:07.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:07 np0005466030 nova_compute[230518]: 2025-10-02 12:24:07.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:09.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:09.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:09 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Oct  2 08:24:09 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:09.952148) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:24:09 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Oct  2 08:24:09 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407849952197, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2807, "num_deletes": 518, "total_data_size": 5958896, "memory_usage": 6044920, "flush_reason": "Manual Compaction"}
Oct  2 08:24:09 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Oct  2 08:24:09 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407849968543, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 3917090, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30835, "largest_seqno": 33637, "table_properties": {"data_size": 3905675, "index_size": 7013, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 26911, "raw_average_key_size": 20, "raw_value_size": 3880750, "raw_average_value_size": 2917, "num_data_blocks": 302, "num_entries": 1330, "num_filter_entries": 1330, "num_deletions": 518, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407647, "oldest_key_time": 1759407647, "file_creation_time": 1759407849, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:24:09 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 16438 microseconds, and 7546 cpu microseconds.
Oct  2 08:24:09 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:24:09 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:09.968583) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 3917090 bytes OK
Oct  2 08:24:09 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:09.968603) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Oct  2 08:24:09 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:09.970363) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Oct  2 08:24:09 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:09.970375) EVENT_LOG_v1 {"time_micros": 1759407849970371, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:24:09 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:09.970389) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:24:09 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 5945329, prev total WAL file size 5945329, number of live WAL files 2.
Oct  2 08:24:09 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:24:09 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:09.971501) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct  2 08:24:09 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:24:09 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(3825KB)], [60(8427KB)]
Oct  2 08:24:09 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407849971526, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 12547249, "oldest_snapshot_seqno": -1}
Oct  2 08:24:09 np0005466030 nova_compute[230518]: 2025-10-02 12:24:09.989 2 DEBUG nova.storage.rbd_utils [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] removing snapshot(de7f12910f6e4a0f8d9cc3e61ebe2d70) on rbd image(c1597192-3527-4620-a21f-0e71c9c1c09d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct  2 08:24:10 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 5687 keys, 10471001 bytes, temperature: kUnknown
Oct  2 08:24:10 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407850033023, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 10471001, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10430571, "index_size": 25088, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14277, "raw_key_size": 146270, "raw_average_key_size": 25, "raw_value_size": 10325917, "raw_average_value_size": 1815, "num_data_blocks": 1008, "num_entries": 5687, "num_filter_entries": 5687, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759407849, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:24:10 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:10.033434) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10471001 bytes
Oct  2 08:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:10.035602) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.6 rd, 169.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.2 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 6739, records dropped: 1052 output_compression: NoCompression
Oct  2 08:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:10.035658) EVENT_LOG_v1 {"time_micros": 1759407850035637, "job": 36, "event": "compaction_finished", "compaction_time_micros": 61613, "compaction_time_cpu_micros": 21429, "output_level": 6, "num_output_files": 1, "total_output_size": 10471001, "num_input_records": 6739, "num_output_records": 5687, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:24:10 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:24:10 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407850037116, "job": 36, "event": "table_file_deletion", "file_number": 62}
Oct  2 08:24:10 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:24:10 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407850039019, "job": 36, "event": "table_file_deletion", "file_number": 60}
Oct  2 08:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:09.971439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:10.039066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:10.039072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:10.039074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:10.039076) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:10.039078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:24:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:24:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:11.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:24:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:11.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:12 np0005466030 nova_compute[230518]: 2025-10-02 12:24:12.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:12 np0005466030 nova_compute[230518]: 2025-10-02 12:24:12.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e207 e207: 3 total, 3 up, 3 in
Oct  2 08:24:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:13.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:13.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:13 np0005466030 podman[252934]: 2025-10-02 12:24:13.811049214 +0000 UTC m=+0.056263240 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 08:24:13 np0005466030 podman[252935]: 2025-10-02 12:24:13.816999491 +0000 UTC m=+0.059595346 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:24:13 np0005466030 nova_compute[230518]: 2025-10-02 12:24:13.960 2 DEBUG nova.storage.rbd_utils [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] creating snapshot(snap) on rbd image(cf745836-410d-40ce-8229-51a950b72ba1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct  2 08:24:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e208 e208: 3 total, 3 up, 3 in
Oct  2 08:24:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:15.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:15.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image cf745836-410d-40ce-8229-51a950b72ba1 could not be found.
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID cf745836-410d-40ce-8229-51a950b72ba1
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver 
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver 
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image cf745836-410d-40ce-8229-51a950b72ba1 could not be found.
Oct  2 08:24:16 np0005466030 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver #033[00m
Oct  2 08:24:17 np0005466030 nova_compute[230518]: 2025-10-02 12:24:17.200 2 DEBUG nova.storage.rbd_utils [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] removing snapshot(snap) on rbd image(cf745836-410d-40ce-8229-51a950b72ba1) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:24:17 np0005466030 nova_compute[230518]: 2025-10-02 12:24:17.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:17.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:17 np0005466030 nova_compute[230518]: 2025-10-02 12:24:17.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:17.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:24:18Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bb:14:6c 10.100.0.3
Oct  2 08:24:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:24:18Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bb:14:6c 10.100.0.3
Oct  2 08:24:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e209 e209: 3 total, 3 up, 3 in
Oct  2 08:24:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:19.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:19.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:21 np0005466030 nova_compute[230518]: 2025-10-02 12:24:21.261 2 WARNING nova.compute.manager [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Image not found during snapshot: nova.exception.ImageNotFound: Image cf745836-410d-40ce-8229-51a950b72ba1 could not be found.#033[00m
Oct  2 08:24:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:21.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:21.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.512 2 DEBUG oslo_concurrency.lockutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "c1597192-3527-4620-a21f-0e71c9c1c09d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.513 2 DEBUG oslo_concurrency.lockutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.513 2 DEBUG oslo_concurrency.lockutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.514 2 DEBUG oslo_concurrency.lockutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.514 2 DEBUG oslo_concurrency.lockutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.515 2 INFO nova.compute.manager [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Terminating instance#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.516 2 DEBUG nova.compute.manager [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:24:22 np0005466030 kernel: tapa27d28bd-0a (unregistering): left promiscuous mode
Oct  2 08:24:22 np0005466030 NetworkManager[44960]: <info>  [1759407862.6556] device (tapa27d28bd-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:24:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:24:22Z|00251|binding|INFO|Releasing lport a27d28bd-0aac-45b1-9b85-fa648038cccc from this chassis (sb_readonly=0)
Oct  2 08:24:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:24:22Z|00252|binding|INFO|Setting lport a27d28bd-0aac-45b1-9b85-fa648038cccc down in Southbound
Oct  2 08:24:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:24:22Z|00253|binding|INFO|Removing iface tapa27d28bd-0a ovn-installed in OVS
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.671 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:14:6c 10.100.0.3'], port_security=['fa:16:3e:bb:14:6c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c1597192-3527-4620-a21f-0e71c9c1c09d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a27d28bd-0aac-45b1-9b85-fa648038cccc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:24:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.674 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a27d28bd-0aac-45b1-9b85-fa648038cccc in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 unbound from our chassis#033[00m
Oct  2 08:24:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e210 e210: 3 total, 3 up, 3 in
Oct  2 08:24:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.678 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d68ff9e0-aff2-4eda-8590-74da7cfc5671, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.680 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c5457b-9b03-40ae-a478-358021bb7394]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:24:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.684 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 namespace which is not needed anymore#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:22 np0005466030 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000036.scope: Deactivated successfully.
Oct  2 08:24:22 np0005466030 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000036.scope: Consumed 14.045s CPU time.
Oct  2 08:24:22 np0005466030 systemd-machined[188247]: Machine qemu-28-instance-00000036 terminated.
Oct  2 08:24:22 np0005466030 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[252688]: [NOTICE]   (252695) : haproxy version is 2.8.14-c23fe91
Oct  2 08:24:22 np0005466030 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[252688]: [NOTICE]   (252695) : path to executable is /usr/sbin/haproxy
Oct  2 08:24:22 np0005466030 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[252688]: [WARNING]  (252695) : Exiting Master process...
Oct  2 08:24:22 np0005466030 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[252688]: [ALERT]    (252695) : Current worker (252697) exited with code 143 (Terminated)
Oct  2 08:24:22 np0005466030 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[252688]: [WARNING]  (252695) : All workers exited. Exiting... (0)
Oct  2 08:24:22 np0005466030 systemd[1]: libpod-3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102.scope: Deactivated successfully.
Oct  2 08:24:22 np0005466030 podman[253051]: 2025-10-02 12:24:22.821172067 +0000 UTC m=+0.051967467 container died 3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 08:24:22 np0005466030 systemd[1]: var-lib-containers-storage-overlay-db8253beedaf47a6624a8aab6d6b150b50161b15aea48a77330b21dba701fe6f-merged.mount: Deactivated successfully.
Oct  2 08:24:22 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102-userdata-shm.mount: Deactivated successfully.
Oct  2 08:24:22 np0005466030 podman[253051]: 2025-10-02 12:24:22.85590837 +0000 UTC m=+0.086703760 container cleanup 3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:24:22 np0005466030 systemd[1]: libpod-conmon-3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102.scope: Deactivated successfully.
Oct  2 08:24:22 np0005466030 podman[253081]: 2025-10-02 12:24:22.929389362 +0000 UTC m=+0.048217529 container remove 3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:24:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.938 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[be2436b1-6bcd-4d43-a9b7-564dcbe373e7]: (4, ('Thu Oct  2 12:24:22 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 (3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102)\n3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102\nThu Oct  2 12:24:22 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 (3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102)\n3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:24:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.939 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[812224ca-711a-4c30-b1a9-34597da8ceb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:24:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.941 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd68ff9e0-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.956 2 INFO nova.virt.libvirt.driver [-] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Instance destroyed successfully.#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.957 2 DEBUG nova.objects.instance [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'resources' on Instance uuid c1597192-3527-4620-a21f-0e71c9c1c09d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:24:22 np0005466030 kernel: tapd68ff9e0-a0: left promiscuous mode
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.969 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a4398c09-d69a-4bec-84d5-8c7d493356de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.975 2 DEBUG nova.virt.libvirt.vif [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:23:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-984879556',display_name='tempest-ImagesTestJSON-server-984879556',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-984879556',id=54,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:23:59Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-0xmwc9tk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:24:21Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=c1597192-3527-4620-a21f-0e71c9c1c09d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.976 2 DEBUG nova.network.os_vif_util [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.980 2 DEBUG nova.network.os_vif_util [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:14:6c,bridge_name='br-int',has_traffic_filtering=True,id=a27d28bd-0aac-45b1-9b85-fa648038cccc,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa27d28bd-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.980 2 DEBUG os_vif [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:14:6c,bridge_name='br-int',has_traffic_filtering=True,id=a27d28bd-0aac-45b1-9b85-fa648038cccc,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa27d28bd-0a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.982 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa27d28bd-0a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:22 np0005466030 nova_compute[230518]: 2025-10-02 12:24:22.988 2 INFO os_vif [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:14:6c,bridge_name='br-int',has_traffic_filtering=True,id=a27d28bd-0aac-45b1-9b85-fa648038cccc,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa27d28bd-0a')#033[00m
Oct  2 08:24:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.998 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[27aafe2c-cc2a-46a3-b677-303522690856]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:24:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:23.000 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0cbf7c9e-384e-487e-aeb5-9207ef952d83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:24:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:23.015 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c0757a60-c605-41a0-8285-839a331f741e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570069, 'reachable_time': 37526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253126, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:24:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:23.017 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct  2 08:24:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:23.017 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1c5735c3-4ec4-4491-95a6-d8d9f15ff335]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:24:23 np0005466030 systemd[1]: run-netns-ovnmeta\x2dd68ff9e0\x2daff2\x2d4eda\x2d8590\x2d74da7cfc5671.mount: Deactivated successfully.
Oct  2 08:24:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:23 np0005466030 nova_compute[230518]: 2025-10-02 12:24:23.225 2 DEBUG nova.compute.manager [req-e797cef3-f784-4e1c-ad09-b817ba23ae59 req-0a379a7b-a422-4933-8334-4e042718e13e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Received event network-vif-unplugged-a27d28bd-0aac-45b1-9b85-fa648038cccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:24:23 np0005466030 nova_compute[230518]: 2025-10-02 12:24:23.226 2 DEBUG oslo_concurrency.lockutils [req-e797cef3-f784-4e1c-ad09-b817ba23ae59 req-0a379a7b-a422-4933-8334-4e042718e13e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:24:23 np0005466030 nova_compute[230518]: 2025-10-02 12:24:23.227 2 DEBUG oslo_concurrency.lockutils [req-e797cef3-f784-4e1c-ad09-b817ba23ae59 req-0a379a7b-a422-4933-8334-4e042718e13e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:24:23 np0005466030 nova_compute[230518]: 2025-10-02 12:24:23.227 2 DEBUG oslo_concurrency.lockutils [req-e797cef3-f784-4e1c-ad09-b817ba23ae59 req-0a379a7b-a422-4933-8334-4e042718e13e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:24:23 np0005466030 nova_compute[230518]: 2025-10-02 12:24:23.228 2 DEBUG nova.compute.manager [req-e797cef3-f784-4e1c-ad09-b817ba23ae59 req-0a379a7b-a422-4933-8334-4e042718e13e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] No waiting events found dispatching network-vif-unplugged-a27d28bd-0aac-45b1-9b85-fa648038cccc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:24:23 np0005466030 nova_compute[230518]: 2025-10-02 12:24:23.228 2 DEBUG nova.compute.manager [req-e797cef3-f784-4e1c-ad09-b817ba23ae59 req-0a379a7b-a422-4933-8334-4e042718e13e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Received event network-vif-unplugged-a27d28bd-0aac-45b1-9b85-fa648038cccc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  2 08:24:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:23.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:23.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:25 np0005466030 nova_compute[230518]: 2025-10-02 12:24:25.327 2 DEBUG nova.compute.manager [req-f370c914-f12a-4fad-a27f-598b483b2041 req-90a86a4d-1f08-4441-b64d-227eaf6d02a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Received event network-vif-plugged-a27d28bd-0aac-45b1-9b85-fa648038cccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:24:25 np0005466030 nova_compute[230518]: 2025-10-02 12:24:25.328 2 DEBUG oslo_concurrency.lockutils [req-f370c914-f12a-4fad-a27f-598b483b2041 req-90a86a4d-1f08-4441-b64d-227eaf6d02a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:24:25 np0005466030 nova_compute[230518]: 2025-10-02 12:24:25.328 2 DEBUG oslo_concurrency.lockutils [req-f370c914-f12a-4fad-a27f-598b483b2041 req-90a86a4d-1f08-4441-b64d-227eaf6d02a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:24:25 np0005466030 nova_compute[230518]: 2025-10-02 12:24:25.329 2 DEBUG oslo_concurrency.lockutils [req-f370c914-f12a-4fad-a27f-598b483b2041 req-90a86a4d-1f08-4441-b64d-227eaf6d02a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:24:25 np0005466030 nova_compute[230518]: 2025-10-02 12:24:25.329 2 DEBUG nova.compute.manager [req-f370c914-f12a-4fad-a27f-598b483b2041 req-90a86a4d-1f08-4441-b64d-227eaf6d02a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] No waiting events found dispatching network-vif-plugged-a27d28bd-0aac-45b1-9b85-fa648038cccc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:24:25 np0005466030 nova_compute[230518]: 2025-10-02 12:24:25.329 2 WARNING nova.compute.manager [req-f370c914-f12a-4fad-a27f-598b483b2041 req-90a86a4d-1f08-4441-b64d-227eaf6d02a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Received unexpected event network-vif-plugged-a27d28bd-0aac-45b1-9b85-fa648038cccc for instance with vm_state active and task_state deleting.
Oct  2 08:24:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:25.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:25.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:25.922 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:24:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:25.923 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:24:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:25.923 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:24:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:24:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:24:27 np0005466030 nova_compute[230518]: 2025-10-02 12:24:27.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:27 np0005466030 nova_compute[230518]: 2025-10-02 12:24:27.351 2 INFO nova.virt.libvirt.driver [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Deleting instance files /var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d_del
Oct  2 08:24:27 np0005466030 nova_compute[230518]: 2025-10-02 12:24:27.351 2 INFO nova.virt.libvirt.driver [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Deletion of /var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d_del complete
Oct  2 08:24:27 np0005466030 nova_compute[230518]: 2025-10-02 12:24:27.401 2 INFO nova.compute.manager [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Took 4.88 seconds to destroy the instance on the hypervisor.
Oct  2 08:24:27 np0005466030 nova_compute[230518]: 2025-10-02 12:24:27.402 2 DEBUG oslo.service.loopingcall [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:24:27 np0005466030 nova_compute[230518]: 2025-10-02 12:24:27.402 2 DEBUG nova.compute.manager [-] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:24:27 np0005466030 nova_compute[230518]: 2025-10-02 12:24:27.402 2 DEBUG nova.network.neutron [-] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:24:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:27.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:27.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e211 e211: 3 total, 3 up, 3 in
Oct  2 08:24:27 np0005466030 nova_compute[230518]: 2025-10-02 12:24:27.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:28 np0005466030 nova_compute[230518]: 2025-10-02 12:24:28.130 2 DEBUG nova.network.neutron [-] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:24:28 np0005466030 nova_compute[230518]: 2025-10-02 12:24:28.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:28.146 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:24:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:28.147 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:24:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:28 np0005466030 nova_compute[230518]: 2025-10-02 12:24:28.186 2 INFO nova.compute.manager [-] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Took 0.78 seconds to deallocate network for instance.
Oct  2 08:24:28 np0005466030 nova_compute[230518]: 2025-10-02 12:24:28.231 2 DEBUG oslo_concurrency.lockutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:24:28 np0005466030 nova_compute[230518]: 2025-10-02 12:24:28.231 2 DEBUG oslo_concurrency.lockutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:24:28 np0005466030 nova_compute[230518]: 2025-10-02 12:24:28.280 2 DEBUG nova.compute.manager [req-2a4b04c7-85b4-4294-904e-5f9875767de1 req-aec6e49c-efb7-4abb-96c4-174c93c2e510 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Received event network-vif-deleted-a27d28bd-0aac-45b1-9b85-fa648038cccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:24:28 np0005466030 nova_compute[230518]: 2025-10-02 12:24:28.293 2 DEBUG oslo_concurrency.processutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:24:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:24:28 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2032694493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:24:28 np0005466030 nova_compute[230518]: 2025-10-02 12:24:28.746 2 DEBUG oslo_concurrency.processutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:24:28 np0005466030 nova_compute[230518]: 2025-10-02 12:24:28.755 2 DEBUG nova.compute.provider_tree [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:24:28 np0005466030 nova_compute[230518]: 2025-10-02 12:24:28.784 2 DEBUG nova.scheduler.client.report [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:24:28 np0005466030 nova_compute[230518]: 2025-10-02 12:24:28.815 2 DEBUG oslo_concurrency.lockutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:24:28 np0005466030 nova_compute[230518]: 2025-10-02 12:24:28.905 2 INFO nova.scheduler.client.report [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Deleted allocations for instance c1597192-3527-4620-a21f-0e71c9c1c09d
Oct  2 08:24:29 np0005466030 nova_compute[230518]: 2025-10-02 12:24:29.009 2 DEBUG oslo_concurrency.lockutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.496s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:24:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:29.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:29.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:31.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:31.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:32 np0005466030 nova_compute[230518]: 2025-10-02 12:24:32.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:32 np0005466030 nova_compute[230518]: 2025-10-02 12:24:32.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:33.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:33.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:35.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:35.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:35 np0005466030 podman[253203]: 2025-10-02 12:24:35.809047603 +0000 UTC m=+0.064316004 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 08:24:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:36.150 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:24:37 np0005466030 nova_compute[230518]: 2025-10-02 12:24:37.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:37.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:37.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:37 np0005466030 podman[253222]: 2025-10-02 12:24:37.841583989 +0000 UTC m=+0.098461629 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Oct  2 08:24:37 np0005466030 nova_compute[230518]: 2025-10-02 12:24:37.954 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407862.9531674, c1597192-3527-4620-a21f-0e71c9c1c09d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:24:37 np0005466030 nova_compute[230518]: 2025-10-02 12:24:37.955 2 INFO nova.compute.manager [-] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:24:38 np0005466030 nova_compute[230518]: 2025-10-02 12:24:38.000 2 DEBUG nova.compute.manager [None req-90c98255-3046-41e1-b843-5d4983b69ad8 - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:24:38 np0005466030 nova_compute[230518]: 2025-10-02 12:24:38.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:38 np0005466030 nova_compute[230518]: 2025-10-02 12:24:38.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:38 np0005466030 nova_compute[230518]: 2025-10-02 12:24:38.620 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:38 np0005466030 nova_compute[230518]: 2025-10-02 12:24:38.620 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:38 np0005466030 nova_compute[230518]: 2025-10-02 12:24:38.666 2 DEBUG nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:24:38 np0005466030 nova_compute[230518]: 2025-10-02 12:24:38.773 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:38 np0005466030 nova_compute[230518]: 2025-10-02 12:24:38.774 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:38 np0005466030 nova_compute[230518]: 2025-10-02 12:24:38.785 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:24:38 np0005466030 nova_compute[230518]: 2025-10-02 12:24:38.785 2 INFO nova.compute.claims [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:24:39 np0005466030 nova_compute[230518]: 2025-10-02 12:24:39.065 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:24:39 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2860898462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:24:39 np0005466030 nova_compute[230518]: 2025-10-02 12:24:39.536 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:39 np0005466030 nova_compute[230518]: 2025-10-02 12:24:39.543 2 DEBUG nova.compute.provider_tree [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:24:39 np0005466030 nova_compute[230518]: 2025-10-02 12:24:39.569 2 DEBUG nova.scheduler.client.report [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:24:39 np0005466030 nova_compute[230518]: 2025-10-02 12:24:39.613 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:39 np0005466030 nova_compute[230518]: 2025-10-02 12:24:39.614 2 DEBUG nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:24:39 np0005466030 nova_compute[230518]: 2025-10-02 12:24:39.679 2 DEBUG nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:24:39 np0005466030 nova_compute[230518]: 2025-10-02 12:24:39.680 2 DEBUG nova.network.neutron [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:24:39 np0005466030 nova_compute[230518]: 2025-10-02 12:24:39.710 2 INFO nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:24:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:39.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:39 np0005466030 nova_compute[230518]: 2025-10-02 12:24:39.767 2 DEBUG nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:24:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:39.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:39 np0005466030 nova_compute[230518]: 2025-10-02 12:24:39.908 2 DEBUG nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:24:39 np0005466030 nova_compute[230518]: 2025-10-02 12:24:39.909 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:24:39 np0005466030 nova_compute[230518]: 2025-10-02 12:24:39.910 2 INFO nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Creating image(s)#033[00m
Oct  2 08:24:39 np0005466030 nova_compute[230518]: 2025-10-02 12:24:39.937 2 DEBUG nova.storage.rbd_utils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image 12ae9024-48e3-4894-ac32-41af4e31c223_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:24:39 np0005466030 nova_compute[230518]: 2025-10-02 12:24:39.972 2 DEBUG nova.storage.rbd_utils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image 12ae9024-48e3-4894-ac32-41af4e31c223_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:24:40 np0005466030 nova_compute[230518]: 2025-10-02 12:24:40.091 2 DEBUG nova.storage.rbd_utils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image 12ae9024-48e3-4894-ac32-41af4e31c223_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:24:40 np0005466030 nova_compute[230518]: 2025-10-02 12:24:40.097 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:40 np0005466030 nova_compute[230518]: 2025-10-02 12:24:40.140 2 DEBUG nova.policy [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '51b45ef40bdc499a8409fd2bf3e6a339', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '12dfeaa31a6e4a2481a5332ce3094262', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:24:40 np0005466030 nova_compute[230518]: 2025-10-02 12:24:40.187 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:40 np0005466030 nova_compute[230518]: 2025-10-02 12:24:40.188 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:40 np0005466030 nova_compute[230518]: 2025-10-02 12:24:40.189 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:40 np0005466030 nova_compute[230518]: 2025-10-02 12:24:40.190 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:40 np0005466030 nova_compute[230518]: 2025-10-02 12:24:40.224 2 DEBUG nova.storage.rbd_utils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image 12ae9024-48e3-4894-ac32-41af4e31c223_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:24:40 np0005466030 nova_compute[230518]: 2025-10-02 12:24:40.229 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 12ae9024-48e3-4894-ac32-41af4e31c223_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:41 np0005466030 nova_compute[230518]: 2025-10-02 12:24:41.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:41 np0005466030 nova_compute[230518]: 2025-10-02 12:24:41.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:41 np0005466030 nova_compute[230518]: 2025-10-02 12:24:41.106 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:41 np0005466030 nova_compute[230518]: 2025-10-02 12:24:41.107 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:41 np0005466030 nova_compute[230518]: 2025-10-02 12:24:41.107 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:41 np0005466030 nova_compute[230518]: 2025-10-02 12:24:41.108 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:24:41 np0005466030 nova_compute[230518]: 2025-10-02 12:24:41.109 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:24:41 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1773491671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:24:41 np0005466030 nova_compute[230518]: 2025-10-02 12:24:41.699 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 12ae9024-48e3-4894-ac32-41af4e31c223_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:41.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:41.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:41 np0005466030 nova_compute[230518]: 2025-10-02 12:24:41.791 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:41 np0005466030 nova_compute[230518]: 2025-10-02 12:24:41.871 2 DEBUG nova.storage.rbd_utils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] resizing rbd image 12ae9024-48e3-4894-ac32-41af4e31c223_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:24:42 np0005466030 nova_compute[230518]: 2025-10-02 12:24:42.008 2 DEBUG nova.objects.instance [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lazy-loading 'migration_context' on Instance uuid 12ae9024-48e3-4894-ac32-41af4e31c223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:24:42 np0005466030 nova_compute[230518]: 2025-10-02 12:24:42.054 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:24:42 np0005466030 nova_compute[230518]: 2025-10-02 12:24:42.054 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Ensure instance console log exists: /var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:24:42 np0005466030 nova_compute[230518]: 2025-10-02 12:24:42.055 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:42 np0005466030 nova_compute[230518]: 2025-10-02 12:24:42.055 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:42 np0005466030 nova_compute[230518]: 2025-10-02 12:24:42.055 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:42 np0005466030 nova_compute[230518]: 2025-10-02 12:24:42.150 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:24:42 np0005466030 nova_compute[230518]: 2025-10-02 12:24:42.151 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4621MB free_disk=20.897125244140625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:24:42 np0005466030 nova_compute[230518]: 2025-10-02 12:24:42.151 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:42 np0005466030 nova_compute[230518]: 2025-10-02 12:24:42.152 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:42 np0005466030 nova_compute[230518]: 2025-10-02 12:24:42.248 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 12ae9024-48e3-4894-ac32-41af4e31c223 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:24:42 np0005466030 nova_compute[230518]: 2025-10-02 12:24:42.249 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:24:42 np0005466030 nova_compute[230518]: 2025-10-02 12:24:42.250 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:24:42 np0005466030 nova_compute[230518]: 2025-10-02 12:24:42.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:42 np0005466030 nova_compute[230518]: 2025-10-02 12:24:42.294 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:42 np0005466030 nova_compute[230518]: 2025-10-02 12:24:42.449 2 DEBUG nova.network.neutron [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Successfully created port: ac685902-7a16-4ff8-ac8b-85430ba9f8cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:24:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:24:42 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3235092226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:24:42 np0005466030 nova_compute[230518]: 2025-10-02 12:24:42.743 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:42 np0005466030 nova_compute[230518]: 2025-10-02 12:24:42.750 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:24:43 np0005466030 nova_compute[230518]: 2025-10-02 12:24:43.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:43 np0005466030 nova_compute[230518]: 2025-10-02 12:24:43.712 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:24:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:43.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:43.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:43 np0005466030 nova_compute[230518]: 2025-10-02 12:24:43.780 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:24:43 np0005466030 nova_compute[230518]: 2025-10-02 12:24:43.781 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:44 np0005466030 nova_compute[230518]: 2025-10-02 12:24:44.777 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:44 np0005466030 nova_compute[230518]: 2025-10-02 12:24:44.778 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:44 np0005466030 nova_compute[230518]: 2025-10-02 12:24:44.778 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:44 np0005466030 podman[253484]: 2025-10-02 12:24:44.834641972 +0000 UTC m=+0.076763826 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct  2 08:24:44 np0005466030 podman[253485]: 2025-10-02 12:24:44.84057802 +0000 UTC m=+0.080116283 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:24:45 np0005466030 nova_compute[230518]: 2025-10-02 12:24:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:45 np0005466030 nova_compute[230518]: 2025-10-02 12:24:45.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:24:45 np0005466030 nova_compute[230518]: 2025-10-02 12:24:45.152 2 DEBUG nova.network.neutron [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Successfully updated port: ac685902-7a16-4ff8-ac8b-85430ba9f8cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:24:45 np0005466030 nova_compute[230518]: 2025-10-02 12:24:45.184 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:24:45 np0005466030 nova_compute[230518]: 2025-10-02 12:24:45.184 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquired lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:24:45 np0005466030 nova_compute[230518]: 2025-10-02 12:24:45.184 2 DEBUG nova.network.neutron [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:24:45 np0005466030 nova_compute[230518]: 2025-10-02 12:24:45.566 2 DEBUG nova.network.neutron [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:24:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:45.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:45.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:45 np0005466030 nova_compute[230518]: 2025-10-02 12:24:45.785 2 DEBUG nova.compute.manager [req-47882e26-004f-4298-88fc-be008f606786 req-841da5f9-9625-48c0-b8d3-5599a4fa0215 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received event network-changed-ac685902-7a16-4ff8-ac8b-85430ba9f8cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:24:45 np0005466030 nova_compute[230518]: 2025-10-02 12:24:45.785 2 DEBUG nova.compute.manager [req-47882e26-004f-4298-88fc-be008f606786 req-841da5f9-9625-48c0-b8d3-5599a4fa0215 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Refreshing instance network info cache due to event network-changed-ac685902-7a16-4ff8-ac8b-85430ba9f8cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:24:45 np0005466030 nova_compute[230518]: 2025-10-02 12:24:45.785 2 DEBUG oslo_concurrency.lockutils [req-47882e26-004f-4298-88fc-be008f606786 req-841da5f9-9625-48c0-b8d3-5599a4fa0215 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.086 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.086 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.214 2 DEBUG nova.network.neutron [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Updating instance_info_cache with network_info: [{"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.250 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Releasing lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.251 2 DEBUG nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Instance network_info: |[{"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.251 2 DEBUG oslo_concurrency.lockutils [req-47882e26-004f-4298-88fc-be008f606786 req-841da5f9-9625-48c0-b8d3-5599a4fa0215 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.251 2 DEBUG nova.network.neutron [req-47882e26-004f-4298-88fc-be008f606786 req-841da5f9-9625-48c0-b8d3-5599a4fa0215 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Refreshing network info cache for port ac685902-7a16-4ff8-ac8b-85430ba9f8cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.255 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Start _get_guest_xml network_info=[{"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.264 2 WARNING nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.280 2 DEBUG nova.virt.libvirt.host [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.281 2 DEBUG nova.virt.libvirt.host [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.287 2 DEBUG nova.virt.libvirt.host [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.288 2 DEBUG nova.virt.libvirt.host [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.290 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.291 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.292 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.293 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.294 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.294 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.295 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.296 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.296 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.297 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.297 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.298 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.304 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:24:47 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1523422355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:24:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:47.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.765 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:47.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.792 2 DEBUG nova.storage.rbd_utils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image 12ae9024-48e3-4894-ac32-41af4e31c223_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:24:47 np0005466030 nova_compute[230518]: 2025-10-02 12:24:47.797 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:24:48 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3165033264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.244 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.247 2 DEBUG nova.virt.libvirt.vif [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:24:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-857840689',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-857840689',id=55,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNEu8Mg1mF4AKAnETiPc/1EN0o0yW0N8RDXfeYfe2EYf2XEQAi1u5vSoxbgTCJBIGOu3aJCScfGKzHsqNwZ9VVPhf8HNvzQILfXuoUBQZfVHYTHLisifzGPoXHjQ6TltjQ==',key_name='tempest-keypair-2123530339',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='12dfeaa31a6e4a2481a5332ce3094262',ramdisk_id='',reservation_id='r-92a9mnnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-158673309',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-158673309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:24:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51b45ef40bdc499a8409fd2bf3e6a339',uuid=12ae9024-48e3-4894-ac32-41af4e31c223,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.247 2 DEBUG nova.network.os_vif_util [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Converting VIF {"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.248 2 DEBUG nova.network.os_vif_util [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:09:31,bridge_name='br-int',has_traffic_filtering=True,id=ac685902-7a16-4ff8-ac8b-85430ba9f8cd,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac685902-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.250 2 DEBUG nova.objects.instance [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lazy-loading 'pci_devices' on Instance uuid 12ae9024-48e3-4894-ac32-41af4e31c223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.463 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:24:48 np0005466030 nova_compute[230518]:  <uuid>12ae9024-48e3-4894-ac32-41af4e31c223</uuid>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:  <name>instance-00000037</name>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <nova:name>tempest-UpdateMultiattachVolumeNegativeTest-server-857840689</nova:name>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:24:47</nova:creationTime>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:24:48 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:        <nova:user uuid="51b45ef40bdc499a8409fd2bf3e6a339">tempest-UpdateMultiattachVolumeNegativeTest-158673309-project-member</nova:user>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:        <nova:project uuid="12dfeaa31a6e4a2481a5332ce3094262">tempest-UpdateMultiattachVolumeNegativeTest-158673309</nova:project>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:        <nova:port uuid="ac685902-7a16-4ff8-ac8b-85430ba9f8cd">
Oct  2 08:24:48 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <entry name="serial">12ae9024-48e3-4894-ac32-41af4e31c223</entry>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <entry name="uuid">12ae9024-48e3-4894-ac32-41af4e31c223</entry>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/12ae9024-48e3-4894-ac32-41af4e31c223_disk">
Oct  2 08:24:48 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:24:48 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/12ae9024-48e3-4894-ac32-41af4e31c223_disk.config">
Oct  2 08:24:48 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:24:48 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:2e:09:31"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <target dev="tapac685902-7a"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223/console.log" append="off"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:24:48 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:24:48 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:24:48 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:24:48 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.465 2 DEBUG nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Preparing to wait for external event network-vif-plugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.466 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.467 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.467 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.468 2 DEBUG nova.virt.libvirt.vif [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:24:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-857840689',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-857840689',id=55,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNEu8Mg1mF4AKAnETiPc/1EN0o0yW0N8RDXfeYfe2EYf2XEQAi1u5vSoxbgTCJBIGOu3aJCScfGKzHsqNwZ9VVPhf8HNvzQILfXuoUBQZfVHYTHLisifzGPoXHjQ6TltjQ==',key_name='tempest-keypair-2123530339',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='12dfeaa31a6e4a2481a5332ce3094262',ramdisk_id='',reservation_id='r-92a9mnnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-158673309',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-158673309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:24:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51b45ef40bdc499a8409fd2bf3e6a339',uuid=12ae9024-48e3-4894-ac32-41af4e31c223,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.469 2 DEBUG nova.network.os_vif_util [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Converting VIF {"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.470 2 DEBUG nova.network.os_vif_util [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:09:31,bridge_name='br-int',has_traffic_filtering=True,id=ac685902-7a16-4ff8-ac8b-85430ba9f8cd,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac685902-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.470 2 DEBUG os_vif [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:09:31,bridge_name='br-int',has_traffic_filtering=True,id=ac685902-7a16-4ff8-ac8b-85430ba9f8cd,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac685902-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.477 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.477 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.483 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac685902-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.484 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac685902-7a, col_values=(('external_ids', {'iface-id': 'ac685902-7a16-4ff8-ac8b-85430ba9f8cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:09:31', 'vm-uuid': '12ae9024-48e3-4894-ac32-41af4e31c223'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:48 np0005466030 NetworkManager[44960]: <info>  [1759407888.4872] manager: (tapac685902-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.495 2 INFO os_vif [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:09:31,bridge_name='br-int',has_traffic_filtering=True,id=ac685902-7a16-4ff8-ac8b-85430ba9f8cd,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac685902-7a')#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.563 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.564 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.564 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No VIF found with MAC fa:16:3e:2e:09:31, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.564 2 INFO nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Using config drive#033[00m
Oct  2 08:24:48 np0005466030 nova_compute[230518]: 2025-10-02 12:24:48.594 2 DEBUG nova.storage.rbd_utils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image 12ae9024-48e3-4894-ac32-41af4e31c223_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:24:49 np0005466030 nova_compute[230518]: 2025-10-02 12:24:49.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:24:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:49.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:49.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:49 np0005466030 nova_compute[230518]: 2025-10-02 12:24:49.813 2 INFO nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Creating config drive at /var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223/disk.config
Oct  2 08:24:49 np0005466030 nova_compute[230518]: 2025-10-02 12:24:49.822 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp18hx5xwp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:24:49 np0005466030 nova_compute[230518]: 2025-10-02 12:24:49.956 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp18hx5xwp" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:24:49 np0005466030 nova_compute[230518]: 2025-10-02 12:24:49.987 2 DEBUG nova.storage.rbd_utils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image 12ae9024-48e3-4894-ac32-41af4e31c223_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:24:49 np0005466030 nova_compute[230518]: 2025-10-02 12:24:49.991 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223/disk.config 12ae9024-48e3-4894-ac32-41af4e31c223_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:24:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e212 e212: 3 total, 3 up, 3 in
Oct  2 08:24:50 np0005466030 nova_compute[230518]: 2025-10-02 12:24:50.158 2 DEBUG nova.network.neutron [req-47882e26-004f-4298-88fc-be008f606786 req-841da5f9-9625-48c0-b8d3-5599a4fa0215 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Updated VIF entry in instance network info cache for port ac685902-7a16-4ff8-ac8b-85430ba9f8cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:24:50 np0005466030 nova_compute[230518]: 2025-10-02 12:24:50.159 2 DEBUG nova.network.neutron [req-47882e26-004f-4298-88fc-be008f606786 req-841da5f9-9625-48c0-b8d3-5599a4fa0215 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Updating instance_info_cache with network_info: [{"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:24:50 np0005466030 nova_compute[230518]: 2025-10-02 12:24:50.427 2 DEBUG oslo_concurrency.lockutils [req-47882e26-004f-4298-88fc-be008f606786 req-841da5f9-9625-48c0-b8d3-5599a4fa0215 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:24:50 np0005466030 nova_compute[230518]: 2025-10-02 12:24:50.894 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223/disk.config 12ae9024-48e3-4894-ac32-41af4e31c223_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.903s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:24:50 np0005466030 nova_compute[230518]: 2025-10-02 12:24:50.895 2 INFO nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Deleting local config drive /var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223/disk.config because it was imported into RBD.
Oct  2 08:24:50 np0005466030 kernel: tapac685902-7a: entered promiscuous mode
Oct  2 08:24:50 np0005466030 ovn_controller[129257]: 2025-10-02T12:24:50Z|00254|binding|INFO|Claiming lport ac685902-7a16-4ff8-ac8b-85430ba9f8cd for this chassis.
Oct  2 08:24:50 np0005466030 ovn_controller[129257]: 2025-10-02T12:24:50Z|00255|binding|INFO|ac685902-7a16-4ff8-ac8b-85430ba9f8cd: Claiming fa:16:3e:2e:09:31 10.100.0.10
Oct  2 08:24:50 np0005466030 NetworkManager[44960]: <info>  [1759407890.9594] manager: (tapac685902-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/114)
Oct  2 08:24:50 np0005466030 nova_compute[230518]: 2025-10-02 12:24:50.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:50 np0005466030 nova_compute[230518]: 2025-10-02 12:24:50.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:50 np0005466030 nova_compute[230518]: 2025-10-02 12:24:50.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:50 np0005466030 nova_compute[230518]: 2025-10-02 12:24:50.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:50 np0005466030 nova_compute[230518]: 2025-10-02 12:24:50.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:50 np0005466030 NetworkManager[44960]: <info>  [1759407890.9814] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Oct  2 08:24:50 np0005466030 NetworkManager[44960]: <info>  [1759407890.9828] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Oct  2 08:24:50 np0005466030 systemd-udevd[253656]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:24:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:50.994 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:09:31 10.100.0.10'], port_security=['fa:16:3e:2e:09:31 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '12ae9024-48e3-4894-ac32-41af4e31c223', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34ecce08-278a-4a16-9f99-cfef8148769d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '12dfeaa31a6e4a2481a5332ce3094262', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7b0b284f-6afe-4611-b8db-1ab4d5466651', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7fc2557b-b462-4493-9e4f-7b4266aaba5c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=ac685902-7a16-4ff8-ac8b-85430ba9f8cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:24:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:50.995 138374 INFO neutron.agent.ovn.metadata.agent [-] Port ac685902-7a16-4ff8-ac8b-85430ba9f8cd in datapath 34ecce08-278a-4a16-9f99-cfef8148769d bound to our chassis
Oct  2 08:24:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:50.997 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34ecce08-278a-4a16-9f99-cfef8148769d
Oct  2 08:24:51 np0005466030 NetworkManager[44960]: <info>  [1759407891.0027] device (tapac685902-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:24:51 np0005466030 NetworkManager[44960]: <info>  [1759407891.0041] device (tapac685902-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.009 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0aaf38ab-f828-4e95-ab27-eee6ac9a3891]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.010 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap34ecce08-21 in ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.012 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap34ecce08-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.012 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ee86d684-34ba-4fd5-a3e9-af04492136c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.013 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[45a777b8-93f1-48cb-aa0f-17c4a31fd1a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:24:51 np0005466030 systemd-machined[188247]: New machine qemu-29-instance-00000037.
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.026 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[b022c88c-73ec-4152-a164-95c4b415ee2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:24:51 np0005466030 systemd[1]: Started Virtual Machine qemu-29-instance-00000037.
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.055 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e5ca7b34-a583-43c0-b708-b9e34d8fcc43]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.090 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[445611b7-7870-4e06-91e4-eda3300f0a7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:24:51 np0005466030 NetworkManager[44960]: <info>  [1759407891.1018] manager: (tap34ecce08-20): new Veth device (/org/freedesktop/NetworkManager/Devices/117)
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.100 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[01503bf2-b08c-478b-9e07-98363f9496ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.142 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6e53b694-7038-41ab-9adf-72771922f068]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.144 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a4f607-f0b5-424f-9b11-811de65401dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:24:51 np0005466030 NetworkManager[44960]: <info>  [1759407891.1676] device (tap34ecce08-20): carrier: link connected
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.173 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[110042b2-a806-475d-91c4-a34776757bcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.191 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c7854ca5-d506-4f3c-8d84-7fa112c44248]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34ecce08-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:c7:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575472, 'reachable_time': 21849, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253692, 'error': None, 'target': 'ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.209 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[042e4d88-881d-4692-abaa-d9e8bf20d412]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe78:c771'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575472, 'tstamp': 575472}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253693, 'error': None, 'target': 'ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.228 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f31d32ae-ac2b-45a7-ac78-135a1c5e0509]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34ecce08-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:c7:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575472, 'reachable_time': 21849, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253694, 'error': None, 'target': 'ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.261 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ccb309-57e0-42cd-bbb5-940d16ddd88b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:24:51 np0005466030 nova_compute[230518]: 2025-10-02 12:24:51.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:51 np0005466030 nova_compute[230518]: 2025-10-02 12:24:51.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:51 np0005466030 ovn_controller[129257]: 2025-10-02T12:24:51Z|00256|binding|INFO|Setting lport ac685902-7a16-4ff8-ac8b-85430ba9f8cd ovn-installed in OVS
Oct  2 08:24:51 np0005466030 ovn_controller[129257]: 2025-10-02T12:24:51Z|00257|binding|INFO|Setting lport ac685902-7a16-4ff8-ac8b-85430ba9f8cd up in Southbound
Oct  2 08:24:51 np0005466030 nova_compute[230518]: 2025-10-02 12:24:51.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.339 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4f246cce-1ce6-4822-bbc3-164a6ff10dd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.340 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34ecce08-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.340 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.340 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34ecce08-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:24:51 np0005466030 kernel: tap34ecce08-20: entered promiscuous mode
Oct  2 08:24:51 np0005466030 nova_compute[230518]: 2025-10-02 12:24:51.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:51 np0005466030 NetworkManager[44960]: <info>  [1759407891.3450] manager: (tap34ecce08-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Oct  2 08:24:51 np0005466030 nova_compute[230518]: 2025-10-02 12:24:51.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.347 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34ecce08-20, col_values=(('external_ids', {'iface-id': '8d2c214b-08f8-42fc-8049-39454e430512'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:24:51 np0005466030 nova_compute[230518]: 2025-10-02 12:24:51.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:24:51 np0005466030 ovn_controller[129257]: 2025-10-02T12:24:51Z|00258|binding|INFO|Releasing lport 8d2c214b-08f8-42fc-8049-39454e430512 from this chassis (sb_readonly=1)
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.349 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/34ecce08-278a-4a16-9f99-cfef8148769d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/34ecce08-278a-4a16-9f99-cfef8148769d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.350 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0da5d130-ebe1-4a63-8fc2-a54bb224e426]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.351 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-34ecce08-278a-4a16-9f99-cfef8148769d
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/34ecce08-278a-4a16-9f99-cfef8148769d.pid.haproxy
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 34ecce08-278a-4a16-9f99-cfef8148769d
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:24:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.351 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d', 'env', 'PROCESS_TAG=haproxy-34ecce08-278a-4a16-9f99-cfef8148769d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/34ecce08-278a-4a16-9f99-cfef8148769d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:24:51 np0005466030 nova_compute[230518]: 2025-10-02 12:24:51.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:51 np0005466030 podman[253726]: 2025-10-02 12:24:51.748708661 +0000 UTC m=+0.062484027 container create 597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 08:24:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:51.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:51.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:51 np0005466030 systemd[1]: Started libpod-conmon-597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea.scope.
Oct  2 08:24:51 np0005466030 podman[253726]: 2025-10-02 12:24:51.707766212 +0000 UTC m=+0.021541598 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:24:51 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:24:51 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1a1fb007e30a62109ba083d1b17ba4b02cd8951cd7c632e42029e9dfea28755/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:24:51 np0005466030 podman[253726]: 2025-10-02 12:24:51.85070967 +0000 UTC m=+0.164485076 container init 597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 08:24:51 np0005466030 podman[253726]: 2025-10-02 12:24:51.857091251 +0000 UTC m=+0.170866657 container start 597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:24:51 np0005466030 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[253741]: [NOTICE]   (253745) : New worker (253747) forked
Oct  2 08:24:51 np0005466030 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[253741]: [NOTICE]   (253745) : Loading success.
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.639 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407892.6389408, 12ae9024-48e3-4894-ac32-41af4e31c223 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.640 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] VM Started (Lifecycle Event)#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.672 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.677 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407892.6391075, 12ae9024-48e3-4894-ac32-41af4e31c223 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.677 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.755 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.760 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.764 2 DEBUG nova.compute.manager [req-712cdde5-5217-402e-8b09-d846629f8ae9 req-d64643f4-1925-4844-a2bc-4779148167cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received event network-vif-plugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.765 2 DEBUG oslo_concurrency.lockutils [req-712cdde5-5217-402e-8b09-d846629f8ae9 req-d64643f4-1925-4844-a2bc-4779148167cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.765 2 DEBUG oslo_concurrency.lockutils [req-712cdde5-5217-402e-8b09-d846629f8ae9 req-d64643f4-1925-4844-a2bc-4779148167cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.765 2 DEBUG oslo_concurrency.lockutils [req-712cdde5-5217-402e-8b09-d846629f8ae9 req-d64643f4-1925-4844-a2bc-4779148167cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.766 2 DEBUG nova.compute.manager [req-712cdde5-5217-402e-8b09-d846629f8ae9 req-d64643f4-1925-4844-a2bc-4779148167cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Processing event network-vif-plugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.767 2 DEBUG nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.771 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.774 2 INFO nova.virt.libvirt.driver [-] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Instance spawned successfully.#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.775 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.808 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.809 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407892.7700608, 12ae9024-48e3-4894-ac32-41af4e31c223 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.809 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.842 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.843 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.843 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.844 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.844 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.845 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.880 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.884 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.956 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.978 2 INFO nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Took 13.07 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:24:52 np0005466030 nova_compute[230518]: 2025-10-02 12:24:52.978 2 DEBUG nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:24:53 np0005466030 nova_compute[230518]: 2025-10-02 12:24:53.093 2 INFO nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Took 14.35 seconds to build instance.#033[00m
Oct  2 08:24:53 np0005466030 nova_compute[230518]: 2025-10-02 12:24:53.118 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:53 np0005466030 nova_compute[230518]: 2025-10-02 12:24:53.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:24:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:53.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:24:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:53.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:55 np0005466030 nova_compute[230518]: 2025-10-02 12:24:55.052 2 DEBUG nova.compute.manager [req-381d48c8-b98c-4900-bda7-414137664971 req-734342e9-6092-4e12-adad-20acc7a478fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received event network-vif-plugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:24:55 np0005466030 nova_compute[230518]: 2025-10-02 12:24:55.053 2 DEBUG oslo_concurrency.lockutils [req-381d48c8-b98c-4900-bda7-414137664971 req-734342e9-6092-4e12-adad-20acc7a478fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:24:55 np0005466030 nova_compute[230518]: 2025-10-02 12:24:55.053 2 DEBUG oslo_concurrency.lockutils [req-381d48c8-b98c-4900-bda7-414137664971 req-734342e9-6092-4e12-adad-20acc7a478fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:24:55 np0005466030 nova_compute[230518]: 2025-10-02 12:24:55.054 2 DEBUG oslo_concurrency.lockutils [req-381d48c8-b98c-4900-bda7-414137664971 req-734342e9-6092-4e12-adad-20acc7a478fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:24:55 np0005466030 nova_compute[230518]: 2025-10-02 12:24:55.054 2 DEBUG nova.compute.manager [req-381d48c8-b98c-4900-bda7-414137664971 req-734342e9-6092-4e12-adad-20acc7a478fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] No waiting events found dispatching network-vif-plugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:24:55 np0005466030 nova_compute[230518]: 2025-10-02 12:24:55.055 2 WARNING nova.compute.manager [req-381d48c8-b98c-4900-bda7-414137664971 req-734342e9-6092-4e12-adad-20acc7a478fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received unexpected event network-vif-plugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd for instance with vm_state active and task_state None.#033[00m
Oct  2 08:24:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:55.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:55.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:57 np0005466030 nova_compute[230518]: 2025-10-02 12:24:57.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:57.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:57.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 e213: 3 total, 3 up, 3 in
Oct  2 08:24:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:24:58 np0005466030 nova_compute[230518]: 2025-10-02 12:24:58.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:24:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:59.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:24:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:24:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:24:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:59.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:01 np0005466030 nova_compute[230518]: 2025-10-02 12:25:01.208 2 DEBUG nova.compute.manager [req-76701482-409c-42c5-aa6b-c3a71dda2b3a req-6bb58b89-a0f3-446f-a6da-d12f58752e96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received event network-changed-ac685902-7a16-4ff8-ac8b-85430ba9f8cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:25:01 np0005466030 nova_compute[230518]: 2025-10-02 12:25:01.208 2 DEBUG nova.compute.manager [req-76701482-409c-42c5-aa6b-c3a71dda2b3a req-6bb58b89-a0f3-446f-a6da-d12f58752e96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Refreshing instance network info cache due to event network-changed-ac685902-7a16-4ff8-ac8b-85430ba9f8cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:25:01 np0005466030 nova_compute[230518]: 2025-10-02 12:25:01.209 2 DEBUG oslo_concurrency.lockutils [req-76701482-409c-42c5-aa6b-c3a71dda2b3a req-6bb58b89-a0f3-446f-a6da-d12f58752e96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:25:01 np0005466030 nova_compute[230518]: 2025-10-02 12:25:01.209 2 DEBUG oslo_concurrency.lockutils [req-76701482-409c-42c5-aa6b-c3a71dda2b3a req-6bb58b89-a0f3-446f-a6da-d12f58752e96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:25:01 np0005466030 nova_compute[230518]: 2025-10-02 12:25:01.209 2 DEBUG nova.network.neutron [req-76701482-409c-42c5-aa6b-c3a71dda2b3a req-6bb58b89-a0f3-446f-a6da-d12f58752e96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Refreshing network info cache for port ac685902-7a16-4ff8-ac8b-85430ba9f8cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:25:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:01.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:01.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:02 np0005466030 nova_compute[230518]: 2025-10-02 12:25:02.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:03 np0005466030 nova_compute[230518]: 2025-10-02 12:25:03.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:03.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:03.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:04 np0005466030 nova_compute[230518]: 2025-10-02 12:25:04.664 2 DEBUG nova.network.neutron [req-76701482-409c-42c5-aa6b-c3a71dda2b3a req-6bb58b89-a0f3-446f-a6da-d12f58752e96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Updated VIF entry in instance network info cache for port ac685902-7a16-4ff8-ac8b-85430ba9f8cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:25:04 np0005466030 nova_compute[230518]: 2025-10-02 12:25:04.665 2 DEBUG nova.network.neutron [req-76701482-409c-42c5-aa6b-c3a71dda2b3a req-6bb58b89-a0f3-446f-a6da-d12f58752e96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Updating instance_info_cache with network_info: [{"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:25:04 np0005466030 nova_compute[230518]: 2025-10-02 12:25:04.695 2 DEBUG oslo_concurrency.lockutils [req-76701482-409c-42c5-aa6b-c3a71dda2b3a req-6bb58b89-a0f3-446f-a6da-d12f58752e96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:25:05 np0005466030 ovn_controller[129257]: 2025-10-02T12:25:05Z|00259|binding|INFO|Releasing lport 8d2c214b-08f8-42fc-8049-39454e430512 from this chassis (sb_readonly=0)
Oct  2 08:25:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:05.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:05.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:05 np0005466030 nova_compute[230518]: 2025-10-02 12:25:05.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:06 np0005466030 podman[253799]: 2025-10-02 12:25:06.863801736 +0000 UTC m=+0.105829881 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct  2 08:25:07 np0005466030 nova_compute[230518]: 2025-10-02 12:25:07.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:07 np0005466030 ovn_controller[129257]: 2025-10-02T12:25:07Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:09:31 10.100.0.10
Oct  2 08:25:07 np0005466030 ovn_controller[129257]: 2025-10-02T12:25:07Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:09:31 10.100.0.10
Oct  2 08:25:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:07.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:07.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:08 np0005466030 nova_compute[230518]: 2025-10-02 12:25:08.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:08 np0005466030 podman[253816]: 2025-10-02 12:25:08.848210337 +0000 UTC m=+0.101471704 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:25:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:09.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:09.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:11.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:11.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:12 np0005466030 nova_compute[230518]: 2025-10-02 12:25:12.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:13 np0005466030 nova_compute[230518]: 2025-10-02 12:25:13.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:25:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:13.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:25:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:13.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:15.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:15 np0005466030 podman[253844]: 2025-10-02 12:25:15.811904828 +0000 UTC m=+0.055087885 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:25:15 np0005466030 podman[253843]: 2025-10-02 12:25:15.812400733 +0000 UTC m=+0.061187956 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:25:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:25:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:15.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:25:17 np0005466030 nova_compute[230518]: 2025-10-02 12:25:17.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:25:17Z|00260|binding|INFO|Releasing lport 8d2c214b-08f8-42fc-8049-39454e430512 from this chassis (sb_readonly=0)
Oct  2 08:25:17 np0005466030 nova_compute[230518]: 2025-10-02 12:25:17.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:17.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:17.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:18 np0005466030 nova_compute[230518]: 2025-10-02 12:25:18.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:18 np0005466030 nova_compute[230518]: 2025-10-02 12:25:18.997 2 DEBUG oslo_concurrency.lockutils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:18 np0005466030 nova_compute[230518]: 2025-10-02 12:25:18.998 2 DEBUG oslo_concurrency.lockutils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:19 np0005466030 nova_compute[230518]: 2025-10-02 12:25:19.312 2 DEBUG nova.objects.instance [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lazy-loading 'flavor' on Instance uuid 12ae9024-48e3-4894-ac32-41af4e31c223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:19 np0005466030 nova_compute[230518]: 2025-10-02 12:25:19.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:19.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:19 np0005466030 nova_compute[230518]: 2025-10-02 12:25:19.831 2 DEBUG oslo_concurrency.lockutils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:19.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:20 np0005466030 nova_compute[230518]: 2025-10-02 12:25:20.871 2 DEBUG oslo_concurrency.lockutils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:20 np0005466030 nova_compute[230518]: 2025-10-02 12:25:20.872 2 DEBUG oslo_concurrency.lockutils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:20 np0005466030 nova_compute[230518]: 2025-10-02 12:25:20.872 2 INFO nova.compute.manager [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Attaching volume 5957fb80-298a-4379-a0ba-fde86e2113d0 to /dev/vdb#033[00m
Oct  2 08:25:21 np0005466030 nova_compute[230518]: 2025-10-02 12:25:21.304 2 DEBUG os_brick.utils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:25:21 np0005466030 nova_compute[230518]: 2025-10-02 12:25:21.306 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:21 np0005466030 nova_compute[230518]: 2025-10-02 12:25:21.324 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:21 np0005466030 nova_compute[230518]: 2025-10-02 12:25:21.325 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[3e100502-0bf3-41d3-9e34-a8df172827d5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:21 np0005466030 nova_compute[230518]: 2025-10-02 12:25:21.326 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:21 np0005466030 nova_compute[230518]: 2025-10-02 12:25:21.340 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:21 np0005466030 nova_compute[230518]: 2025-10-02 12:25:21.340 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[39d68e2b-f7a8-46c8-a5b2-293a97f69079]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:21 np0005466030 nova_compute[230518]: 2025-10-02 12:25:21.343 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:21 np0005466030 nova_compute[230518]: 2025-10-02 12:25:21.356 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:21 np0005466030 nova_compute[230518]: 2025-10-02 12:25:21.357 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[049edf27-8f24-4a51-8d6c-f13c1c005210]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:21 np0005466030 nova_compute[230518]: 2025-10-02 12:25:21.359 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[95288274-67dd-4c6d-b7d1-d849a0040acc]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:21 np0005466030 nova_compute[230518]: 2025-10-02 12:25:21.360 2 DEBUG oslo_concurrency.processutils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:21 np0005466030 nova_compute[230518]: 2025-10-02 12:25:21.405 2 DEBUG oslo_concurrency.processutils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "nvme version" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:21 np0005466030 nova_compute[230518]: 2025-10-02 12:25:21.408 2 DEBUG os_brick.initiator.connectors.lightos [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:25:21 np0005466030 nova_compute[230518]: 2025-10-02 12:25:21.409 2 DEBUG os_brick.initiator.connectors.lightos [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:25:21 np0005466030 nova_compute[230518]: 2025-10-02 12:25:21.410 2 DEBUG os_brick.initiator.connectors.lightos [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:25:21 np0005466030 nova_compute[230518]: 2025-10-02 12:25:21.410 2 DEBUG os_brick.utils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] <== get_connector_properties: return (104ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:25:21 np0005466030 nova_compute[230518]: 2025-10-02 12:25:21.411 2 DEBUG nova.virt.block_device [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Updating existing volume attachment record: 220f75d1-7597-411c-a311-ebd7f4b718f8 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:25:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:21.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:21.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:22 np0005466030 nova_compute[230518]: 2025-10-02 12:25:22.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:23 np0005466030 nova_compute[230518]: 2025-10-02 12:25:23.004 2 DEBUG nova.objects.instance [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lazy-loading 'flavor' on Instance uuid 12ae9024-48e3-4894-ac32-41af4e31c223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:23 np0005466030 nova_compute[230518]: 2025-10-02 12:25:23.086 2 DEBUG nova.virt.libvirt.driver [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Attempting to attach volume 5957fb80-298a-4379-a0ba-fde86e2113d0 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 08:25:23 np0005466030 nova_compute[230518]: 2025-10-02 12:25:23.089 2 DEBUG nova.virt.libvirt.guest [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 08:25:23 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:25:23 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-5957fb80-298a-4379-a0ba-fde86e2113d0">
Oct  2 08:25:23 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:25:23 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:25:23 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:25:23 np0005466030 nova_compute[230518]:  </source>
Oct  2 08:25:23 np0005466030 nova_compute[230518]:  <auth username="openstack">
Oct  2 08:25:23 np0005466030 nova_compute[230518]:    <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:25:23 np0005466030 nova_compute[230518]:  </auth>
Oct  2 08:25:23 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:25:23 np0005466030 nova_compute[230518]:  <serial>5957fb80-298a-4379-a0ba-fde86e2113d0</serial>
Oct  2 08:25:23 np0005466030 nova_compute[230518]:  <shareable/>
Oct  2 08:25:23 np0005466030 nova_compute[230518]: </disk>
Oct  2 08:25:23 np0005466030 nova_compute[230518]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:25:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:23 np0005466030 nova_compute[230518]: 2025-10-02 12:25:23.462 2 DEBUG nova.virt.libvirt.driver [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:25:23 np0005466030 nova_compute[230518]: 2025-10-02 12:25:23.463 2 DEBUG nova.virt.libvirt.driver [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:25:23 np0005466030 nova_compute[230518]: 2025-10-02 12:25:23.463 2 DEBUG nova.virt.libvirt.driver [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:25:23 np0005466030 nova_compute[230518]: 2025-10-02 12:25:23.464 2 DEBUG nova.virt.libvirt.driver [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No VIF found with MAC fa:16:3e:2e:09:31, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:25:23 np0005466030 nova_compute[230518]: 2025-10-02 12:25:23.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:23.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:23.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:24 np0005466030 nova_compute[230518]: 2025-10-02 12:25:24.590 2 DEBUG oslo_concurrency.lockutils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:25.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:25.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:25.923 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:25.924 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:25.924 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:27 np0005466030 nova_compute[230518]: 2025-10-02 12:25:27.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:27.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:27.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:27 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:25:27 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:25:27 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:25:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:28 np0005466030 nova_compute[230518]: 2025-10-02 12:25:28.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:29.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:29.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:31.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:31.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:32 np0005466030 nova_compute[230518]: 2025-10-02 12:25:32.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:33 np0005466030 nova_compute[230518]: 2025-10-02 12:25:33.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:33.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:33.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:34 np0005466030 nova_compute[230518]: 2025-10-02 12:25:34.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:35.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:35.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:37 np0005466030 nova_compute[230518]: 2025-10-02 12:25:37.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:37 np0005466030 nova_compute[230518]: 2025-10-02 12:25:37.755 2 DEBUG oslo_concurrency.lockutils [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:37 np0005466030 nova_compute[230518]: 2025-10-02 12:25:37.755 2 DEBUG oslo_concurrency.lockutils [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:37 np0005466030 nova_compute[230518]: 2025-10-02 12:25:37.784 2 INFO nova.compute.manager [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Detaching volume 5957fb80-298a-4379-a0ba-fde86e2113d0#033[00m
Oct  2 08:25:37 np0005466030 podman[254041]: 2025-10-02 12:25:37.815426545 +0000 UTC m=+0.064717456 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:25:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:37.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:37.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:38 np0005466030 nova_compute[230518]: 2025-10-02 12:25:38.037 2 INFO nova.virt.block_device [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Attempting to driver detach volume 5957fb80-298a-4379-a0ba-fde86e2113d0 from mountpoint /dev/vdb#033[00m
Oct  2 08:25:38 np0005466030 nova_compute[230518]: 2025-10-02 12:25:38.047 2 DEBUG nova.virt.libvirt.driver [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Attempting to detach device vdb from instance 12ae9024-48e3-4894-ac32-41af4e31c223 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:25:38 np0005466030 nova_compute[230518]: 2025-10-02 12:25:38.048 2 DEBUG nova.virt.libvirt.guest [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:25:38 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:25:38 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-5957fb80-298a-4379-a0ba-fde86e2113d0">
Oct  2 08:25:38 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:25:38 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:25:38 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:25:38 np0005466030 nova_compute[230518]:  </source>
Oct  2 08:25:38 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:25:38 np0005466030 nova_compute[230518]:  <serial>5957fb80-298a-4379-a0ba-fde86e2113d0</serial>
Oct  2 08:25:38 np0005466030 nova_compute[230518]:  <shareable/>
Oct  2 08:25:38 np0005466030 nova_compute[230518]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 08:25:38 np0005466030 nova_compute[230518]: </disk>
Oct  2 08:25:38 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:25:38 np0005466030 nova_compute[230518]: 2025-10-02 12:25:38.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:38 np0005466030 nova_compute[230518]: 2025-10-02 12:25:38.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:25:38 np0005466030 nova_compute[230518]: 2025-10-02 12:25:38.055 2 INFO nova.virt.libvirt.driver [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Successfully detached device vdb from instance 12ae9024-48e3-4894-ac32-41af4e31c223 from the persistent domain config.#033[00m
Oct  2 08:25:38 np0005466030 nova_compute[230518]: 2025-10-02 12:25:38.056 2 DEBUG nova.virt.libvirt.driver [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 12ae9024-48e3-4894-ac32-41af4e31c223 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 08:25:38 np0005466030 nova_compute[230518]: 2025-10-02 12:25:38.056 2 DEBUG nova.virt.libvirt.guest [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:25:38 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:25:38 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-5957fb80-298a-4379-a0ba-fde86e2113d0">
Oct  2 08:25:38 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:25:38 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:25:38 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:25:38 np0005466030 nova_compute[230518]:  </source>
Oct  2 08:25:38 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:25:38 np0005466030 nova_compute[230518]:  <serial>5957fb80-298a-4379-a0ba-fde86e2113d0</serial>
Oct  2 08:25:38 np0005466030 nova_compute[230518]:  <shareable/>
Oct  2 08:25:38 np0005466030 nova_compute[230518]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 08:25:38 np0005466030 nova_compute[230518]: </disk>
Oct  2 08:25:38 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:25:38 np0005466030 nova_compute[230518]: 2025-10-02 12:25:38.073 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:38 np0005466030 nova_compute[230518]: 2025-10-02 12:25:38.165 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759407938.1650586, 12ae9024-48e3-4894-ac32-41af4e31c223 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 08:25:38 np0005466030 nova_compute[230518]: 2025-10-02 12:25:38.168 2 DEBUG nova.virt.libvirt.driver [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 12ae9024-48e3-4894-ac32-41af4e31c223 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 08:25:38 np0005466030 nova_compute[230518]: 2025-10-02 12:25:38.171 2 INFO nova.virt.libvirt.driver [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Successfully detached device vdb from instance 12ae9024-48e3-4894-ac32-41af4e31c223 from the live domain config.#033[00m
Oct  2 08:25:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:38 np0005466030 nova_compute[230518]: 2025-10-02 12:25:38.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:38 np0005466030 nova_compute[230518]: 2025-10-02 12:25:38.684 2 DEBUG nova.objects.instance [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lazy-loading 'flavor' on Instance uuid 12ae9024-48e3-4894-ac32-41af4e31c223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:38 np0005466030 nova_compute[230518]: 2025-10-02 12:25:38.759 2 DEBUG oslo_concurrency.lockutils [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:39 np0005466030 podman[254086]: 2025-10-02 12:25:39.212822825 +0000 UTC m=+0.146992686 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:25:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:25:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 21K writes, 82K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 21K writes, 7216 syncs, 2.97 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9633 writes, 36K keys, 9633 commit groups, 1.0 writes per commit group, ingest: 33.79 MB, 0.06 MB/s#012Interval WAL: 9633 writes, 3951 syncs, 2.44 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 08:25:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:25:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:25:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:39.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:39.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:41 np0005466030 nova_compute[230518]: 2025-10-02 12:25:41.087 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:41 np0005466030 nova_compute[230518]: 2025-10-02 12:25:41.191 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:41 np0005466030 nova_compute[230518]: 2025-10-02 12:25:41.192 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:41 np0005466030 nova_compute[230518]: 2025-10-02 12:25:41.193 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:41 np0005466030 nova_compute[230518]: 2025-10-02 12:25:41.193 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:25:41 np0005466030 nova_compute[230518]: 2025-10-02 12:25:41.194 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:25:41 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3840614406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:25:41 np0005466030 nova_compute[230518]: 2025-10-02 12:25:41.698 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:25:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:41.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:25:41 np0005466030 nova_compute[230518]: 2025-10-02 12:25:41.868 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:25:41 np0005466030 nova_compute[230518]: 2025-10-02 12:25:41.868 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:25:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:41.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:42 np0005466030 nova_compute[230518]: 2025-10-02 12:25:42.028 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:25:42 np0005466030 nova_compute[230518]: 2025-10-02 12:25:42.029 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4471MB free_disk=20.89710235595703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:25:42 np0005466030 nova_compute[230518]: 2025-10-02 12:25:42.030 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:42 np0005466030 nova_compute[230518]: 2025-10-02 12:25:42.030 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:42 np0005466030 nova_compute[230518]: 2025-10-02 12:25:42.223 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 12ae9024-48e3-4894-ac32-41af4e31c223 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:25:42 np0005466030 nova_compute[230518]: 2025-10-02 12:25:42.224 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:25:42 np0005466030 nova_compute[230518]: 2025-10-02 12:25:42.224 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:25:42 np0005466030 nova_compute[230518]: 2025-10-02 12:25:42.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:42.399 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:25:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:42.400 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:25:42 np0005466030 nova_compute[230518]: 2025-10-02 12:25:42.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:42.401 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:25:42 np0005466030 nova_compute[230518]: 2025-10-02 12:25:42.610 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:25:42 np0005466030 nova_compute[230518]: 2025-10-02 12:25:42.715 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:25:42 np0005466030 nova_compute[230518]: 2025-10-02 12:25:42.716 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:25:42 np0005466030 nova_compute[230518]: 2025-10-02 12:25:42.753 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:25:42 np0005466030 nova_compute[230518]: 2025-10-02 12:25:42.801 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:25:42 np0005466030 nova_compute[230518]: 2025-10-02 12:25:42.851 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:25:43 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1350195972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:25:43 np0005466030 nova_compute[230518]: 2025-10-02 12:25:43.293 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:43 np0005466030 nova_compute[230518]: 2025-10-02 12:25:43.298 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:25:43 np0005466030 nova_compute[230518]: 2025-10-02 12:25:43.540 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:25:43 np0005466030 nova_compute[230518]: 2025-10-02 12:25:43.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:43.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:43.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:44 np0005466030 nova_compute[230518]: 2025-10-02 12:25:44.072 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:25:44 np0005466030 nova_compute[230518]: 2025-10-02 12:25:44.073 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:44 np0005466030 nova_compute[230518]: 2025-10-02 12:25:44.840 2 DEBUG oslo_concurrency.lockutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:44 np0005466030 nova_compute[230518]: 2025-10-02 12:25:44.841 2 DEBUG oslo_concurrency.lockutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:44 np0005466030 nova_compute[230518]: 2025-10-02 12:25:44.841 2 DEBUG oslo_concurrency.lockutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:44 np0005466030 nova_compute[230518]: 2025-10-02 12:25:44.841 2 DEBUG oslo_concurrency.lockutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:44 np0005466030 nova_compute[230518]: 2025-10-02 12:25:44.842 2 DEBUG oslo_concurrency.lockutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:44 np0005466030 nova_compute[230518]: 2025-10-02 12:25:44.843 2 INFO nova.compute.manager [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Terminating instance#033[00m
Oct  2 08:25:44 np0005466030 nova_compute[230518]: 2025-10-02 12:25:44.845 2 DEBUG nova.compute.manager [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:25:44 np0005466030 kernel: tapac685902-7a (unregistering): left promiscuous mode
Oct  2 08:25:44 np0005466030 NetworkManager[44960]: <info>  [1759407944.9080] device (tapac685902-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:25:44 np0005466030 nova_compute[230518]: 2025-10-02 12:25:44.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:44 np0005466030 ovn_controller[129257]: 2025-10-02T12:25:44Z|00261|binding|INFO|Releasing lport ac685902-7a16-4ff8-ac8b-85430ba9f8cd from this chassis (sb_readonly=0)
Oct  2 08:25:44 np0005466030 ovn_controller[129257]: 2025-10-02T12:25:44Z|00262|binding|INFO|Setting lport ac685902-7a16-4ff8-ac8b-85430ba9f8cd down in Southbound
Oct  2 08:25:44 np0005466030 ovn_controller[129257]: 2025-10-02T12:25:44Z|00263|binding|INFO|Removing iface tapac685902-7a ovn-installed in OVS
Oct  2 08:25:44 np0005466030 nova_compute[230518]: 2025-10-02 12:25:44.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:44 np0005466030 nova_compute[230518]: 2025-10-02 12:25:44.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:44 np0005466030 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000037.scope: Deactivated successfully.
Oct  2 08:25:44 np0005466030 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000037.scope: Consumed 16.459s CPU time.
Oct  2 08:25:44 np0005466030 systemd-machined[188247]: Machine qemu-29-instance-00000037 terminated.
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.034 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.034 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.035 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.035 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:25:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.079 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:09:31 10.100.0.10'], port_security=['fa:16:3e:2e:09:31 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '12ae9024-48e3-4894-ac32-41af4e31c223', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34ecce08-278a-4a16-9f99-cfef8148769d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '12dfeaa31a6e4a2481a5332ce3094262', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7b0b284f-6afe-4611-b8db-1ab4d5466651', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7fc2557b-b462-4493-9e4f-7b4266aaba5c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=ac685902-7a16-4ff8-ac8b-85430ba9f8cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:25:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.081 138374 INFO neutron.agent.ovn.metadata.agent [-] Port ac685902-7a16-4ff8-ac8b-85430ba9f8cd in datapath 34ecce08-278a-4a16-9f99-cfef8148769d unbound from our chassis#033[00m
Oct  2 08:25:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.085 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34ecce08-278a-4a16-9f99-cfef8148769d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:25:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.088 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[47e381a8-ab00-47ff-91c6-5a1a7b2e150d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.087 2 INFO nova.virt.libvirt.driver [-] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Instance destroyed successfully.#033[00m
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.088 2 DEBUG nova.objects.instance [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lazy-loading 'resources' on Instance uuid 12ae9024-48e3-4894-ac32-41af4e31c223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.089 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d namespace which is not needed anymore#033[00m
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.140 2 DEBUG nova.virt.libvirt.vif [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:24:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-857840689',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-857840689',id=55,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNEu8Mg1mF4AKAnETiPc/1EN0o0yW0N8RDXfeYfe2EYf2XEQAi1u5vSoxbgTCJBIGOu3aJCScfGKzHsqNwZ9VVPhf8HNvzQILfXuoUBQZfVHYTHLisifzGPoXHjQ6TltjQ==',key_name='tempest-keypair-2123530339',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:24:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='12dfeaa31a6e4a2481a5332ce3094262',ramdisk_id='',reservation_id='r-92a9mnnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-158673309',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-158673309-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:24:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51b45ef40bdc499a8409fd2bf3e6a339',uuid=12ae9024-48e3-4894-ac32-41af4e31c223,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.141 2 DEBUG nova.network.os_vif_util [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Converting VIF {"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.142 2 DEBUG nova.network.os_vif_util [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2e:09:31,bridge_name='br-int',has_traffic_filtering=True,id=ac685902-7a16-4ff8-ac8b-85430ba9f8cd,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac685902-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.142 2 DEBUG os_vif [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:09:31,bridge_name='br-int',has_traffic_filtering=True,id=ac685902-7a16-4ff8-ac8b-85430ba9f8cd,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac685902-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.145 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac685902-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.149 2 INFO os_vif [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:09:31,bridge_name='br-int',has_traffic_filtering=True,id=ac685902-7a16-4ff8-ac8b-85430ba9f8cd,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac685902-7a')#033[00m
Oct  2 08:25:45 np0005466030 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[253741]: [NOTICE]   (253745) : haproxy version is 2.8.14-c23fe91
Oct  2 08:25:45 np0005466030 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[253741]: [NOTICE]   (253745) : path to executable is /usr/sbin/haproxy
Oct  2 08:25:45 np0005466030 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[253741]: [WARNING]  (253745) : Exiting Master process...
Oct  2 08:25:45 np0005466030 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[253741]: [ALERT]    (253745) : Current worker (253747) exited with code 143 (Terminated)
Oct  2 08:25:45 np0005466030 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[253741]: [WARNING]  (253745) : All workers exited. Exiting... (0)
Oct  2 08:25:45 np0005466030 systemd[1]: libpod-597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea.scope: Deactivated successfully.
Oct  2 08:25:45 np0005466030 podman[254234]: 2025-10-02 12:25:45.255168271 +0000 UTC m=+0.053503205 container died 597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:25:45 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea-userdata-shm.mount: Deactivated successfully.
Oct  2 08:25:45 np0005466030 systemd[1]: var-lib-containers-storage-overlay-b1a1fb007e30a62109ba083d1b17ba4b02cd8951cd7c632e42029e9dfea28755-merged.mount: Deactivated successfully.
Oct  2 08:25:45 np0005466030 podman[254234]: 2025-10-02 12:25:45.29740012 +0000 UTC m=+0.095735044 container cleanup 597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:25:45 np0005466030 systemd[1]: libpod-conmon-597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea.scope: Deactivated successfully.
Oct  2 08:25:45 np0005466030 podman[254267]: 2025-10-02 12:25:45.36002565 +0000 UTC m=+0.041202317 container remove 597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:25:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.365 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[79c31022-9a68-40dc-9394-3327f6757929]: (4, ('Thu Oct  2 12:25:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d (597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea)\n597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea\nThu Oct  2 12:25:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d (597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea)\n597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.367 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b808c221-f729-4c76-beac-f16be1cb1da4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.368 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34ecce08-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:25:45 np0005466030 kernel: tap34ecce08-20: left promiscuous mode
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.375 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f139053f-e3e3-4339-97c0-f60d3c0b48ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.405 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5a02cb80-b21f-4334-8943-fae62d06b0e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.409 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a6fec14c-119a-4c1b-b873-a00b3bb6557d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.428 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[867e46b4-0103-4b0b-8b57-25fc6448ea4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575464, 'reachable_time': 35895, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254284, 'error': None, 'target': 'ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:45 np0005466030 systemd[1]: run-netns-ovnmeta\x2d34ecce08\x2d278a\x2d4a16\x2d9f99\x2dcfef8148769d.mount: Deactivated successfully.
Oct  2 08:25:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.432 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:25:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.433 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[b4fe82c7-2fea-43ed-aa52-481bb515ffe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.777 2 INFO nova.virt.libvirt.driver [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Deleting instance files /var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223_del#033[00m
Oct  2 08:25:45 np0005466030 nova_compute[230518]: 2025-10-02 12:25:45.778 2 INFO nova.virt.libvirt.driver [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Deletion of /var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223_del complete#033[00m
Oct  2 08:25:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:45.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:45.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:46 np0005466030 nova_compute[230518]: 2025-10-02 12:25:46.227 2 INFO nova.compute.manager [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Took 1.38 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:25:46 np0005466030 nova_compute[230518]: 2025-10-02 12:25:46.228 2 DEBUG oslo.service.loopingcall [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:25:46 np0005466030 nova_compute[230518]: 2025-10-02 12:25:46.228 2 DEBUG nova.compute.manager [-] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:25:46 np0005466030 nova_compute[230518]: 2025-10-02 12:25:46.229 2 DEBUG nova.network.neutron [-] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:25:46 np0005466030 nova_compute[230518]: 2025-10-02 12:25:46.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:46 np0005466030 nova_compute[230518]: 2025-10-02 12:25:46.716 2 DEBUG nova.compute.manager [req-4cd8cbd4-ea2b-49ac-807b-242a987e16d6 req-b7c3ece5-5086-4615-ac12-5c01cf8ff342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received event network-vif-unplugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:25:46 np0005466030 nova_compute[230518]: 2025-10-02 12:25:46.716 2 DEBUG oslo_concurrency.lockutils [req-4cd8cbd4-ea2b-49ac-807b-242a987e16d6 req-b7c3ece5-5086-4615-ac12-5c01cf8ff342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:46 np0005466030 nova_compute[230518]: 2025-10-02 12:25:46.717 2 DEBUG oslo_concurrency.lockutils [req-4cd8cbd4-ea2b-49ac-807b-242a987e16d6 req-b7c3ece5-5086-4615-ac12-5c01cf8ff342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:46 np0005466030 nova_compute[230518]: 2025-10-02 12:25:46.718 2 DEBUG oslo_concurrency.lockutils [req-4cd8cbd4-ea2b-49ac-807b-242a987e16d6 req-b7c3ece5-5086-4615-ac12-5c01cf8ff342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:46 np0005466030 nova_compute[230518]: 2025-10-02 12:25:46.718 2 DEBUG nova.compute.manager [req-4cd8cbd4-ea2b-49ac-807b-242a987e16d6 req-b7c3ece5-5086-4615-ac12-5c01cf8ff342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] No waiting events found dispatching network-vif-unplugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:25:46 np0005466030 nova_compute[230518]: 2025-10-02 12:25:46.719 2 DEBUG nova.compute.manager [req-4cd8cbd4-ea2b-49ac-807b-242a987e16d6 req-b7c3ece5-5086-4615-ac12-5c01cf8ff342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received event network-vif-unplugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:25:46 np0005466030 podman[254286]: 2025-10-02 12:25:46.814026543 +0000 UTC m=+0.064472890 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:25:46 np0005466030 podman[254287]: 2025-10-02 12:25:46.823076387 +0000 UTC m=+0.070428707 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:25:47 np0005466030 nova_compute[230518]: 2025-10-02 12:25:47.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:47.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:47.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:48 np0005466030 nova_compute[230518]: 2025-10-02 12:25:48.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:48 np0005466030 nova_compute[230518]: 2025-10-02 12:25:48.102 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:48 np0005466030 nova_compute[230518]: 2025-10-02 12:25:48.525 2 DEBUG nova.network.neutron [-] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:25:48 np0005466030 nova_compute[230518]: 2025-10-02 12:25:48.560 2 INFO nova.compute.manager [-] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Took 2.33 seconds to deallocate network for instance.#033[00m
Oct  2 08:25:48 np0005466030 nova_compute[230518]: 2025-10-02 12:25:48.670 2 DEBUG oslo_concurrency.lockutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:48 np0005466030 nova_compute[230518]: 2025-10-02 12:25:48.671 2 DEBUG oslo_concurrency.lockutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:48 np0005466030 nova_compute[230518]: 2025-10-02 12:25:48.749 2 DEBUG oslo_concurrency.processutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:25:48 np0005466030 nova_compute[230518]: 2025-10-02 12:25:48.979 2 DEBUG nova.compute.manager [req-a9963c2f-1ad0-49bc-b39d-78d3291b4216 req-aed2599f-c31c-44f3-93f0-c2ab91420ae4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received event network-vif-deleted-ac685902-7a16-4ff8-ac8b-85430ba9f8cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:25:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:25:49 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2837588613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.227 2 DEBUG oslo_concurrency.processutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.234 2 DEBUG nova.compute.provider_tree [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.259 2 DEBUG nova.scheduler.client.report [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.279 2 DEBUG nova.compute.manager [req-42b6eda8-5789-4a61-8c09-524f3257f5ed req-cb4d005a-79b4-4be1-8414-574a6faf8460 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received event network-vif-plugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.280 2 DEBUG oslo_concurrency.lockutils [req-42b6eda8-5789-4a61-8c09-524f3257f5ed req-cb4d005a-79b4-4be1-8414-574a6faf8460 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.280 2 DEBUG oslo_concurrency.lockutils [req-42b6eda8-5789-4a61-8c09-524f3257f5ed req-cb4d005a-79b4-4be1-8414-574a6faf8460 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.281 2 DEBUG oslo_concurrency.lockutils [req-42b6eda8-5789-4a61-8c09-524f3257f5ed req-cb4d005a-79b4-4be1-8414-574a6faf8460 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.281 2 DEBUG nova.compute.manager [req-42b6eda8-5789-4a61-8c09-524f3257f5ed req-cb4d005a-79b4-4be1-8414-574a6faf8460 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] No waiting events found dispatching network-vif-plugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.282 2 WARNING nova.compute.manager [req-42b6eda8-5789-4a61-8c09-524f3257f5ed req-cb4d005a-79b4-4be1-8414-574a6faf8460 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received unexpected event network-vif-plugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.338 2 DEBUG oslo_concurrency.lockutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.385 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.386 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.386 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.386 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 12ae9024-48e3-4894-ac32-41af4e31c223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.402 2 INFO nova.scheduler.client.report [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Deleted allocations for instance 12ae9024-48e3-4894-ac32-41af4e31c223#033[00m
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.708 2 DEBUG oslo_concurrency.lockutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:25:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:49.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:49.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:49 np0005466030 nova_compute[230518]: 2025-10-02 12:25:49.958 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:25:50 np0005466030 nova_compute[230518]: 2025-10-02 12:25:50.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:50 np0005466030 nova_compute[230518]: 2025-10-02 12:25:50.614 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:25:50 np0005466030 nova_compute[230518]: 2025-10-02 12:25:50.665 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:25:50 np0005466030 nova_compute[230518]: 2025-10-02 12:25:50.665 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:25:50 np0005466030 nova_compute[230518]: 2025-10-02 12:25:50.700 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:51 np0005466030 nova_compute[230518]: 2025-10-02 12:25:51.080 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:25:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:51.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:51.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:52 np0005466030 nova_compute[230518]: 2025-10-02 12:25:52.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:53.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:53.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:55 np0005466030 nova_compute[230518]: 2025-10-02 12:25:55.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct  2 08:25:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:55.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct  2 08:25:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:55.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:57 np0005466030 nova_compute[230518]: 2025-10-02 12:25:57.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:57 np0005466030 nova_compute[230518]: 2025-10-02 12:25:57.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:25:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:57.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:25:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:57.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:25:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:25:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:59.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:25:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:25:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:25:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:59.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:26:00 np0005466030 nova_compute[230518]: 2025-10-02 12:26:00.084 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407945.0832973, 12ae9024-48e3-4894-ac32-41af4e31c223 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:26:00 np0005466030 nova_compute[230518]: 2025-10-02 12:26:00.085 2 INFO nova.compute.manager [-] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:26:00 np0005466030 nova_compute[230518]: 2025-10-02 12:26:00.115 2 DEBUG nova.compute.manager [None req-eed75ee4-2817-4320-b171-2f7f3a72f4f9 - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:26:00 np0005466030 nova_compute[230518]: 2025-10-02 12:26:00.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:26:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:01.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:26:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:26:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:01.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:26:02 np0005466030 nova_compute[230518]: 2025-10-02 12:26:02.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:03.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:03.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:05 np0005466030 nova_compute[230518]: 2025-10-02 12:26:05.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:26:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:05.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:26:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:05.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:06 np0005466030 nova_compute[230518]: 2025-10-02 12:26:06.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:06 np0005466030 nova_compute[230518]: 2025-10-02 12:26:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:26:06 np0005466030 nova_compute[230518]: 2025-10-02 12:26:06.082 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:26:07 np0005466030 nova_compute[230518]: 2025-10-02 12:26:07.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:07.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:26:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:07.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:26:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:08 np0005466030 podman[254351]: 2025-10-02 12:26:08.843533213 +0000 UTC m=+0.081536397 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:26:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:26:09 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1364927460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:26:09 np0005466030 podman[254370]: 2025-10-02 12:26:09.821452815 +0000 UTC m=+0.074647200 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:26:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:26:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:09.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:26:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:26:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:09.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:26:10 np0005466030 nova_compute[230518]: 2025-10-02 12:26:10.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:26:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6719 writes, 34K keys, 6719 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 6719 writes, 6719 syncs, 1.00 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1644 writes, 8305 keys, 1644 commit groups, 1.0 writes per commit group, ingest: 16.83 MB, 0.03 MB/s#012Interval WAL: 1644 writes, 1644 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     90.8      0.46              0.11        18    0.025       0      0       0.0       0.0#012  L6      1/0    9.99 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.6    149.9    123.9      1.21              0.40        17    0.071     86K   9957       0.0       0.0#012 Sum      1/0    9.99 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   4.6    108.8    114.8      1.66              0.51        35    0.048     86K   9957       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9    145.4    149.4      0.34              0.14         8    0.043     24K   3114       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    149.9    123.9      1.21              0.40        17    0.071     86K   9957       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     91.2      0.45              0.11        17    0.027       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.040, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.19 GB write, 0.08 MB/s write, 0.18 GB read, 0.08 MB/s read, 1.7 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 19.64 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000166 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1141,18.96 MB,6.23563%) FilterBlock(35,248.05 KB,0.079682%) IndexBlock(35,451.27 KB,0.144964%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 08:26:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:11.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:26:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:11.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:26:12 np0005466030 nova_compute[230518]: 2025-10-02 12:26:12.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:13.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:13.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:15 np0005466030 nova_compute[230518]: 2025-10-02 12:26:15.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:15 np0005466030 nova_compute[230518]: 2025-10-02 12:26:15.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:15 np0005466030 nova_compute[230518]: 2025-10-02 12:26:15.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:15.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:15.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:17 np0005466030 nova_compute[230518]: 2025-10-02 12:26:17.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:17 np0005466030 podman[254397]: 2025-10-02 12:26:17.795304721 +0000 UTC m=+0.053726021 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001)
Oct  2 08:26:17 np0005466030 podman[254398]: 2025-10-02 12:26:17.818029206 +0000 UTC m=+0.063157219 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true)
Oct  2 08:26:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:17.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:17.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:19.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:19.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:20 np0005466030 nova_compute[230518]: 2025-10-02 12:26:20.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:21.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:21.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:22 np0005466030 nova_compute[230518]: 2025-10-02 12:26:22.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:26:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:23.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:26:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:23.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:25 np0005466030 nova_compute[230518]: 2025-10-02 12:26:25.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:25.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:26:25.923 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:26:25.924 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:26:25.924 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:25.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:27 np0005466030 nova_compute[230518]: 2025-10-02 12:26:27.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:27.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:27.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:26:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:29.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:26:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:26:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:29.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:26:30 np0005466030 nova_compute[230518]: 2025-10-02 12:26:30.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:31.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:26:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:31.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:26:32 np0005466030 nova_compute[230518]: 2025-10-02 12:26:32.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:26:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:33.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:26:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:26:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:33.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:26:35 np0005466030 nova_compute[230518]: 2025-10-02 12:26:35.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:26:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:35.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:26:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:35.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:37 np0005466030 nova_compute[230518]: 2025-10-02 12:26:37.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:37.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:37.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:39 np0005466030 podman[254460]: 2025-10-02 12:26:39.436647456 +0000 UTC m=+0.084617193 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 08:26:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:39.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:39.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:40 np0005466030 nova_compute[230518]: 2025-10-02 12:26:40.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:40 np0005466030 podman[254600]: 2025-10-02 12:26:40.405650257 +0000 UTC m=+0.082147406 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 08:26:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:26:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:26:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:26:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:26:41 np0005466030 nova_compute[230518]: 2025-10-02 12:26:41.083 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:41 np0005466030 nova_compute[230518]: 2025-10-02 12:26:41.110 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:41 np0005466030 nova_compute[230518]: 2025-10-02 12:26:41.111 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:41 np0005466030 nova_compute[230518]: 2025-10-02 12:26:41.111 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:41 np0005466030 nova_compute[230518]: 2025-10-02 12:26:41.111 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:26:41 np0005466030 nova_compute[230518]: 2025-10-02 12:26:41.112 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:26:41 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4063761561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:26:41 np0005466030 nova_compute[230518]: 2025-10-02 12:26:41.575 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:41 np0005466030 nova_compute[230518]: 2025-10-02 12:26:41.724 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:26:41 np0005466030 nova_compute[230518]: 2025-10-02 12:26:41.725 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4683MB free_disk=20.942699432373047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:26:41 np0005466030 nova_compute[230518]: 2025-10-02 12:26:41.725 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:26:41 np0005466030 nova_compute[230518]: 2025-10-02 12:26:41.725 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:26:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:41.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:41.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:26:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:26:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:26:42 np0005466030 nova_compute[230518]: 2025-10-02 12:26:42.286 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:26:42 np0005466030 nova_compute[230518]: 2025-10-02 12:26:42.287 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:26:42 np0005466030 nova_compute[230518]: 2025-10-02 12:26:42.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:42 np0005466030 nova_compute[230518]: 2025-10-02 12:26:42.605 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:26:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:26:43 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2116819383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:26:43 np0005466030 nova_compute[230518]: 2025-10-02 12:26:43.025 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:26:43 np0005466030 nova_compute[230518]: 2025-10-02 12:26:43.032 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:26:43 np0005466030 nova_compute[230518]: 2025-10-02 12:26:43.052 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:26:43 np0005466030 nova_compute[230518]: 2025-10-02 12:26:43.096 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:26:43 np0005466030 nova_compute[230518]: 2025-10-02 12:26:43.097 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:26:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:26:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:43.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:26:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:43.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:26:44.901 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:26:44 np0005466030 nova_compute[230518]: 2025-10-02 12:26:44.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:26:44.903 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:26:45 np0005466030 nova_compute[230518]: 2025-10-02 12:26:45.062 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:45 np0005466030 nova_compute[230518]: 2025-10-02 12:26:45.062 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:45 np0005466030 nova_compute[230518]: 2025-10-02 12:26:45.063 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:45 np0005466030 nova_compute[230518]: 2025-10-02 12:26:45.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:45.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:45.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:46 np0005466030 nova_compute[230518]: 2025-10-02 12:26:46.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:47 np0005466030 nova_compute[230518]: 2025-10-02 12:26:47.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:47 np0005466030 nova_compute[230518]: 2025-10-02 12:26:47.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:26:47 np0005466030 nova_compute[230518]: 2025-10-02 12:26:47.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:47.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:47.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:48 np0005466030 podman[254779]: 2025-10-02 12:26:48.803789852 +0000 UTC m=+0.048205938 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:26:48 np0005466030 podman[254778]: 2025-10-02 12:26:48.803611886 +0000 UTC m=+0.049860620 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, container_name=iscsid)
Oct  2 08:26:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:49.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:49.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:50 np0005466030 nova_compute[230518]: 2025-10-02 12:26:50.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:50 np0005466030 nova_compute[230518]: 2025-10-02 12:26:50.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:26:50.906 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:26:51 np0005466030 nova_compute[230518]: 2025-10-02 12:26:51.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:51 np0005466030 nova_compute[230518]: 2025-10-02 12:26:51.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:26:51 np0005466030 nova_compute[230518]: 2025-10-02 12:26:51.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:26:51 np0005466030 nova_compute[230518]: 2025-10-02 12:26:51.089 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:26:51 np0005466030 nova_compute[230518]: 2025-10-02 12:26:51.090 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:26:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:26:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:26:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:51.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:51.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:52 np0005466030 nova_compute[230518]: 2025-10-02 12:26:52.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:53.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:53.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:55 np0005466030 nova_compute[230518]: 2025-10-02 12:26:55.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:55.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:55.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:57 np0005466030 nova_compute[230518]: 2025-10-02 12:26:57.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:26:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:26:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:57.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:26:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:26:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:57.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.006805) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018006912, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2003, "num_deletes": 255, "total_data_size": 4758920, "memory_usage": 4822664, "flush_reason": "Manual Compaction"}
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018018051, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 1899287, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33642, "largest_seqno": 35640, "table_properties": {"data_size": 1892988, "index_size": 3245, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16669, "raw_average_key_size": 21, "raw_value_size": 1879034, "raw_average_value_size": 2402, "num_data_blocks": 145, "num_entries": 782, "num_filter_entries": 782, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407850, "oldest_key_time": 1759407850, "file_creation_time": 1759408018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 11288 microseconds, and 5843 cpu microseconds.
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.018110) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 1899287 bytes OK
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.018140) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.020771) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.020785) EVENT_LOG_v1 {"time_micros": 1759408018020780, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.020810) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 4749804, prev total WAL file size 4749804, number of live WAL files 2.
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.022544) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(1854KB)], [63(10225KB)]
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018022598, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 12370288, "oldest_snapshot_seqno": -1}
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6020 keys, 9706784 bytes, temperature: kUnknown
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018105175, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 9706784, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9666387, "index_size": 24223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 153506, "raw_average_key_size": 25, "raw_value_size": 9558199, "raw_average_value_size": 1587, "num_data_blocks": 978, "num_entries": 6020, "num_filter_entries": 6020, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759408018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.105476) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9706784 bytes
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.106928) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.6 rd, 117.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.0 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(11.6) write-amplify(5.1) OK, records in: 6469, records dropped: 449 output_compression: NoCompression
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.106958) EVENT_LOG_v1 {"time_micros": 1759408018106945, "job": 38, "event": "compaction_finished", "compaction_time_micros": 82682, "compaction_time_cpu_micros": 46039, "output_level": 6, "num_output_files": 1, "total_output_size": 9706784, "num_input_records": 6469, "num_output_records": 6020, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018107586, "job": 38, "event": "table_file_deletion", "file_number": 65}
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018109995, "job": 38, "event": "table_file_deletion", "file_number": 63}
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.022397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.110123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.110134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.110138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.110143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.110148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:26:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:26:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:59.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:26:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:26:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:26:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:59.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:00 np0005466030 nova_compute[230518]: 2025-10-02 12:27:00.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:01.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:01.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:02 np0005466030 nova_compute[230518]: 2025-10-02 12:27:02.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:03.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:04.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:27:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3685327218' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:27:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:27:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3685327218' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:27:05 np0005466030 nova_compute[230518]: 2025-10-02 12:27:05.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:05.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:06.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:07 np0005466030 nova_compute[230518]: 2025-10-02 12:27:07.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:07.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:27:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:08.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:27:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:09 np0005466030 podman[254868]: 2025-10-02 12:27:09.84147917 +0000 UTC m=+0.084912893 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 08:27:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:09.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:10.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:10 np0005466030 nova_compute[230518]: 2025-10-02 12:27:10.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:10 np0005466030 podman[254887]: 2025-10-02 12:27:10.878445529 +0000 UTC m=+0.118697975 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Oct  2 08:27:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:11.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:27:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:12.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:27:12 np0005466030 nova_compute[230518]: 2025-10-02 12:27:12.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:13.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:14.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:14 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct  2 08:27:14 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct  2 08:27:15 np0005466030 nova_compute[230518]: 2025-10-02 12:27:15.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:15 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct  2 08:27:15 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct  2 08:27:15 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct  2 08:27:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:27:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:15.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:27:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:27:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:16.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:27:17 np0005466030 nova_compute[230518]: 2025-10-02 12:27:17.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:17.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:18.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:18 np0005466030 nova_compute[230518]: 2025-10-02 12:27:18.403 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "78888fa8-1051-485d-9e51-feaec2648c8f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:18 np0005466030 nova_compute[230518]: 2025-10-02 12:27:18.404 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:18 np0005466030 nova_compute[230518]: 2025-10-02 12:27:18.421 2 DEBUG nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:27:18 np0005466030 nova_compute[230518]: 2025-10-02 12:27:18.524 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:18 np0005466030 nova_compute[230518]: 2025-10-02 12:27:18.525 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:18 np0005466030 nova_compute[230518]: 2025-10-02 12:27:18.533 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:27:18 np0005466030 nova_compute[230518]: 2025-10-02 12:27:18.534 2 INFO nova.compute.claims [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:27:18 np0005466030 nova_compute[230518]: 2025-10-02 12:27:18.630 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:27:19 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3727851687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.094 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.103 2 DEBUG nova.compute.provider_tree [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.122 2 DEBUG nova.scheduler.client.report [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.144 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.145 2 DEBUG nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.193 2 DEBUG nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.194 2 DEBUG nova.network.neutron [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.219 2 INFO nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.237 2 DEBUG nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.325 2 DEBUG nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.328 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.329 2 INFO nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Creating image(s)#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.371 2 DEBUG nova.storage.rbd_utils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image 78888fa8-1051-485d-9e51-feaec2648c8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.414 2 DEBUG nova.storage.rbd_utils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image 78888fa8-1051-485d-9e51-feaec2648c8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.461 2 DEBUG nova.storage.rbd_utils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image 78888fa8-1051-485d-9e51-feaec2648c8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.466 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.507 2 DEBUG nova.policy [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2b6687fbfb1f484ba99ef93bbb4ffa7e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c20ce9073494490588607984318667f5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.547 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.549 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.551 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.552 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.607 2 DEBUG nova.storage.rbd_utils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image 78888fa8-1051-485d-9e51-feaec2648c8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.611 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 78888fa8-1051-485d-9e51-feaec2648c8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:19 np0005466030 podman[255030]: 2025-10-02 12:27:19.818882118 +0000 UTC m=+0.067650290 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 08:27:19 np0005466030 podman[255029]: 2025-10-02 12:27:19.823151653 +0000 UTC m=+0.065481092 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.895 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 78888fa8-1051-485d-9e51-feaec2648c8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:19 np0005466030 nova_compute[230518]: 2025-10-02 12:27:19.966 2 DEBUG nova.storage.rbd_utils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] resizing rbd image 78888fa8-1051-485d-9e51-feaec2648c8f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:27:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:19.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:20.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:20 np0005466030 nova_compute[230518]: 2025-10-02 12:27:20.069 2 DEBUG nova.objects.instance [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lazy-loading 'migration_context' on Instance uuid 78888fa8-1051-485d-9e51-feaec2648c8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:27:20 np0005466030 nova_compute[230518]: 2025-10-02 12:27:20.082 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:27:20 np0005466030 nova_compute[230518]: 2025-10-02 12:27:20.083 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Ensure instance console log exists: /var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:27:20 np0005466030 nova_compute[230518]: 2025-10-02 12:27:20.083 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:20 np0005466030 nova_compute[230518]: 2025-10-02 12:27:20.084 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:20 np0005466030 nova_compute[230518]: 2025-10-02 12:27:20.084 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:20 np0005466030 nova_compute[230518]: 2025-10-02 12:27:20.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:21 np0005466030 nova_compute[230518]: 2025-10-02 12:27:21.500 2 DEBUG nova.network.neutron [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Successfully created port: 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:27:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:21.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:27:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:22.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:27:22 np0005466030 nova_compute[230518]: 2025-10-02 12:27:22.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:27:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:23.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:27:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:24.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:24 np0005466030 nova_compute[230518]: 2025-10-02 12:27:24.323 2 DEBUG nova.network.neutron [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Successfully updated port: 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:27:24 np0005466030 nova_compute[230518]: 2025-10-02 12:27:24.341 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:27:24 np0005466030 nova_compute[230518]: 2025-10-02 12:27:24.341 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquired lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:27:24 np0005466030 nova_compute[230518]: 2025-10-02 12:27:24.341 2 DEBUG nova.network.neutron [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:27:24 np0005466030 nova_compute[230518]: 2025-10-02 12:27:24.465 2 DEBUG nova.compute.manager [req-3ded94a7-72f1-442c-a17f-2776aab1fc24 req-c184aaab-6a82-4874-931b-c5f495569673 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received event network-changed-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:24 np0005466030 nova_compute[230518]: 2025-10-02 12:27:24.465 2 DEBUG nova.compute.manager [req-3ded94a7-72f1-442c-a17f-2776aab1fc24 req-c184aaab-6a82-4874-931b-c5f495569673 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Refreshing instance network info cache due to event network-changed-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:27:24 np0005466030 nova_compute[230518]: 2025-10-02 12:27:24.465 2 DEBUG oslo_concurrency.lockutils [req-3ded94a7-72f1-442c-a17f-2776aab1fc24 req-c184aaab-6a82-4874-931b-c5f495569673 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:27:24 np0005466030 nova_compute[230518]: 2025-10-02 12:27:24.510 2 DEBUG nova.network.neutron [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:27:24 np0005466030 nova_compute[230518]: 2025-10-02 12:27:24.677 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "be855518-90af-4fa9-b969-4a1579934010" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:24 np0005466030 nova_compute[230518]: 2025-10-02 12:27:24.677 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:24 np0005466030 nova_compute[230518]: 2025-10-02 12:27:24.697 2 DEBUG nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:27:24 np0005466030 nova_compute[230518]: 2025-10-02 12:27:24.777 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:24 np0005466030 nova_compute[230518]: 2025-10-02 12:27:24.777 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:24 np0005466030 nova_compute[230518]: 2025-10-02 12:27:24.784 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:27:24 np0005466030 nova_compute[230518]: 2025-10-02 12:27:24.784 2 INFO nova.compute.claims [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:27:24 np0005466030 nova_compute[230518]: 2025-10-02 12:27:24.920 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:27:25 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1124206307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.344 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.349 2 DEBUG nova.compute.provider_tree [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.369 2 DEBUG nova.scheduler.client.report [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.394 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.394 2 DEBUG nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.434 2 DEBUG nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.434 2 DEBUG nova.network.neutron [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.457 2 INFO nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:27:25 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.473 2 DEBUG nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.669 2 DEBUG nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.670 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.670 2 INFO nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Creating image(s)#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.694 2 DEBUG nova.storage.rbd_utils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image be855518-90af-4fa9-b969-4a1579934010_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.727 2 DEBUG nova.storage.rbd_utils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image be855518-90af-4fa9-b969-4a1579934010_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.755 2 DEBUG nova.storage.rbd_utils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image be855518-90af-4fa9-b969-4a1579934010_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.759 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.785 2 DEBUG nova.policy [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eff0431e92464c78b780c8365e6e920c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bfd7bec5bd4b4366a96cc55cfe95fcc9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.821 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.822 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.823 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.823 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.851 2 DEBUG nova.storage.rbd_utils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image be855518-90af-4fa9-b969-4a1579934010_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:25 np0005466030 nova_compute[230518]: 2025-10-02 12:27:25.854 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 be855518-90af-4fa9-b969-4a1579934010_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:25.924 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:25.924 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:25.924 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:27:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:25.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:27:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:27:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:26.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.410 2 DEBUG nova.network.neutron [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updating instance_info_cache with network_info: [{"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.431 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Releasing lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.431 2 DEBUG nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Instance network_info: |[{"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.432 2 DEBUG oslo_concurrency.lockutils [req-3ded94a7-72f1-442c-a17f-2776aab1fc24 req-c184aaab-6a82-4874-931b-c5f495569673 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.432 2 DEBUG nova.network.neutron [req-3ded94a7-72f1-442c-a17f-2776aab1fc24 req-c184aaab-6a82-4874-931b-c5f495569673 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Refreshing network info cache for port 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.435 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Start _get_guest_xml network_info=[{"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.439 2 WARNING nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.444 2 DEBUG nova.virt.libvirt.host [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.444 2 DEBUG nova.virt.libvirt.host [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.452 2 DEBUG nova.virt.libvirt.host [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.452 2 DEBUG nova.virt.libvirt.host [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.454 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.454 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.455 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.455 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.455 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.455 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.456 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.456 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.456 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.456 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.457 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.457 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.460 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.854 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 be855518-90af-4fa9-b969-4a1579934010_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.000s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:27:26 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2610359259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:27:26 np0005466030 nova_compute[230518]: 2025-10-02 12:27:26.929 2 DEBUG nova.storage.rbd_utils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] resizing rbd image be855518-90af-4fa9-b969-4a1579934010_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.158 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.698s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.182 2 DEBUG nova.storage.rbd_utils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image 78888fa8-1051-485d-9e51-feaec2648c8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.185 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.310 2 DEBUG nova.network.neutron [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Successfully created port: 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:27:27 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3572455864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.603 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.605 2 DEBUG nova.virt.libvirt.vif [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:27:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-232208417',display_name='tempest-FloatingIPsAssociationTestJSON-server-232208417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-232208417',id=59,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c20ce9073494490588607984318667f5',ramdisk_id='',reservation_id='r-xvp3u8r5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-875651151',owner_user_name='
tempest-FloatingIPsAssociationTestJSON-875651151-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:27:19Z,user_data=None,user_id='2b6687fbfb1f484ba99ef93bbb4ffa7e',uuid=78888fa8-1051-485d-9e51-feaec2648c8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.606 2 DEBUG nova.network.os_vif_util [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Converting VIF {"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.607 2 DEBUG nova.network.os_vif_util [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:de:41,bridge_name='br-int',has_traffic_filtering=True,id=4b915a8b-b3f4-47fe-bb5a-1e712c3d182e,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b915a8b-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.609 2 DEBUG nova.objects.instance [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 78888fa8-1051-485d-9e51-feaec2648c8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.642 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:27:27 np0005466030 nova_compute[230518]:  <uuid>78888fa8-1051-485d-9e51-feaec2648c8f</uuid>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:  <name>instance-0000003b</name>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-232208417</nova:name>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:27:26</nova:creationTime>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:27:27 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:        <nova:user uuid="2b6687fbfb1f484ba99ef93bbb4ffa7e">tempest-FloatingIPsAssociationTestJSON-875651151-project-member</nova:user>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:        <nova:project uuid="c20ce9073494490588607984318667f5">tempest-FloatingIPsAssociationTestJSON-875651151</nova:project>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:        <nova:port uuid="4b915a8b-b3f4-47fe-bb5a-1e712c3d182e">
Oct  2 08:27:27 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <entry name="serial">78888fa8-1051-485d-9e51-feaec2648c8f</entry>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <entry name="uuid">78888fa8-1051-485d-9e51-feaec2648c8f</entry>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/78888fa8-1051-485d-9e51-feaec2648c8f_disk">
Oct  2 08:27:27 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:27:27 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/78888fa8-1051-485d-9e51-feaec2648c8f_disk.config">
Oct  2 08:27:27 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:27:27 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:db:de:41"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <target dev="tap4b915a8b-b3"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f/console.log" append="off"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:27:27 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:27:27 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:27:27 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:27:27 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.644 2 DEBUG nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Preparing to wait for external event network-vif-plugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.644 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.644 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.645 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.645 2 DEBUG nova.virt.libvirt.vif [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:27:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-232208417',display_name='tempest-FloatingIPsAssociationTestJSON-server-232208417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-232208417',id=59,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c20ce9073494490588607984318667f5',ramdisk_id='',reservation_id='r-xvp3u8r5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-875651151',owner_u
ser_name='tempest-FloatingIPsAssociationTestJSON-875651151-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:27:19Z,user_data=None,user_id='2b6687fbfb1f484ba99ef93bbb4ffa7e',uuid=78888fa8-1051-485d-9e51-feaec2648c8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.646 2 DEBUG nova.network.os_vif_util [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Converting VIF {"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.646 2 DEBUG nova.network.os_vif_util [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:de:41,bridge_name='br-int',has_traffic_filtering=True,id=4b915a8b-b3f4-47fe-bb5a-1e712c3d182e,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b915a8b-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.646 2 DEBUG os_vif [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:de:41,bridge_name='br-int',has_traffic_filtering=True,id=4b915a8b-b3f4-47fe-bb5a-1e712c3d182e,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b915a8b-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.648 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.650 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b915a8b-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.651 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4b915a8b-b3, col_values=(('external_ids', {'iface-id': '4b915a8b-b3f4-47fe-bb5a-1e712c3d182e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:de:41', 'vm-uuid': '78888fa8-1051-485d-9e51-feaec2648c8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:27 np0005466030 NetworkManager[44960]: <info>  [1759408047.6530] manager: (tap4b915a8b-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.658 2 INFO os_vif [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:de:41,bridge_name='br-int',has_traffic_filtering=True,id=4b915a8b-b3f4-47fe-bb5a-1e712c3d182e,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b915a8b-b3')#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.704 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.705 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.705 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] No VIF found with MAC fa:16:3e:db:de:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.705 2 INFO nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Using config drive#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.730 2 DEBUG nova.storage.rbd_utils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image 78888fa8-1051-485d-9e51-feaec2648c8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.865 2 DEBUG nova.objects.instance [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lazy-loading 'migration_context' on Instance uuid be855518-90af-4fa9-b969-4a1579934010 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.878 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.878 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Ensure instance console log exists: /var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.878 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.878 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:27 np0005466030 nova_compute[230518]: 2025-10-02 12:27:27.879 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:27:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:28.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:27:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:27:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:28.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:27:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:28 np0005466030 nova_compute[230518]: 2025-10-02 12:27:28.629 2 INFO nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Creating config drive at /var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f/disk.config#033[00m
Oct  2 08:27:28 np0005466030 nova_compute[230518]: 2025-10-02 12:27:28.633 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_9z6yd2h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:28 np0005466030 nova_compute[230518]: 2025-10-02 12:27:28.668 2 DEBUG nova.network.neutron [req-3ded94a7-72f1-442c-a17f-2776aab1fc24 req-c184aaab-6a82-4874-931b-c5f495569673 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updated VIF entry in instance network info cache for port 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:27:28 np0005466030 nova_compute[230518]: 2025-10-02 12:27:28.669 2 DEBUG nova.network.neutron [req-3ded94a7-72f1-442c-a17f-2776aab1fc24 req-c184aaab-6a82-4874-931b-c5f495569673 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updating instance_info_cache with network_info: [{"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:27:28 np0005466030 nova_compute[230518]: 2025-10-02 12:27:28.694 2 DEBUG oslo_concurrency.lockutils [req-3ded94a7-72f1-442c-a17f-2776aab1fc24 req-c184aaab-6a82-4874-931b-c5f495569673 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:27:28 np0005466030 nova_compute[230518]: 2025-10-02 12:27:28.697 2 DEBUG nova.network.neutron [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Successfully updated port: 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:27:28 np0005466030 nova_compute[230518]: 2025-10-02 12:27:28.737 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "refresh_cache-be855518-90af-4fa9-b969-4a1579934010" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:27:28 np0005466030 nova_compute[230518]: 2025-10-02 12:27:28.737 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquired lock "refresh_cache-be855518-90af-4fa9-b969-4a1579934010" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:27:28 np0005466030 nova_compute[230518]: 2025-10-02 12:27:28.737 2 DEBUG nova.network.neutron [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:27:28 np0005466030 nova_compute[230518]: 2025-10-02 12:27:28.764 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_9z6yd2h" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:28 np0005466030 nova_compute[230518]: 2025-10-02 12:27:28.793 2 DEBUG nova.storage.rbd_utils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image 78888fa8-1051-485d-9e51-feaec2648c8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:28 np0005466030 nova_compute[230518]: 2025-10-02 12:27:28.797 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f/disk.config 78888fa8-1051-485d-9e51-feaec2648c8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:28 np0005466030 nova_compute[230518]: 2025-10-02 12:27:28.978 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f/disk.config 78888fa8-1051-485d-9e51-feaec2648c8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:28 np0005466030 nova_compute[230518]: 2025-10-02 12:27:28.979 2 INFO nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Deleting local config drive /var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f/disk.config because it was imported into RBD.#033[00m
Oct  2 08:27:29 np0005466030 kernel: tap4b915a8b-b3: entered promiscuous mode
Oct  2 08:27:29 np0005466030 NetworkManager[44960]: <info>  [1759408049.0242] manager: (tap4b915a8b-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/120)
Oct  2 08:27:29 np0005466030 nova_compute[230518]: 2025-10-02 12:27:29.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:29 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:29Z|00264|binding|INFO|Claiming lport 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e for this chassis.
Oct  2 08:27:29 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:29Z|00265|binding|INFO|4b915a8b-b3f4-47fe-bb5a-1e712c3d182e: Claiming fa:16:3e:db:de:41 10.100.0.8
Oct  2 08:27:29 np0005466030 nova_compute[230518]: 2025-10-02 12:27:29.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:29 np0005466030 systemd-udevd[255462]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:27:29 np0005466030 systemd-machined[188247]: New machine qemu-30-instance-0000003b.
Oct  2 08:27:29 np0005466030 NetworkManager[44960]: <info>  [1759408049.0620] device (tap4b915a8b-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:27:29 np0005466030 NetworkManager[44960]: <info>  [1759408049.0635] device (tap4b915a8b-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:27:29 np0005466030 systemd[1]: Started Virtual Machine qemu-30-instance-0000003b.
Oct  2 08:27:29 np0005466030 nova_compute[230518]: 2025-10-02 12:27:29.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:29 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:29Z|00266|binding|INFO|Setting lport 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e ovn-installed in OVS
Oct  2 08:27:29 np0005466030 nova_compute[230518]: 2025-10-02 12:27:29.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:29 np0005466030 nova_compute[230518]: 2025-10-02 12:27:29.308 2 DEBUG nova.network.neutron [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:27:29 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:29Z|00267|binding|INFO|Setting lport 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e up in Southbound
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.571 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:de:41 10.100.0.8'], port_security=['fa:16:3e:db:de:41 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '78888fa8-1051-485d-9e51-feaec2648c8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c20ce9073494490588607984318667f5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '389e89f9-e63f-4b21-acd4-c43535d1012a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4006d2de-ae16-44cd-90c1-7beb9f87e38f, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=4b915a8b-b3f4-47fe-bb5a-1e712c3d182e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.572 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e in datapath a920a635-8cc8-4478-b2dc-4d6329a5abef bound to our chassis#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.573 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a920a635-8cc8-4478-b2dc-4d6329a5abef#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.586 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ef9d29-71eb-4fd4-b4c4-223a02e418af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.587 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa920a635-81 in ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.588 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa920a635-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.589 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[48beee8f-0d9a-4693-b3fa-795d492f1807]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.589 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3e26e011-2597-4fd0-8c0f-07db050d0f65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.603 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd0ab09-8911-4f40-8ddc-e81c5b78f550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.627 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[296e1898-7091-4cc8-991d-c06a7e1a95b0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.658 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d5fd8374-aec9-4bfb-b74f-abd0f98cbfb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:29 np0005466030 NetworkManager[44960]: <info>  [1759408049.6652] manager: (tapa920a635-80): new Veth device (/org/freedesktop/NetworkManager/Devices/121)
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.665 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[401acaf9-15f2-4bce-8af2-0eb5788fafea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.695 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f133e3-eeea-47ef-8c6d-67a22ca74138]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.699 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7adf9756-62e5-4033-9934-a95819dc73fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:29 np0005466030 NetworkManager[44960]: <info>  [1759408049.7225] device (tapa920a635-80): carrier: link connected
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.730 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[49305452-bdbd-4d83-bf84-3ac00ad08c61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.745 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[000ec547-9859-468a-8608-724c1d879a0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa920a635-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:4d:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591328, 'reachable_time': 35958, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255540, 'error': None, 'target': 'ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:29 np0005466030 nova_compute[230518]: 2025-10-02 12:27:29.757 2 DEBUG nova.compute.manager [req-00f61e75-3b93-4e63-8c3b-088d8a71f05d req-0bcf5065-cf39-4bfb-8224-63a12684175b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Received event network-changed-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:29 np0005466030 nova_compute[230518]: 2025-10-02 12:27:29.758 2 DEBUG nova.compute.manager [req-00f61e75-3b93-4e63-8c3b-088d8a71f05d req-0bcf5065-cf39-4bfb-8224-63a12684175b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Refreshing instance network info cache due to event network-changed-5404e3f9-be33-4f3a-b227-2fa3944c6bb7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:27:29 np0005466030 nova_compute[230518]: 2025-10-02 12:27:29.758 2 DEBUG oslo_concurrency.lockutils [req-00f61e75-3b93-4e63-8c3b-088d8a71f05d req-0bcf5065-cf39-4bfb-8224-63a12684175b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-be855518-90af-4fa9-b969-4a1579934010" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.763 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d1ae8f-f033-4cc9-8095-a8dbe641aab1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5e:4d8c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591328, 'tstamp': 591328}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255541, 'error': None, 'target': 'ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.780 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c604429b-1eb2-4af6-b42f-de7996af5a4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa920a635-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:4d:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591328, 'reachable_time': 35958, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255542, 'error': None, 'target': 'ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.820 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2bae1b18-bba2-4dc1-8720-7f898dbf6739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:29 np0005466030 nova_compute[230518]: 2025-10-02 12:27:29.866 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408049.8659763, 78888fa8-1051-485d-9e51-feaec2648c8f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:27:29 np0005466030 nova_compute[230518]: 2025-10-02 12:27:29.867 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] VM Started (Lifecycle Event)#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.892 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fe59cd6f-c522-43e3-b708-c441874cbcd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.893 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa920a635-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.893 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.894 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa920a635-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:29 np0005466030 nova_compute[230518]: 2025-10-02 12:27:29.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:29 np0005466030 kernel: tapa920a635-80: entered promiscuous mode
Oct  2 08:27:29 np0005466030 NetworkManager[44960]: <info>  [1759408049.8975] manager: (tapa920a635-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.901 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa920a635-80, col_values=(('external_ids', {'iface-id': '7f4ed3f3-7aae-4781-9951-b5f99eb58474'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:29 np0005466030 nova_compute[230518]: 2025-10-02 12:27:29.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:29 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:29Z|00268|binding|INFO|Releasing lport 7f4ed3f3-7aae-4781-9951-b5f99eb58474 from this chassis (sb_readonly=0)
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.906 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a920a635-8cc8-4478-b2dc-4d6329a5abef.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a920a635-8cc8-4478-b2dc-4d6329a5abef.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.907 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a65f5f93-6845-480a-a6bb-807aafb57e4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.908 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-a920a635-8cc8-4478-b2dc-4d6329a5abef
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/a920a635-8cc8-4478-b2dc-4d6329a5abef.pid.haproxy
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID a920a635-8cc8-4478-b2dc-4d6329a5abef
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:27:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.908 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'env', 'PROCESS_TAG=haproxy-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a920a635-8cc8-4478-b2dc-4d6329a5abef.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:27:29 np0005466030 nova_compute[230518]: 2025-10-02 12:27:29.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:29 np0005466030 nova_compute[230518]: 2025-10-02 12:27:29.945 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:27:29 np0005466030 nova_compute[230518]: 2025-10-02 12:27:29.949 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408049.8662252, 78888fa8-1051-485d-9e51-feaec2648c8f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:27:29 np0005466030 nova_compute[230518]: 2025-10-02 12:27:29.949 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:27:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:27:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:30.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.039 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:27:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:30.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.043 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.182 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:27:30 np0005466030 podman[255574]: 2025-10-02 12:27:30.298698377 +0000 UTC m=+0.062595941 container create 15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct  2 08:27:30 np0005466030 systemd[1]: Started libpod-conmon-15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659.scope.
Oct  2 08:27:30 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:27:30 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8348c266641d7f0169a9362cbaa2a00eb443af6e6b20ca0bc7f72d3ff7a284ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:27:30 np0005466030 podman[255574]: 2025-10-02 12:27:30.27305544 +0000 UTC m=+0.036953024 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:27:30 np0005466030 podman[255574]: 2025-10-02 12:27:30.366191591 +0000 UTC m=+0.130089155 container init 15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:27:30 np0005466030 podman[255574]: 2025-10-02 12:27:30.372162849 +0000 UTC m=+0.136060413 container start 15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:27:30 np0005466030 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[255589]: [NOTICE]   (255593) : New worker (255595) forked
Oct  2 08:27:30 np0005466030 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[255589]: [NOTICE]   (255593) : Loading success.
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.608 2 DEBUG nova.compute.manager [req-534bc5e5-a7d8-48b8-ad9e-29e34175f62a req-dff0f943-e537-4239-a1e7-af3f998cd3a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received event network-vif-plugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.608 2 DEBUG oslo_concurrency.lockutils [req-534bc5e5-a7d8-48b8-ad9e-29e34175f62a req-dff0f943-e537-4239-a1e7-af3f998cd3a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.608 2 DEBUG oslo_concurrency.lockutils [req-534bc5e5-a7d8-48b8-ad9e-29e34175f62a req-dff0f943-e537-4239-a1e7-af3f998cd3a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.609 2 DEBUG oslo_concurrency.lockutils [req-534bc5e5-a7d8-48b8-ad9e-29e34175f62a req-dff0f943-e537-4239-a1e7-af3f998cd3a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.609 2 DEBUG nova.compute.manager [req-534bc5e5-a7d8-48b8-ad9e-29e34175f62a req-dff0f943-e537-4239-a1e7-af3f998cd3a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Processing event network-vif-plugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.609 2 DEBUG nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.615 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408050.6156862, 78888fa8-1051-485d-9e51-feaec2648c8f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.617 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.619 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.623 2 INFO nova.virt.libvirt.driver [-] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Instance spawned successfully.#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.624 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.876 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.882 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.886 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.887 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.887 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.888 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.888 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.889 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.908 2 DEBUG nova.network.neutron [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Updating instance_info_cache with network_info: [{"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:27:30 np0005466030 nova_compute[230518]: 2025-10-02 12:27:30.939 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.061 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Releasing lock "refresh_cache-be855518-90af-4fa9-b969-4a1579934010" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.061 2 DEBUG nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Instance network_info: |[{"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.062 2 DEBUG oslo_concurrency.lockutils [req-00f61e75-3b93-4e63-8c3b-088d8a71f05d req-0bcf5065-cf39-4bfb-8224-63a12684175b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-be855518-90af-4fa9-b969-4a1579934010" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.062 2 DEBUG nova.network.neutron [req-00f61e75-3b93-4e63-8c3b-088d8a71f05d req-0bcf5065-cf39-4bfb-8224-63a12684175b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Refreshing network info cache for port 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.064 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Start _get_guest_xml network_info=[{"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.068 2 WARNING nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.075 2 DEBUG nova.virt.libvirt.host [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.075 2 DEBUG nova.virt.libvirt.host [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.078 2 DEBUG nova.virt.libvirt.host [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.079 2 DEBUG nova.virt.libvirt.host [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.079 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.080 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.080 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.080 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.080 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.081 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.081 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.081 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.081 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.082 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.082 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.082 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.084 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.114 2 INFO nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Took 11.79 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.115 2 DEBUG nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.176 2 INFO nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Took 12.70 seconds to build instance.#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.201 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:27:31 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1910363574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.540 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.567 2 DEBUG nova.storage.rbd_utils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image be855518-90af-4fa9-b969-4a1579934010_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:31 np0005466030 nova_compute[230518]: 2025-10-02 12:27:31.571 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:27:31 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3686929686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.003 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:27:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:32.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.007 2 DEBUG nova.virt.libvirt.vif [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:27:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1210201217',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1210201217',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1210201217',id=60,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfd7bec5bd4b4366a96cc55cfe95fcc9',ramdisk_id='',reservation_id='r-yyo20tgh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-883313902',owner_user
_name='tempest-ImagesOneServerNegativeTestJSON-883313902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:27:25Z,user_data=None,user_id='eff0431e92464c78b780c8365e6e920c',uuid=be855518-90af-4fa9-b969-4a1579934010,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.009 2 DEBUG nova.network.os_vif_util [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converting VIF {"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.010 2 DEBUG nova.network.os_vif_util [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:5c:32,bridge_name='br-int',has_traffic_filtering=True,id=5404e3f9-be33-4f3a-b227-2fa3944c6bb7,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5404e3f9-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.013 2 DEBUG nova.objects.instance [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid be855518-90af-4fa9-b969-4a1579934010 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.042 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:27:32 np0005466030 nova_compute[230518]:  <uuid>be855518-90af-4fa9-b969-4a1579934010</uuid>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:  <name>instance-0000003c</name>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1210201217</nova:name>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:27:31</nova:creationTime>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:27:32 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:        <nova:user uuid="eff0431e92464c78b780c8365e6e920c">tempest-ImagesOneServerNegativeTestJSON-883313902-project-member</nova:user>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:        <nova:project uuid="bfd7bec5bd4b4366a96cc55cfe95fcc9">tempest-ImagesOneServerNegativeTestJSON-883313902</nova:project>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:        <nova:port uuid="5404e3f9-be33-4f3a-b227-2fa3944c6bb7">
Oct  2 08:27:32 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <entry name="serial">be855518-90af-4fa9-b969-4a1579934010</entry>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <entry name="uuid">be855518-90af-4fa9-b969-4a1579934010</entry>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/be855518-90af-4fa9-b969-4a1579934010_disk">
Oct  2 08:27:32 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:27:32 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/be855518-90af-4fa9-b969-4a1579934010_disk.config">
Oct  2 08:27:32 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:27:32 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:d4:5c:32"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <target dev="tap5404e3f9-be"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010/console.log" append="off"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:32.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:27:32 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:27:32 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:27:32 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:27:32 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.054 2 DEBUG nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Preparing to wait for external event network-vif-plugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.054 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "be855518-90af-4fa9-b969-4a1579934010-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.054 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.055 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.056 2 DEBUG nova.virt.libvirt.vif [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:27:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1210201217',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1210201217',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1210201217',id=60,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfd7bec5bd4b4366a96cc55cfe95fcc9',ramdisk_id='',reservation_id='r-yyo20tgh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-883313902',
owner_user_name='tempest-ImagesOneServerNegativeTestJSON-883313902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:27:25Z,user_data=None,user_id='eff0431e92464c78b780c8365e6e920c',uuid=be855518-90af-4fa9-b969-4a1579934010,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.056 2 DEBUG nova.network.os_vif_util [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converting VIF {"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.057 2 DEBUG nova.network.os_vif_util [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:5c:32,bridge_name='br-int',has_traffic_filtering=True,id=5404e3f9-be33-4f3a-b227-2fa3944c6bb7,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5404e3f9-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.058 2 DEBUG os_vif [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:5c:32,bridge_name='br-int',has_traffic_filtering=True,id=5404e3f9-be33-4f3a-b227-2fa3944c6bb7,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5404e3f9-be') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.061 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.062 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.066 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5404e3f9-be, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.068 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5404e3f9-be, col_values=(('external_ids', {'iface-id': '5404e3f9-be33-4f3a-b227-2fa3944c6bb7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:5c:32', 'vm-uuid': 'be855518-90af-4fa9-b969-4a1579934010'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:32 np0005466030 NetworkManager[44960]: <info>  [1759408052.0718] manager: (tap5404e3f9-be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.079 2 INFO os_vif [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:5c:32,bridge_name='br-int',has_traffic_filtering=True,id=5404e3f9-be33-4f3a-b227-2fa3944c6bb7,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5404e3f9-be')#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.126 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.127 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.127 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] No VIF found with MAC fa:16:3e:d4:5c:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.127 2 INFO nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Using config drive#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.148 2 DEBUG nova.storage.rbd_utils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image be855518-90af-4fa9-b969-4a1579934010_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.460 2 INFO nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Creating config drive at /var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010/disk.config#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.467 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphkcla7hf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.602 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphkcla7hf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.640 2 DEBUG nova.storage.rbd_utils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image be855518-90af-4fa9-b969-4a1579934010_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.644 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010/disk.config be855518-90af-4fa9-b969-4a1579934010_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.741 2 DEBUG nova.network.neutron [req-00f61e75-3b93-4e63-8c3b-088d8a71f05d req-0bcf5065-cf39-4bfb-8224-63a12684175b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Updated VIF entry in instance network info cache for port 5404e3f9-be33-4f3a-b227-2fa3944c6bb7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.742 2 DEBUG nova.network.neutron [req-00f61e75-3b93-4e63-8c3b-088d8a71f05d req-0bcf5065-cf39-4bfb-8224-63a12684175b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Updating instance_info_cache with network_info: [{"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.761 2 DEBUG nova.compute.manager [req-c08600d2-db33-413a-91c1-9440f099bf8f req-10001f27-437a-4478-911d-31c362ebc43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received event network-vif-plugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.761 2 DEBUG oslo_concurrency.lockutils [req-c08600d2-db33-413a-91c1-9440f099bf8f req-10001f27-437a-4478-911d-31c362ebc43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.762 2 DEBUG oslo_concurrency.lockutils [req-c08600d2-db33-413a-91c1-9440f099bf8f req-10001f27-437a-4478-911d-31c362ebc43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.763 2 DEBUG oslo_concurrency.lockutils [req-c08600d2-db33-413a-91c1-9440f099bf8f req-10001f27-437a-4478-911d-31c362ebc43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.763 2 DEBUG nova.compute.manager [req-c08600d2-db33-413a-91c1-9440f099bf8f req-10001f27-437a-4478-911d-31c362ebc43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] No waiting events found dispatching network-vif-plugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.763 2 WARNING nova.compute.manager [req-c08600d2-db33-413a-91c1-9440f099bf8f req-10001f27-437a-4478-911d-31c362ebc43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received unexpected event network-vif-plugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e for instance with vm_state active and task_state None.#033[00m
Oct  2 08:27:32 np0005466030 nova_compute[230518]: 2025-10-02 12:27:32.779 2 DEBUG oslo_concurrency.lockutils [req-00f61e75-3b93-4e63-8c3b-088d8a71f05d req-0bcf5065-cf39-4bfb-8224-63a12684175b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-be855518-90af-4fa9-b969-4a1579934010" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:27:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:27:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:34.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:27:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:34.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:35 np0005466030 nova_compute[230518]: 2025-10-02 12:27:35.008 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010/disk.config be855518-90af-4fa9-b969-4a1579934010_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:35 np0005466030 nova_compute[230518]: 2025-10-02 12:27:35.009 2 INFO nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Deleting local config drive /var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010/disk.config because it was imported into RBD.#033[00m
Oct  2 08:27:35 np0005466030 kernel: tap5404e3f9-be: entered promiscuous mode
Oct  2 08:27:35 np0005466030 NetworkManager[44960]: <info>  [1759408055.0604] manager: (tap5404e3f9-be): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Oct  2 08:27:35 np0005466030 nova_compute[230518]: 2025-10-02 12:27:35.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:35 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:35Z|00269|binding|INFO|Claiming lport 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 for this chassis.
Oct  2 08:27:35 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:35Z|00270|binding|INFO|5404e3f9-be33-4f3a-b227-2fa3944c6bb7: Claiming fa:16:3e:d4:5c:32 10.100.0.14
Oct  2 08:27:35 np0005466030 systemd-udevd[255739]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:27:35 np0005466030 systemd-machined[188247]: New machine qemu-31-instance-0000003c.
Oct  2 08:27:35 np0005466030 NetworkManager[44960]: <info>  [1759408055.1094] device (tap5404e3f9-be): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:27:35 np0005466030 NetworkManager[44960]: <info>  [1759408055.1114] device (tap5404e3f9-be): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:27:35 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:35Z|00271|binding|INFO|Setting lport 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 ovn-installed in OVS
Oct  2 08:27:35 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:35Z|00272|binding|INFO|Setting lport 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 up in Southbound
Oct  2 08:27:35 np0005466030 systemd[1]: Started Virtual Machine qemu-31-instance-0000003c.
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.169 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:5c:32 10.100.0.14'], port_security=['fa:16:3e:d4:5c:32 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'be855518-90af-4fa9-b969-4a1579934010', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfd7bec5bd4b4366a96cc55cfe95fcc9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1274568d-0664-4745-a1c4-36fd447c1a9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f6f5554-f9ad-4301-a295-af1afef2d045, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=5404e3f9-be33-4f3a-b227-2fa3944c6bb7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.170 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 in datapath eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 bound to our chassis#033[00m
Oct  2 08:27:35 np0005466030 nova_compute[230518]: 2025-10-02 12:27:35.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.177 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eefd67eb-b4b6-4162-bbdd-0cce7dbdb491#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.187 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1a1e6116-c36d-4527-97ee-49ffefa60704]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.189 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeefd67eb-b1 in ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.192 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeefd67eb-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.192 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb5126a-502f-4827-b064-ea658a35561a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.193 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b5831871-d311-4128-864b-8508485975a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.206 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[3116ac65-7499-4296-961e-b0a4608663dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.227 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8c74516a-078e-4d62-a2d9-144ce28c468a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.258 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8e254b00-e91c-45b9-b4bd-01ffecf49a43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.266 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[82e0bc9c-f298-4b9d-8153-68919186247c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:35 np0005466030 NetworkManager[44960]: <info>  [1759408055.2672] manager: (tapeefd67eb-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/125)
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.301 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2c76b17f-184e-4bc5-954a-4f9682a3c2a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.304 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[3831ecc8-c925-48db-aa97-fc5bdbbbe348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:35 np0005466030 NetworkManager[44960]: <info>  [1759408055.3303] device (tapeefd67eb-b0): carrier: link connected
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.340 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c0672a7b-2e2e-4888-b1ef-83d9f4de3f07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.358 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[26e6ebb7-6197-4262-a01a-0057e67faf36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeefd67eb-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:db:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591888, 'reachable_time': 28229, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255775, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.376 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba922f8-465f-4729-b306-d8a77a9a0aed]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:db93'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591888, 'tstamp': 591888}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255776, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.393 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1df4dc27-a18a-447d-8f20-a1b8fb131a32]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeefd67eb-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:db:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591888, 'reachable_time': 28229, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255777, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.428 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6e257a9c-d388-41c8-9692-d570ac315d4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.490 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d6bd6c7d-599d-4267-a890-fa144cd82573]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.491 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeefd67eb-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.492 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.492 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeefd67eb-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:35 np0005466030 nova_compute[230518]: 2025-10-02 12:27:35.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:35 np0005466030 NetworkManager[44960]: <info>  [1759408055.4953] manager: (tapeefd67eb-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Oct  2 08:27:35 np0005466030 kernel: tapeefd67eb-b0: entered promiscuous mode
Oct  2 08:27:35 np0005466030 nova_compute[230518]: 2025-10-02 12:27:35.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.498 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeefd67eb-b0, col_values=(('external_ids', {'iface-id': '4a1c64ee-2e43-4924-ad64-0ba8b656d152'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:35 np0005466030 nova_compute[230518]: 2025-10-02 12:27:35.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:35 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:35Z|00273|binding|INFO|Releasing lport 4a1c64ee-2e43-4924-ad64-0ba8b656d152 from this chassis (sb_readonly=0)
Oct  2 08:27:35 np0005466030 nova_compute[230518]: 2025-10-02 12:27:35.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.501 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:27:35 np0005466030 nova_compute[230518]: 2025-10-02 12:27:35.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.512 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c73e274a-abce-4830-a000-268984d884f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.514 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.pid.haproxy
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID eefd67eb-b4b6-4162-bbdd-0cce7dbdb491
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:27:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.516 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'env', 'PROCESS_TAG=haproxy-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:27:35 np0005466030 podman[255843]: 2025-10-02 12:27:35.891452478 +0000 UTC m=+0.055643412 container create 3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:27:35 np0005466030 NetworkManager[44960]: <info>  [1759408055.9152] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Oct  2 08:27:35 np0005466030 nova_compute[230518]: 2025-10-02 12:27:35.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:35 np0005466030 NetworkManager[44960]: <info>  [1759408055.9160] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Oct  2 08:27:35 np0005466030 podman[255843]: 2025-10-02 12:27:35.868389153 +0000 UTC m=+0.032580107 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:27:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:36.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:27:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:36.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:27:36 np0005466030 systemd[1]: Started libpod-conmon-3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd.scope.
Oct  2 08:27:36 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:27:36 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/464fc771c237ea3199774af778ab94d5f0cccedca549bc0b6bbf75459dd17206/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:36Z|00274|binding|INFO|Releasing lport 7f4ed3f3-7aae-4781-9951-b5f99eb58474 from this chassis (sb_readonly=0)
Oct  2 08:27:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:36Z|00275|binding|INFO|Releasing lport 4a1c64ee-2e43-4924-ad64-0ba8b656d152 from this chassis (sb_readonly=0)
Oct  2 08:27:36 np0005466030 podman[255843]: 2025-10-02 12:27:36.091815323 +0000 UTC m=+0.256006307 container init 3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 08:27:36 np0005466030 podman[255843]: 2025-10-02 12:27:36.098594846 +0000 UTC m=+0.262785800 container start 3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:36 np0005466030 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[255859]: [NOTICE]   (255867) : New worker (255870) forked
Oct  2 08:27:36 np0005466030 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[255859]: [NOTICE]   (255867) : Loading success.
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.226 2 DEBUG nova.compute.manager [req-9248dbfa-c6c3-4bc5-87e1-e5616db70b57 req-6ea502ba-1a1e-486c-90ee-818a0aa01a15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Received event network-vif-plugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.226 2 DEBUG oslo_concurrency.lockutils [req-9248dbfa-c6c3-4bc5-87e1-e5616db70b57 req-6ea502ba-1a1e-486c-90ee-818a0aa01a15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "be855518-90af-4fa9-b969-4a1579934010-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.227 2 DEBUG oslo_concurrency.lockutils [req-9248dbfa-c6c3-4bc5-87e1-e5616db70b57 req-6ea502ba-1a1e-486c-90ee-818a0aa01a15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.227 2 DEBUG oslo_concurrency.lockutils [req-9248dbfa-c6c3-4bc5-87e1-e5616db70b57 req-6ea502ba-1a1e-486c-90ee-818a0aa01a15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.227 2 DEBUG nova.compute.manager [req-9248dbfa-c6c3-4bc5-87e1-e5616db70b57 req-6ea502ba-1a1e-486c-90ee-818a0aa01a15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Processing event network-vif-plugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.555 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408056.5545697, be855518-90af-4fa9-b969-4a1579934010 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.555 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] VM Started (Lifecycle Event)#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.557 2 DEBUG nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.561 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.564 2 INFO nova.virt.libvirt.driver [-] [instance: be855518-90af-4fa9-b969-4a1579934010] Instance spawned successfully.#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.565 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.587 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.591 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.623 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.624 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.624 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.625 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.625 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.626 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.635 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.635 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408056.554747, be855518-90af-4fa9-b969-4a1579934010 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.635 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.715 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.720 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408056.5611575, be855518-90af-4fa9-b969-4a1579934010 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.720 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.790 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.793 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.806 2 INFO nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Took 11.14 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.806 2 DEBUG nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.831 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:27:36 np0005466030 nova_compute[230518]: 2025-10-02 12:27:36.946 2 INFO nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Took 12.20 seconds to build instance.#033[00m
Oct  2 08:27:37 np0005466030 nova_compute[230518]: 2025-10-02 12:27:37.015 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:37 np0005466030 nova_compute[230518]: 2025-10-02 12:27:37.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:37 np0005466030 nova_compute[230518]: 2025-10-02 12:27:37.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:38.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:38.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:38 np0005466030 nova_compute[230518]: 2025-10-02 12:27:38.388 2 DEBUG nova.compute.manager [req-cc6228ce-0558-47e7-96f9-f538db226164 req-ad1eb511-2bc2-4ba9-b32a-64d9c0d0f7fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Received event network-vif-plugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:38 np0005466030 nova_compute[230518]: 2025-10-02 12:27:38.388 2 DEBUG oslo_concurrency.lockutils [req-cc6228ce-0558-47e7-96f9-f538db226164 req-ad1eb511-2bc2-4ba9-b32a-64d9c0d0f7fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "be855518-90af-4fa9-b969-4a1579934010-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:38 np0005466030 nova_compute[230518]: 2025-10-02 12:27:38.389 2 DEBUG oslo_concurrency.lockutils [req-cc6228ce-0558-47e7-96f9-f538db226164 req-ad1eb511-2bc2-4ba9-b32a-64d9c0d0f7fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:38 np0005466030 nova_compute[230518]: 2025-10-02 12:27:38.389 2 DEBUG oslo_concurrency.lockutils [req-cc6228ce-0558-47e7-96f9-f538db226164 req-ad1eb511-2bc2-4ba9-b32a-64d9c0d0f7fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:38 np0005466030 nova_compute[230518]: 2025-10-02 12:27:38.389 2 DEBUG nova.compute.manager [req-cc6228ce-0558-47e7-96f9-f538db226164 req-ad1eb511-2bc2-4ba9-b32a-64d9c0d0f7fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] No waiting events found dispatching network-vif-plugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:27:38 np0005466030 nova_compute[230518]: 2025-10-02 12:27:38.389 2 WARNING nova.compute.manager [req-cc6228ce-0558-47e7-96f9-f538db226164 req-ad1eb511-2bc2-4ba9-b32a-64d9c0d0f7fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Received unexpected event network-vif-plugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.642632) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059642694, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 645, "num_deletes": 251, "total_data_size": 1016623, "memory_usage": 1028952, "flush_reason": "Manual Compaction"}
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059676786, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 670427, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35645, "largest_seqno": 36285, "table_properties": {"data_size": 667281, "index_size": 1054, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7438, "raw_average_key_size": 19, "raw_value_size": 661007, "raw_average_value_size": 1699, "num_data_blocks": 47, "num_entries": 389, "num_filter_entries": 389, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408019, "oldest_key_time": 1759408019, "file_creation_time": 1759408059, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 34209 microseconds, and 2562 cpu microseconds.
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.676850) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 670427 bytes OK
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.676867) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.683638) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.683692) EVENT_LOG_v1 {"time_micros": 1759408059683682, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.683717) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1013057, prev total WAL file size 1028564, number of live WAL files 2.
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.684440) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(654KB)], [66(9479KB)]
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059684485, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 10377211, "oldest_snapshot_seqno": -1}
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 5898 keys, 8522087 bytes, temperature: kUnknown
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059835571, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 8522087, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8483569, "index_size": 22664, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 151666, "raw_average_key_size": 25, "raw_value_size": 8378468, "raw_average_value_size": 1420, "num_data_blocks": 906, "num_entries": 5898, "num_filter_entries": 5898, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759408059, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.835934) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8522087 bytes
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.945438) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 68.6 rd, 56.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.3 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(28.2) write-amplify(12.7) OK, records in: 6409, records dropped: 511 output_compression: NoCompression
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.945488) EVENT_LOG_v1 {"time_micros": 1759408059945462, "job": 40, "event": "compaction_finished", "compaction_time_micros": 151187, "compaction_time_cpu_micros": 19680, "output_level": 6, "num_output_files": 1, "total_output_size": 8522087, "num_input_records": 6409, "num_output_records": 5898, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059945985, "job": 40, "event": "table_file_deletion", "file_number": 68}
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059947612, "job": 40, "event": "table_file_deletion", "file_number": 66}
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.684342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.947707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.947711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.947713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.947714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:27:39 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.947716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:27:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:27:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:40.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:27:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:27:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:40.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:27:40 np0005466030 podman[255880]: 2025-10-02 12:27:40.826719751 +0000 UTC m=+0.075935300 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:27:41 np0005466030 podman[255898]: 2025-10-02 12:27:41.82618228 +0000 UTC m=+0.083804018 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller)
Oct  2 08:27:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:42.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:42 np0005466030 nova_compute[230518]: 2025-10-02 12:27:42.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:27:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:42.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:42 np0005466030 nova_compute[230518]: 2025-10-02 12:27:42.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:42 np0005466030 nova_compute[230518]: 2025-10-02 12:27:42.273 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:42 np0005466030 nova_compute[230518]: 2025-10-02 12:27:42.273 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:42 np0005466030 nova_compute[230518]: 2025-10-02 12:27:42.273 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:42 np0005466030 nova_compute[230518]: 2025-10-02 12:27:42.273 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:27:42 np0005466030 nova_compute[230518]: 2025-10-02 12:27:42.274 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:27:42 np0005466030 nova_compute[230518]: 2025-10-02 12:27:42.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:27:42 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3804469328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:27:42 np0005466030 nova_compute[230518]: 2025-10-02 12:27:42.712 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:27:42 np0005466030 nova_compute[230518]: 2025-10-02 12:27:42.931 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:27:42 np0005466030 nova_compute[230518]: 2025-10-02 12:27:42.931 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:27:42 np0005466030 nova_compute[230518]: 2025-10-02 12:27:42.936 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:27:42 np0005466030 nova_compute[230518]: 2025-10-02 12:27:42.936 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:27:43 np0005466030 nova_compute[230518]: 2025-10-02 12:27:43.173 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:27:43 np0005466030 nova_compute[230518]: 2025-10-02 12:27:43.175 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4255MB free_disk=20.880069732666016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  2 08:27:43 np0005466030 nova_compute[230518]: 2025-10-02 12:27:43.175 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:27:43 np0005466030 nova_compute[230518]: 2025-10-02 12:27:43.176 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:27:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:43 np0005466030 nova_compute[230518]: 2025-10-02 12:27:43.366 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 78888fa8-1051-485d-9e51-feaec2648c8f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 08:27:43 np0005466030 nova_compute[230518]: 2025-10-02 12:27:43.367 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance be855518-90af-4fa9-b969-4a1579934010 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 08:27:43 np0005466030 nova_compute[230518]: 2025-10-02 12:27:43.368 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  2 08:27:43 np0005466030 nova_compute[230518]: 2025-10-02 12:27:43.369 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  2 08:27:43 np0005466030 nova_compute[230518]: 2025-10-02 12:27:43.439 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:27:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:27:43 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2161945545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:27:43 np0005466030 nova_compute[230518]: 2025-10-02 12:27:43.911 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:27:43 np0005466030 nova_compute[230518]: 2025-10-02 12:27:43.917 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:27:44 np0005466030 nova_compute[230518]: 2025-10-02 12:27:44.006 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:27:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:44.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:44.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:44 np0005466030 nova_compute[230518]: 2025-10-02 12:27:44.239 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 08:27:44 np0005466030 nova_compute[230518]: 2025-10-02 12:27:44.239 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:27:45 np0005466030 nova_compute[230518]: 2025-10-02 12:27:45.234 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:27:45 np0005466030 nova_compute[230518]: 2025-10-02 12:27:45.235 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:27:45 np0005466030 nova_compute[230518]: 2025-10-02 12:27:45.236 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:27:45 np0005466030 nova_compute[230518]: 2025-10-02 12:27:45.821 2 DEBUG nova.compute.manager [req-e52a75ca-c182-4ee5-9e59-0c99b422076b req-ab6c38ea-e90c-45f4-948b-050469a157c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received event network-changed-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:27:45 np0005466030 nova_compute[230518]: 2025-10-02 12:27:45.823 2 DEBUG nova.compute.manager [req-e52a75ca-c182-4ee5-9e59-0c99b422076b req-ab6c38ea-e90c-45f4-948b-050469a157c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Refreshing instance network info cache due to event network-changed-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:27:45 np0005466030 nova_compute[230518]: 2025-10-02 12:27:45.823 2 DEBUG oslo_concurrency.lockutils [req-e52a75ca-c182-4ee5-9e59-0c99b422076b req-ab6c38ea-e90c-45f4-948b-050469a157c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:27:45 np0005466030 nova_compute[230518]: 2025-10-02 12:27:45.824 2 DEBUG oslo_concurrency.lockutils [req-e52a75ca-c182-4ee5-9e59-0c99b422076b req-ab6c38ea-e90c-45f4-948b-050469a157c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:27:45 np0005466030 nova_compute[230518]: 2025-10-02 12:27:45.825 2 DEBUG nova.network.neutron [req-e52a75ca-c182-4ee5-9e59-0c99b422076b req-ab6c38ea-e90c-45f4-948b-050469a157c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Refreshing network info cache for port 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:27:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:46.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:27:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:46.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:27:47 np0005466030 nova_compute[230518]: 2025-10-02 12:27:47.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:27:47 np0005466030 nova_compute[230518]: 2025-10-02 12:27:47.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:47 np0005466030 nova_compute[230518]: 2025-10-02 12:27:47.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:47.628 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:27:47 np0005466030 nova_compute[230518]: 2025-10-02 12:27:47.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:47.629 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:27:47 np0005466030 nova_compute[230518]: 2025-10-02 12:27:47.819 2 DEBUG nova.network.neutron [req-e52a75ca-c182-4ee5-9e59-0c99b422076b req-ab6c38ea-e90c-45f4-948b-050469a157c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updated VIF entry in instance network info cache for port 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:27:47 np0005466030 nova_compute[230518]: 2025-10-02 12:27:47.820 2 DEBUG nova.network.neutron [req-e52a75ca-c182-4ee5-9e59-0c99b422076b req-ab6c38ea-e90c-45f4-948b-050469a157c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updating instance_info_cache with network_info: [{"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:27:47 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:47Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:db:de:41 10.100.0.8
Oct  2 08:27:47 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:47Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:db:de:41 10.100.0.8
Oct  2 08:27:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:48.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:48 np0005466030 nova_compute[230518]: 2025-10-02 12:27:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:27:48 np0005466030 nova_compute[230518]: 2025-10-02 12:27:48.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 08:27:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:48.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:49 np0005466030 nova_compute[230518]: 2025-10-02 12:27:49.086 2 DEBUG oslo_concurrency.lockutils [req-e52a75ca-c182-4ee5-9e59-0c99b422076b req-ab6c38ea-e90c-45f4-948b-050469a157c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:27:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:27:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:50.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:27:50 np0005466030 nova_compute[230518]: 2025-10-02 12:27:50.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:27:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:50.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:50 np0005466030 nova_compute[230518]: 2025-10-02 12:27:50.475 2 DEBUG nova.compute.manager [None req-f41481db-a81c-4e3a-b383-0b066ac56f0b eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:27:50 np0005466030 nova_compute[230518]: 2025-10-02 12:27:50.583 2 INFO nova.compute.manager [None req-f41481db-a81c-4e3a-b383-0b066ac56f0b eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] instance snapshotting
Oct  2 08:27:50 np0005466030 podman[255971]: 2025-10-02 12:27:50.866206143 +0000 UTC m=+0.107817284 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:27:50 np0005466030 podman[255970]: 2025-10-02 12:27:50.872718298 +0000 UTC m=+0.118837231 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:27:51 np0005466030 nova_compute[230518]: 2025-10-02 12:27:51.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:27:51 np0005466030 nova_compute[230518]: 2025-10-02 12:27:51.382 2 WARNING nova.compute.manager [None req-f41481db-a81c-4e3a-b383-0b066ac56f0b eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Image not found during snapshot: nova.exception.ImageNotFound: Image a1e5194e-ee54-41db-883e-1f37efee5068 could not be found.
Oct  2 08:27:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:52.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:52 np0005466030 nova_compute[230518]: 2025-10-02 12:27:52.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:27:52 np0005466030 nova_compute[230518]: 2025-10-02 12:27:52.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:27:52 np0005466030 nova_compute[230518]: 2025-10-02 12:27:52.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 08:27:52 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 08:27:52 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 08:27:52 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:27:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:52.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:52 np0005466030 nova_compute[230518]: 2025-10-02 12:27:52.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:52 np0005466030 nova_compute[230518]: 2025-10-02 12:27:52.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:52 np0005466030 nova_compute[230518]: 2025-10-02 12:27:52.588 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:27:52 np0005466030 nova_compute[230518]: 2025-10-02 12:27:52.588 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:27:52 np0005466030 nova_compute[230518]: 2025-10-02 12:27:52.589 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  2 08:27:52 np0005466030 nova_compute[230518]: 2025-10-02 12:27:52.589 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 78888fa8-1051-485d-9e51-feaec2648c8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:27:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:27:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 08:27:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:27:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:27:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:27:53 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:53Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d4:5c:32 10.100.0.14
Oct  2 08:27:53 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:53Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d4:5c:32 10.100.0.14
Oct  2 08:27:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:54.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:27:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:54.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:27:54 np0005466030 nova_compute[230518]: 2025-10-02 12:27:54.765 2 DEBUG oslo_concurrency.lockutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "be855518-90af-4fa9-b969-4a1579934010" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:27:54 np0005466030 nova_compute[230518]: 2025-10-02 12:27:54.765 2 DEBUG oslo_concurrency.lockutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:27:54 np0005466030 nova_compute[230518]: 2025-10-02 12:27:54.766 2 DEBUG oslo_concurrency.lockutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "be855518-90af-4fa9-b969-4a1579934010-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:27:54 np0005466030 nova_compute[230518]: 2025-10-02 12:27:54.766 2 DEBUG oslo_concurrency.lockutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:27:54 np0005466030 nova_compute[230518]: 2025-10-02 12:27:54.766 2 DEBUG oslo_concurrency.lockutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:27:54 np0005466030 nova_compute[230518]: 2025-10-02 12:27:54.767 2 INFO nova.compute.manager [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Terminating instance
Oct  2 08:27:54 np0005466030 nova_compute[230518]: 2025-10-02 12:27:54.768 2 DEBUG nova.compute.manager [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 08:27:55 np0005466030 kernel: tap5404e3f9-be (unregistering): left promiscuous mode
Oct  2 08:27:55 np0005466030 NetworkManager[44960]: <info>  [1759408075.3162] device (tap5404e3f9-be): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:27:55 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:55Z|00276|binding|INFO|Releasing lport 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 from this chassis (sb_readonly=0)
Oct  2 08:27:55 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:55Z|00277|binding|INFO|Setting lport 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 down in Southbound
Oct  2 08:27:55 np0005466030 nova_compute[230518]: 2025-10-02 12:27:55.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:55 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:55Z|00278|binding|INFO|Removing iface tap5404e3f9-be ovn-installed in OVS
Oct  2 08:27:55 np0005466030 nova_compute[230518]: 2025-10-02 12:27:55.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:55 np0005466030 nova_compute[230518]: 2025-10-02 12:27:55.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:55 np0005466030 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Oct  2 08:27:55 np0005466030 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000003c.scope: Consumed 15.027s CPU time.
Oct  2 08:27:55 np0005466030 systemd-machined[188247]: Machine qemu-31-instance-0000003c terminated.
Oct  2 08:27:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.426 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:5c:32 10.100.0.14'], port_security=['fa:16:3e:d4:5c:32 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'be855518-90af-4fa9-b969-4a1579934010', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfd7bec5bd4b4366a96cc55cfe95fcc9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1274568d-0664-4745-a1c4-36fd447c1a9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f6f5554-f9ad-4301-a295-af1afef2d045, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=5404e3f9-be33-4f3a-b227-2fa3944c6bb7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:27:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.427 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 in datapath eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 unbound from our chassis
Oct  2 08:27:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.429 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  2 08:27:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.430 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[64a29388-2582-403e-9208-b064603b7620]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:27:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.431 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 namespace which is not needed anymore
Oct  2 08:27:55 np0005466030 nova_compute[230518]: 2025-10-02 12:27:55.613 2 INFO nova.virt.libvirt.driver [-] [instance: be855518-90af-4fa9-b969-4a1579934010] Instance destroyed successfully.
Oct  2 08:27:55 np0005466030 nova_compute[230518]: 2025-10-02 12:27:55.613 2 DEBUG nova.objects.instance [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lazy-loading 'resources' on Instance uuid be855518-90af-4fa9-b969-4a1579934010 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:27:55 np0005466030 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[255859]: [NOTICE]   (255867) : haproxy version is 2.8.14-c23fe91
Oct  2 08:27:55 np0005466030 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[255859]: [NOTICE]   (255867) : path to executable is /usr/sbin/haproxy
Oct  2 08:27:55 np0005466030 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[255859]: [WARNING]  (255867) : Exiting Master process...
Oct  2 08:27:55 np0005466030 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[255859]: [ALERT]    (255867) : Current worker (255870) exited with code 143 (Terminated)
Oct  2 08:27:55 np0005466030 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[255859]: [WARNING]  (255867) : All workers exited. Exiting... (0)
Oct  2 08:27:55 np0005466030 systemd[1]: libpod-3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd.scope: Deactivated successfully.
Oct  2 08:27:55 np0005466030 podman[256164]: 2025-10-02 12:27:55.643141294 +0000 UTC m=+0.074488765 container died 3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:27:55 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd-userdata-shm.mount: Deactivated successfully.
Oct  2 08:27:55 np0005466030 systemd[1]: var-lib-containers-storage-overlay-464fc771c237ea3199774af778ab94d5f0cccedca549bc0b6bbf75459dd17206-merged.mount: Deactivated successfully.
Oct  2 08:27:55 np0005466030 podman[256164]: 2025-10-02 12:27:55.685527938 +0000 UTC m=+0.116875419 container cleanup 3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:27:55 np0005466030 nova_compute[230518]: 2025-10-02 12:27:55.685 2 DEBUG nova.compute.manager [req-b5e14939-4ae9-4bce-a3e4-c3b7ca81ffef req-66d2d073-39e3-4b29-a146-aad4e6393a3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received event network-changed-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:27:55 np0005466030 nova_compute[230518]: 2025-10-02 12:27:55.685 2 DEBUG nova.compute.manager [req-b5e14939-4ae9-4bce-a3e4-c3b7ca81ffef req-66d2d073-39e3-4b29-a146-aad4e6393a3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Refreshing instance network info cache due to event network-changed-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:27:55 np0005466030 nova_compute[230518]: 2025-10-02 12:27:55.686 2 DEBUG oslo_concurrency.lockutils [req-b5e14939-4ae9-4bce-a3e4-c3b7ca81ffef req-66d2d073-39e3-4b29-a146-aad4e6393a3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:27:55 np0005466030 systemd[1]: libpod-conmon-3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd.scope: Deactivated successfully.
Oct  2 08:27:55 np0005466030 nova_compute[230518]: 2025-10-02 12:27:55.757 2 DEBUG nova.virt.libvirt.vif [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:27:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1210201217',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1210201217',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1210201217',id=60,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:27:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bfd7bec5bd4b4366a96cc55cfe95fcc9',ramdisk_id='',reservation_id='r-yyo20tgh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_
vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-883313902',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-883313902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:27:51Z,user_data=None,user_id='eff0431e92464c78b780c8365e6e920c',uuid=be855518-90af-4fa9-b969-4a1579934010,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct  2 08:27:55 np0005466030 nova_compute[230518]: 2025-10-02 12:27:55.757 2 DEBUG nova.network.os_vif_util [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converting VIF {"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 08:27:55 np0005466030 nova_compute[230518]: 2025-10-02 12:27:55.758 2 DEBUG nova.network.os_vif_util [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:5c:32,bridge_name='br-int',has_traffic_filtering=True,id=5404e3f9-be33-4f3a-b227-2fa3944c6bb7,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5404e3f9-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:27:55 np0005466030 nova_compute[230518]: 2025-10-02 12:27:55.758 2 DEBUG os_vif [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:5c:32,bridge_name='br-int',has_traffic_filtering=True,id=5404e3f9-be33-4f3a-b227-2fa3944c6bb7,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5404e3f9-be') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  2 08:27:55 np0005466030 nova_compute[230518]: 2025-10-02 12:27:55.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:55 np0005466030 nova_compute[230518]: 2025-10-02 12:27:55.761 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5404e3f9-be, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:27:55 np0005466030 nova_compute[230518]: 2025-10-02 12:27:55.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:55 np0005466030 nova_compute[230518]: 2025-10-02 12:27:55.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:55 np0005466030 nova_compute[230518]: 2025-10-02 12:27:55.768 2 INFO os_vif [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:5c:32,bridge_name='br-int',has_traffic_filtering=True,id=5404e3f9-be33-4f3a-b227-2fa3944c6bb7,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5404e3f9-be')
Oct  2 08:27:55 np0005466030 podman[256205]: 2025-10-02 12:27:55.769648895 +0000 UTC m=+0.055551709 container remove 3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 08:27:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.781 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[64bcc710-d2b5-4eb9-a3ee-773e54d84c61]: (4, ('Thu Oct  2 12:27:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 (3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd)\n3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd\nThu Oct  2 12:27:55 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 (3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd)\n3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:27:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.783 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[498325bd-c73a-4c7f-ba05-5e048006c11a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:27:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.784 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeefd67eb-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:27:55 np0005466030 kernel: tapeefd67eb-b0: left promiscuous mode
Oct  2 08:27:55 np0005466030 nova_compute[230518]: 2025-10-02 12:27:55.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:55 np0005466030 nova_compute[230518]: 2025-10-02 12:27:55.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.819 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[854409f4-85e6-480d-b282-c255b0ad9982]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:27:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.860 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fb630631-6837-4b50-9960-c5808482fad3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:27:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.862 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[27dec628-9002-4070-bfdd-353181e8444e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:27:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.886 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6a662e4e-8d04-4fec-a98b-d014e443dcc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591881, 'reachable_time': 42569, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256238, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:27:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.889 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct  2 08:27:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.889 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9ccdd6-bd9a-431e-9f2a-8d87cd716cfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:27:55 np0005466030 systemd[1]: run-netns-ovnmeta\x2deefd67eb\x2db4b6\x2d4162\x2dbbdd\x2d0cce7dbdb491.mount: Deactivated successfully.
Oct  2 08:27:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:56.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:27:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:56.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:27:56 np0005466030 nova_compute[230518]: 2025-10-02 12:27:56.375 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updating instance_info_cache with network_info: [{"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:27:56 np0005466030 nova_compute[230518]: 2025-10-02 12:27:56.740 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:27:56 np0005466030 nova_compute[230518]: 2025-10-02 12:27:56.740 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  2 08:27:56 np0005466030 nova_compute[230518]: 2025-10-02 12:27:56.741 2 DEBUG oslo_concurrency.lockutils [req-b5e14939-4ae9-4bce-a3e4-c3b7ca81ffef req-66d2d073-39e3-4b29-a146-aad4e6393a3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:27:56 np0005466030 nova_compute[230518]: 2025-10-02 12:27:56.741 2 DEBUG nova.network.neutron [req-b5e14939-4ae9-4bce-a3e4-c3b7ca81ffef req-66d2d073-39e3-4b29-a146-aad4e6393a3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Refreshing network info cache for port 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:27:56 np0005466030 nova_compute[230518]: 2025-10-02 12:27:56.742 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:27:57 np0005466030 nova_compute[230518]: 2025-10-02 12:27:57.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:27:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:57.631 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:27:57 np0005466030 nova_compute[230518]: 2025-10-02 12:27:57.893 2 DEBUG oslo_concurrency.lockutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "78888fa8-1051-485d-9e51-feaec2648c8f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:57 np0005466030 nova_compute[230518]: 2025-10-02 12:27:57.893 2 DEBUG oslo_concurrency.lockutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:57 np0005466030 nova_compute[230518]: 2025-10-02 12:27:57.893 2 DEBUG oslo_concurrency.lockutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:57 np0005466030 nova_compute[230518]: 2025-10-02 12:27:57.893 2 DEBUG oslo_concurrency.lockutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:57 np0005466030 nova_compute[230518]: 2025-10-02 12:27:57.894 2 DEBUG oslo_concurrency.lockutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
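The oslo_concurrency.lockutils lines above follow a fixed "Acquiring / acquired :: waited / released :: held" pattern. A minimal stdlib sketch for pulling lock names and timings out of such lines (the regex and helper name are assumptions for illustration, not part of oslo.concurrency):

```python
import re

# Matches oslo_concurrency.lockutils messages of the form:
#   Lock "<name>" acquired by "<func>" :: waited 0.000s
#   Lock "<name>" "released" by "<func>" :: held 0.000s
LOCK_RE = re.compile(
    r'Lock "(?P<name>[^"]+)" (?:acquired|"released") by '
    r'"(?P<func>[^"]+)" :: (?P<kind>waited|held) (?P<secs>[\d.]+)s'
)

def parse_lock_line(line):
    """Return (lock_name, function, kind, seconds) or None if no match."""
    m = LOCK_RE.search(line)
    if not m:
        return None
    return (m.group("name"), m.group("func"), m.group("kind"),
            float(m.group("secs")))

sample = ('Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" acquired by '
          '"nova.compute.manager.InstanceEvents.clear_events_for_instance.'
          '<locals>._clear_events" :: waited 0.000s')
print(parse_lock_line(sample))
```

Summing the `held` values per lock name over a whole log window is a quick way to spot locks that serialize the compute manager.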
Oct  2 08:27:57 np0005466030 nova_compute[230518]: 2025-10-02 12:27:57.895 2 INFO nova.compute.manager [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Terminating instance#033[00m
Oct  2 08:27:57 np0005466030 nova_compute[230518]: 2025-10-02 12:27:57.895 2 DEBUG nova.compute.manager [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:27:58 np0005466030 kernel: tap4b915a8b-b3 (unregistering): left promiscuous mode
Oct  2 08:27:58 np0005466030 NetworkManager[44960]: <info>  [1759408078.0438] device (tap4b915a8b-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:27:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:27:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:58.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
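The radosgw "beast" access lines above carry client, user, timestamp, request, HTTP status, and latency in a fixed layout. A small stdlib sketch for splitting one of these lines (the field names and regex are assumptions based on the lines shown here, not a documented radosgw format):

```python
import re

# Matches radosgw beast access lines such as:
#   beast: 0x7f...: 192.168.122.102 - anonymous [02/Oct/2025:12:27:58.047 +0000]
#   "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)

def parse_beast(line):
    """Return a dict of access-log fields, or None if the line doesn't match."""
    m = BEAST_RE.search(line)
    if not m:
        return None
    d = m.groupdict()
    d["status"] = int(d["status"])
    d["latency"] = float(d["latency"])
    return d

sample = ('beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous '
          '[02/Oct/2025:12:27:58.047 +0000] "HEAD / HTTP/1.0" 200 0 '
          '- - - latency=0.000999991s')
info = parse_beast(sample)
print(info["client"], info["status"], info["latency"])
```

The repeated anonymous `HEAD /` requests with status 200 in this log are consistent with a load-balancer health check rather than user traffic.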
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:58 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:58Z|00279|binding|INFO|Releasing lport 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e from this chassis (sb_readonly=0)
Oct  2 08:27:58 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:58Z|00280|binding|INFO|Setting lport 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e down in Southbound
Oct  2 08:27:58 np0005466030 ovn_controller[129257]: 2025-10-02T12:27:58Z|00281|binding|INFO|Removing iface tap4b915a8b-b3 ovn-installed in OVS
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:58.078 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:de:41 10.100.0.8'], port_security=['fa:16:3e:db:de:41 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '78888fa8-1051-485d-9e51-feaec2648c8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c20ce9073494490588607984318667f5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '389e89f9-e63f-4b21-acd4-c43535d1012a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4006d2de-ae16-44cd-90c1-7beb9f87e38f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=4b915a8b-b3f4-47fe-bb5a-1e712c3d182e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:27:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:58.079 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e in datapath a920a635-8cc8-4478-b2dc-4d6329a5abef unbound from our chassis#033[00m
Oct  2 08:27:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:58.081 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a920a635-8cc8-4478-b2dc-4d6329a5abef, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:27:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:58.082 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e3e0ca-4ab2-491f-ada0-75a4a545fae5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:58.082 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef namespace which is not needed anymore#033[00m
Oct  2 08:27:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:27:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:27:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:58.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:27:58 np0005466030 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Oct  2 08:27:58 np0005466030 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000003b.scope: Consumed 13.098s CPU time.
Oct  2 08:27:58 np0005466030 systemd-machined[188247]: Machine qemu-30-instance-0000003b terminated.
Oct  2 08:27:58 np0005466030 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[255589]: [NOTICE]   (255593) : haproxy version is 2.8.14-c23fe91
Oct  2 08:27:58 np0005466030 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[255589]: [NOTICE]   (255593) : path to executable is /usr/sbin/haproxy
Oct  2 08:27:58 np0005466030 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[255589]: [WARNING]  (255593) : Exiting Master process...
Oct  2 08:27:58 np0005466030 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[255589]: [ALERT]    (255593) : Current worker (255595) exited with code 143 (Terminated)
Oct  2 08:27:58 np0005466030 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[255589]: [WARNING]  (255593) : All workers exited. Exiting... (0)
Oct  2 08:27:58 np0005466030 systemd[1]: libpod-15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659.scope: Deactivated successfully.
Oct  2 08:27:58 np0005466030 podman[256262]: 2025-10-02 12:27:58.263804896 +0000 UTC m=+0.098084348 container died 15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:27:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:27:58 np0005466030 NetworkManager[44960]: <info>  [1759408078.3149] manager: (tap4b915a8b-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.333 2 INFO nova.virt.libvirt.driver [-] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Instance destroyed successfully.#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.335 2 DEBUG nova.objects.instance [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lazy-loading 'resources' on Instance uuid 78888fa8-1051-485d-9e51-feaec2648c8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.422 2 DEBUG nova.virt.libvirt.vif [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:27:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-232208417',display_name='tempest-FloatingIPsAssociationTestJSON-server-232208417',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-232208417',id=59,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:27:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c20ce9073494490588607984318667f5',ramdisk_id='',reservation_id='r-xvp3u8r5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-875651151',owner_user_name='tempest-FloatingIPsAssociationTestJSON-875651151-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:27:31Z,user_data=None,user_id='2b6687fbfb1f484ba99ef93bbb4ffa7e',uuid=78888fa8-1051-485d-9e51-feaec2648c8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.423 2 DEBUG nova.network.os_vif_util [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Converting VIF {"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.423 2 DEBUG nova.network.os_vif_util [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:db:de:41,bridge_name='br-int',has_traffic_filtering=True,id=4b915a8b-b3f4-47fe-bb5a-1e712c3d182e,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b915a8b-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.423 2 DEBUG os_vif [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:de:41,bridge_name='br-int',has_traffic_filtering=True,id=4b915a8b-b3f4-47fe-bb5a-1e712c3d182e,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b915a8b-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.425 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b915a8b-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.430 2 INFO os_vif [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:de:41,bridge_name='br-int',has_traffic_filtering=True,id=4b915a8b-b3f4-47fe-bb5a-1e712c3d182e,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b915a8b-b3')#033[00m
Oct  2 08:27:58 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659-userdata-shm.mount: Deactivated successfully.
Oct  2 08:27:58 np0005466030 systemd[1]: var-lib-containers-storage-overlay-8348c266641d7f0169a9362cbaa2a00eb443af6e6b20ca0bc7f72d3ff7a284ad-merged.mount: Deactivated successfully.
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.587 2 DEBUG nova.compute.manager [req-106a1898-7c2e-4e70-b535-ae3d6c4f607c req-4f4e038b-8e74-400d-8e56-3352a6ee8526 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received event network-vif-unplugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.587 2 DEBUG oslo_concurrency.lockutils [req-106a1898-7c2e-4e70-b535-ae3d6c4f607c req-4f4e038b-8e74-400d-8e56-3352a6ee8526 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.588 2 DEBUG oslo_concurrency.lockutils [req-106a1898-7c2e-4e70-b535-ae3d6c4f607c req-4f4e038b-8e74-400d-8e56-3352a6ee8526 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.588 2 DEBUG oslo_concurrency.lockutils [req-106a1898-7c2e-4e70-b535-ae3d6c4f607c req-4f4e038b-8e74-400d-8e56-3352a6ee8526 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.589 2 DEBUG nova.compute.manager [req-106a1898-7c2e-4e70-b535-ae3d6c4f607c req-4f4e038b-8e74-400d-8e56-3352a6ee8526 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] No waiting events found dispatching network-vif-unplugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.589 2 DEBUG nova.compute.manager [req-106a1898-7c2e-4e70-b535-ae3d6c4f607c req-4f4e038b-8e74-400d-8e56-3352a6ee8526 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received event network-vif-unplugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:27:58 np0005466030 podman[256262]: 2025-10-02 12:27:58.653440846 +0000 UTC m=+0.487720288 container cleanup 15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.717 2 DEBUG nova.network.neutron [req-b5e14939-4ae9-4bce-a3e4-c3b7ca81ffef req-66d2d073-39e3-4b29-a146-aad4e6393a3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updated VIF entry in instance network info cache for port 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.718 2 DEBUG nova.network.neutron [req-b5e14939-4ae9-4bce-a3e4-c3b7ca81ffef req-66d2d073-39e3-4b29-a146-aad4e6393a3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updating instance_info_cache with network_info: [{"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.777 2 DEBUG oslo_concurrency.lockutils [req-b5e14939-4ae9-4bce-a3e4-c3b7ca81ffef req-66d2d073-39e3-4b29-a146-aad4e6393a3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.892 2 DEBUG nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Received event network-vif-unplugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.893 2 DEBUG oslo_concurrency.lockutils [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "be855518-90af-4fa9-b969-4a1579934010-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.893 2 DEBUG oslo_concurrency.lockutils [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.894 2 DEBUG oslo_concurrency.lockutils [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.894 2 DEBUG nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] No waiting events found dispatching network-vif-unplugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.894 2 DEBUG nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Received event network-vif-unplugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.895 2 DEBUG nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Received event network-vif-plugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.895 2 DEBUG oslo_concurrency.lockutils [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "be855518-90af-4fa9-b969-4a1579934010-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.896 2 DEBUG oslo_concurrency.lockutils [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.896 2 DEBUG oslo_concurrency.lockutils [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.896 2 DEBUG nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] No waiting events found dispatching network-vif-plugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:27:58 np0005466030 nova_compute[230518]: 2025-10-02 12:27:58.897 2 WARNING nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Received unexpected event network-vif-plugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 for instance with vm_state active and task_state deleting.#033[00m
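The "Received event network-vif-unplugged/plugged-<port>" lines above tie an instance UUID to a Neutron port UUID; the WARNING fires when a plugged event arrives for an instance already in task_state deleting. A stdlib sketch that tallies these external events per instance (regex and helper name are illustrative assumptions):

```python
import re
from collections import defaultdict

# Matches nova-compute external-event lines like:
#   [instance: <uuid>] Received event network-vif-unplugged-<port-uuid> ...
#   [instance: <uuid>] Received unexpected event network-vif-plugged-<port-uuid> ...
EVENT_RE = re.compile(
    r'\[instance: (?P<instance>[0-9a-f-]{36})\] Received (?:unexpected )?event '
    r'(?P<event>network-vif-(?:un)?plugged)-(?P<port>[0-9a-f-]{36})'
)

def tally_events(lines):
    """Map instance UUID -> list of (event, port) tuples seen in the lines."""
    events = defaultdict(list)
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            events[m.group("instance")].append(
                (m.group("event"), m.group("port")))
    return dict(events)

sample = [
    '[instance: be855518-90af-4fa9-b969-4a1579934010] Received event '
    'network-vif-unplugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 '
    'external_instance_event',
    '[instance: be855518-90af-4fa9-b969-4a1579934010] Received unexpected '
    'event network-vif-plugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 '
    'for instance with vm_state active and task_state deleting.',
]
print(tally_events(sample))
```

Grouping events this way makes it easy to confirm that every unplugged event during teardown is matched against the expected port, and to surface the stray plugged events that nova logs as unexpected.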
Oct  2 08:27:59 np0005466030 podman[256323]: 2025-10-02 12:27:59.044405978 +0000 UTC m=+0.352057588 container remove 15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:27:59 np0005466030 systemd[1]: libpod-conmon-15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659.scope: Deactivated successfully.
Oct  2 08:27:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:59.052 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c3de6a-e04d-4f83-b1a6-2a6de34f0a9c]: (4, ('Thu Oct  2 12:27:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef (15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659)\n15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659\nThu Oct  2 12:27:58 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef (15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659)\n15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:59.053 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d9fff6-f3e5-4525-a921-2429d219dac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:59.054 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa920a635-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:27:59 np0005466030 nova_compute[230518]: 2025-10-02 12:27:59.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:59 np0005466030 kernel: tapa920a635-80: left promiscuous mode
Oct  2 08:27:59 np0005466030 nova_compute[230518]: 2025-10-02 12:27:59.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:27:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:59.071 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b5183e10-4e9d-4955-a20d-2ff934fccb36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:59.103 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[91576ab4-bcf0-4caf-81bd-7235ceaeb96f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:59.105 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[67984301-f547-4da5-8342-721421d4cd8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:59.123 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b99738ef-8e78-4e7a-a31b-510290b3a182]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591321, 'reachable_time': 21112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256341, 'error': None, 'target': 'ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:27:59 np0005466030 systemd[1]: run-netns-ovnmeta\x2da920a635\x2d8cc8\x2d4478\x2db2dc\x2d4d6329a5abef.mount: Deactivated successfully.
Oct  2 08:27:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:59.125 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:27:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:27:59.126 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1c62ce0e-d1d3-4b3e-9e00-efc99a327786]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:00.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:28:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:00.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:28:00 np0005466030 nova_compute[230518]: 2025-10-02 12:28:00.389 2 INFO nova.virt.libvirt.driver [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Deleting instance files /var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f_del#033[00m
Oct  2 08:28:00 np0005466030 nova_compute[230518]: 2025-10-02 12:28:00.389 2 INFO nova.virt.libvirt.driver [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Deletion of /var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f_del complete#033[00m
Oct  2 08:28:00 np0005466030 nova_compute[230518]: 2025-10-02 12:28:00.861 2 DEBUG nova.compute.manager [req-45be8117-f6bd-42dd-9608-27697ca985da req-af85aea0-7104-42a6-829c-3d5d869faf1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received event network-vif-plugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:00 np0005466030 nova_compute[230518]: 2025-10-02 12:28:00.861 2 DEBUG oslo_concurrency.lockutils [req-45be8117-f6bd-42dd-9608-27697ca985da req-af85aea0-7104-42a6-829c-3d5d869faf1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:00 np0005466030 nova_compute[230518]: 2025-10-02 12:28:00.862 2 DEBUG oslo_concurrency.lockutils [req-45be8117-f6bd-42dd-9608-27697ca985da req-af85aea0-7104-42a6-829c-3d5d869faf1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:00 np0005466030 nova_compute[230518]: 2025-10-02 12:28:00.862 2 DEBUG oslo_concurrency.lockutils [req-45be8117-f6bd-42dd-9608-27697ca985da req-af85aea0-7104-42a6-829c-3d5d869faf1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:00 np0005466030 nova_compute[230518]: 2025-10-02 12:28:00.863 2 DEBUG nova.compute.manager [req-45be8117-f6bd-42dd-9608-27697ca985da req-af85aea0-7104-42a6-829c-3d5d869faf1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] No waiting events found dispatching network-vif-plugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:28:00 np0005466030 nova_compute[230518]: 2025-10-02 12:28:00.863 2 WARNING nova.compute.manager [req-45be8117-f6bd-42dd-9608-27697ca985da req-af85aea0-7104-42a6-829c-3d5d869faf1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received unexpected event network-vif-plugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:28:00 np0005466030 nova_compute[230518]: 2025-10-02 12:28:00.872 2 INFO nova.compute.manager [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Took 2.98 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:28:00 np0005466030 nova_compute[230518]: 2025-10-02 12:28:00.873 2 DEBUG oslo.service.loopingcall [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:28:00 np0005466030 nova_compute[230518]: 2025-10-02 12:28:00.873 2 DEBUG nova.compute.manager [-] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:28:00 np0005466030 nova_compute[230518]: 2025-10-02 12:28:00.873 2 DEBUG nova.network.neutron [-] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:28:01 np0005466030 nova_compute[230518]: 2025-10-02 12:28:01.047 2 INFO nova.virt.libvirt.driver [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Deleting instance files /var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010_del#033[00m
Oct  2 08:28:01 np0005466030 nova_compute[230518]: 2025-10-02 12:28:01.048 2 INFO nova.virt.libvirt.driver [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Deletion of /var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010_del complete#033[00m
Oct  2 08:28:01 np0005466030 nova_compute[230518]: 2025-10-02 12:28:01.316 2 INFO nova.compute.manager [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Took 6.55 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:28:01 np0005466030 nova_compute[230518]: 2025-10-02 12:28:01.317 2 DEBUG oslo.service.loopingcall [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:28:01 np0005466030 nova_compute[230518]: 2025-10-02 12:28:01.318 2 DEBUG nova.compute.manager [-] [instance: be855518-90af-4fa9-b969-4a1579934010] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:28:01 np0005466030 nova_compute[230518]: 2025-10-02 12:28:01.318 2 DEBUG nova.network.neutron [-] [instance: be855518-90af-4fa9-b969-4a1579934010] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:28:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:02.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:02.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:02 np0005466030 nova_compute[230518]: 2025-10-02 12:28:02.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:02 np0005466030 nova_compute[230518]: 2025-10-02 12:28:02.791 2 DEBUG nova.network.neutron [-] [instance: be855518-90af-4fa9-b969-4a1579934010] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:28:02 np0005466030 nova_compute[230518]: 2025-10-02 12:28:02.931 2 DEBUG nova.network.neutron [-] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:28:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:03 np0005466030 nova_compute[230518]: 2025-10-02 12:28:03.348 2 INFO nova.compute.manager [-] [instance: be855518-90af-4fa9-b969-4a1579934010] Took 2.03 seconds to deallocate network for instance.#033[00m
Oct  2 08:28:03 np0005466030 nova_compute[230518]: 2025-10-02 12:28:03.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:03 np0005466030 nova_compute[230518]: 2025-10-02 12:28:03.527 2 INFO nova.compute.manager [-] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Took 2.65 seconds to deallocate network for instance.#033[00m
Oct  2 08:28:03 np0005466030 nova_compute[230518]: 2025-10-02 12:28:03.744 2 DEBUG oslo_concurrency.lockutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:03 np0005466030 nova_compute[230518]: 2025-10-02 12:28:03.745 2 DEBUG oslo_concurrency.lockutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:03 np0005466030 nova_compute[230518]: 2025-10-02 12:28:03.911 2 DEBUG oslo_concurrency.lockutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:04.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:04.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:28:04 np0005466030 nova_compute[230518]: 2025-10-02 12:28:04.263 2 DEBUG oslo_concurrency.processutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:04 np0005466030 nova_compute[230518]: 2025-10-02 12:28:04.324 2 DEBUG nova.compute.manager [req-91310769-5507-40a7-a0a3-47ce84f51735 req-7e8e928c-93b3-4f88-a7e0-1736ef594878 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Received event network-vif-deleted-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:04 np0005466030 nova_compute[230518]: 2025-10-02 12:28:04.325 2 DEBUG nova.compute.manager [req-91310769-5507-40a7-a0a3-47ce84f51735 req-7e8e928c-93b3-4f88-a7e0-1736ef594878 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received event network-vif-deleted-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:28:04 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/908879594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:28:04 np0005466030 nova_compute[230518]: 2025-10-02 12:28:04.794 2 DEBUG oslo_concurrency.processutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:04 np0005466030 nova_compute[230518]: 2025-10-02 12:28:04.800 2 DEBUG nova.compute.provider_tree [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:28:04 np0005466030 nova_compute[230518]: 2025-10-02 12:28:04.917 2 DEBUG nova.scheduler.client.report [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:28:05 np0005466030 nova_compute[230518]: 2025-10-02 12:28:05.065 2 DEBUG oslo_concurrency.lockutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:05 np0005466030 nova_compute[230518]: 2025-10-02 12:28:05.068 2 DEBUG oslo_concurrency.lockutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:05 np0005466030 nova_compute[230518]: 2025-10-02 12:28:05.299 2 INFO nova.scheduler.client.report [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Deleted allocations for instance be855518-90af-4fa9-b969-4a1579934010#033[00m
Oct  2 08:28:05 np0005466030 nova_compute[230518]: 2025-10-02 12:28:05.323 2 DEBUG oslo_concurrency.processutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:05 np0005466030 nova_compute[230518]: 2025-10-02 12:28:05.740 2 DEBUG oslo_concurrency.lockutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:28:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4026011726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:28:05 np0005466030 nova_compute[230518]: 2025-10-02 12:28:05.790 2 DEBUG oslo_concurrency.processutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:05 np0005466030 nova_compute[230518]: 2025-10-02 12:28:05.796 2 DEBUG nova.compute.provider_tree [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:28:05 np0005466030 nova_compute[230518]: 2025-10-02 12:28:05.841 2 DEBUG nova.scheduler.client.report [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:28:05 np0005466030 nova_compute[230518]: 2025-10-02 12:28:05.907 2 DEBUG oslo_concurrency.lockutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:05 np0005466030 nova_compute[230518]: 2025-10-02 12:28:05.940 2 INFO nova.scheduler.client.report [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Deleted allocations for instance 78888fa8-1051-485d-9e51-feaec2648c8f#033[00m
Oct  2 08:28:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:28:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:06.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:28:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:06.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:06 np0005466030 nova_compute[230518]: 2025-10-02 12:28:06.172 2 DEBUG oslo_concurrency.lockutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:28:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:28:07 np0005466030 nova_compute[230518]: 2025-10-02 12:28:07.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:08.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:28:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:08.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:08 np0005466030 nova_compute[230518]: 2025-10-02 12:28:08.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:10.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:10.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:10 np0005466030 nova_compute[230518]: 2025-10-02 12:28:10.609 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408075.6075075, be855518-90af-4fa9-b969-4a1579934010 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:10 np0005466030 nova_compute[230518]: 2025-10-02 12:28:10.609 2 INFO nova.compute.manager [-] [instance: be855518-90af-4fa9-b969-4a1579934010] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:28:10 np0005466030 nova_compute[230518]: 2025-10-02 12:28:10.641 2 DEBUG nova.compute.manager [None req-025108b5-c8eb-4204-9d3a-b23b39b20862 - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:11 np0005466030 podman[256438]: 2025-10-02 12:28:11.847705615 +0000 UTC m=+0.085521382 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:28:12 np0005466030 podman[256457]: 2025-10-02 12:28:12.027225143 +0000 UTC m=+0.127253064 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 08:28:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:12.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:12.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:12 np0005466030 nova_compute[230518]: 2025-10-02 12:28:12.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:13 np0005466030 nova_compute[230518]: 2025-10-02 12:28:13.332 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408078.3309495, 78888fa8-1051-485d-9e51-feaec2648c8f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:13 np0005466030 nova_compute[230518]: 2025-10-02 12:28:13.332 2 INFO nova.compute.manager [-] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:28:13 np0005466030 nova_compute[230518]: 2025-10-02 12:28:13.354 2 DEBUG nova.compute.manager [None req-f49e83be-0622-42c4-b558-30dc1d764d1e - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:13 np0005466030 nova_compute[230518]: 2025-10-02 12:28:13.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:14 np0005466030 nova_compute[230518]: 2025-10-02 12:28:14.025 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "e408d787-b02c-4b28-9af7-b7ae07b54538" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:14 np0005466030 nova_compute[230518]: 2025-10-02 12:28:14.026 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:14.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:14.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:14 np0005466030 nova_compute[230518]: 2025-10-02 12:28:14.173 2 DEBUG nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:28:14 np0005466030 nova_compute[230518]: 2025-10-02 12:28:14.265 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:14 np0005466030 nova_compute[230518]: 2025-10-02 12:28:14.266 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:14 np0005466030 nova_compute[230518]: 2025-10-02 12:28:14.273 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:28:14 np0005466030 nova_compute[230518]: 2025-10-02 12:28:14.274 2 INFO nova.compute.claims [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:28:14 np0005466030 nova_compute[230518]: 2025-10-02 12:28:14.435 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:28:14 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/253313661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:28:14 np0005466030 nova_compute[230518]: 2025-10-02 12:28:14.892 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:14 np0005466030 nova_compute[230518]: 2025-10-02 12:28:14.898 2 DEBUG nova.compute.provider_tree [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:28:14 np0005466030 nova_compute[230518]: 2025-10-02 12:28:14.984 2 DEBUG nova.scheduler.client.report [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.165 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.166 2 DEBUG nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.302 2 DEBUG nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.303 2 DEBUG nova.network.neutron [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.407 2 INFO nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.481 2 DEBUG nova.policy [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eff0431e92464c78b780c8365e6e920c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bfd7bec5bd4b4366a96cc55cfe95fcc9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.519 2 DEBUG nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.719 2 DEBUG nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.720 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.720 2 INFO nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Creating image(s)#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.747 2 DEBUG nova.storage.rbd_utils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image e408d787-b02c-4b28-9af7-b7ae07b54538_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.771 2 DEBUG nova.storage.rbd_utils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image e408d787-b02c-4b28-9af7-b7ae07b54538_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.794 2 DEBUG nova.storage.rbd_utils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image e408d787-b02c-4b28-9af7-b7ae07b54538_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.796 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.873 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.874 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.875 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.875 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.955 2 DEBUG nova.storage.rbd_utils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image e408d787-b02c-4b28-9af7-b7ae07b54538_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:15 np0005466030 nova_compute[230518]: 2025-10-02 12:28:15.958 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e408d787-b02c-4b28-9af7-b7ae07b54538_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:16.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:16.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:28:16 np0005466030 nova_compute[230518]: 2025-10-02 12:28:16.568 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e408d787-b02c-4b28-9af7-b7ae07b54538_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:16 np0005466030 nova_compute[230518]: 2025-10-02 12:28:16.654 2 DEBUG nova.storage.rbd_utils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] resizing rbd image e408d787-b02c-4b28-9af7-b7ae07b54538_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:28:16 np0005466030 nova_compute[230518]: 2025-10-02 12:28:16.775 2 DEBUG nova.network.neutron [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Successfully created port: a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:28:17 np0005466030 nova_compute[230518]: 2025-10-02 12:28:17.048 2 DEBUG nova.objects.instance [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lazy-loading 'migration_context' on Instance uuid e408d787-b02c-4b28-9af7-b7ae07b54538 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:17 np0005466030 nova_compute[230518]: 2025-10-02 12:28:17.074 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:28:17 np0005466030 nova_compute[230518]: 2025-10-02 12:28:17.075 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Ensure instance console log exists: /var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:28:17 np0005466030 nova_compute[230518]: 2025-10-02 12:28:17.076 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:17 np0005466030 nova_compute[230518]: 2025-10-02 12:28:17.076 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:17 np0005466030 nova_compute[230518]: 2025-10-02 12:28:17.076 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:17 np0005466030 nova_compute[230518]: 2025-10-02 12:28:17.448 2 DEBUG nova.network.neutron [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Successfully updated port: a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:28:17 np0005466030 nova_compute[230518]: 2025-10-02 12:28:17.464 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "refresh_cache-e408d787-b02c-4b28-9af7-b7ae07b54538" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:28:17 np0005466030 nova_compute[230518]: 2025-10-02 12:28:17.465 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquired lock "refresh_cache-e408d787-b02c-4b28-9af7-b7ae07b54538" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:28:17 np0005466030 nova_compute[230518]: 2025-10-02 12:28:17.466 2 DEBUG nova.network.neutron [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:28:17 np0005466030 nova_compute[230518]: 2025-10-02 12:28:17.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:17 np0005466030 nova_compute[230518]: 2025-10-02 12:28:17.675 2 DEBUG nova.network.neutron [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:28:17 np0005466030 nova_compute[230518]: 2025-10-02 12:28:17.766 2 DEBUG nova.compute.manager [req-eef4ceed-eb00-4704-9cac-2edd50055d5d req-c551859b-767c-46e8-b49b-230b0bea6726 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Received event network-changed-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:17 np0005466030 nova_compute[230518]: 2025-10-02 12:28:17.766 2 DEBUG nova.compute.manager [req-eef4ceed-eb00-4704-9cac-2edd50055d5d req-c551859b-767c-46e8-b49b-230b0bea6726 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Refreshing instance network info cache due to event network-changed-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:28:17 np0005466030 nova_compute[230518]: 2025-10-02 12:28:17.766 2 DEBUG oslo_concurrency.lockutils [req-eef4ceed-eb00-4704-9cac-2edd50055d5d req-c551859b-767c-46e8-b49b-230b0bea6726 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e408d787-b02c-4b28-9af7-b7ae07b54538" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:28:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:18.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:18.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:28:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:18 np0005466030 nova_compute[230518]: 2025-10-02 12:28:18.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.063 2 DEBUG nova.network.neutron [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Updating instance_info_cache with network_info: [{"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.197 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Releasing lock "refresh_cache-e408d787-b02c-4b28-9af7-b7ae07b54538" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.198 2 DEBUG nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Instance network_info: |[{"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.198 2 DEBUG oslo_concurrency.lockutils [req-eef4ceed-eb00-4704-9cac-2edd50055d5d req-c551859b-767c-46e8-b49b-230b0bea6726 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e408d787-b02c-4b28-9af7-b7ae07b54538" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.198 2 DEBUG nova.network.neutron [req-eef4ceed-eb00-4704-9cac-2edd50055d5d req-c551859b-767c-46e8-b49b-230b0bea6726 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Refreshing network info cache for port a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.201 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Start _get_guest_xml network_info=[{"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.206 2 WARNING nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.210 2 DEBUG nova.virt.libvirt.host [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.211 2 DEBUG nova.virt.libvirt.host [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.214 2 DEBUG nova.virt.libvirt.host [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.215 2 DEBUG nova.virt.libvirt.host [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.216 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.216 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.216 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.216 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.217 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.217 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.217 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.217 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.217 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.218 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.218 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.218 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.220 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:28:19 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3400719442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.696 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.721 2 DEBUG nova.storage.rbd_utils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image e408d787-b02c-4b28-9af7-b7ae07b54538_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:19 np0005466030 nova_compute[230518]: 2025-10-02 12:28:19.725 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:20.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:28:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:20.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:28:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:28:20 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/569439342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.283 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.286 2 DEBUG nova.virt.libvirt.vif [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1894199701',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1894199701',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1894199701',id=62,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfd7bec5bd4b4366a96cc55cfe95fcc9',ramdisk_id='',reservation_id='r-qc9filwg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-883313902',owner_user
_name='tempest-ImagesOneServerNegativeTestJSON-883313902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:28:15Z,user_data=None,user_id='eff0431e92464c78b780c8365e6e920c',uuid=e408d787-b02c-4b28-9af7-b7ae07b54538,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.287 2 DEBUG nova.network.os_vif_util [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converting VIF {"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.289 2 DEBUG nova.network.os_vif_util [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:17:b5,bridge_name='br-int',has_traffic_filtering=True,id=a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ec450c-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.291 2 DEBUG nova.objects.instance [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid e408d787-b02c-4b28-9af7-b7ae07b54538 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.316 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:28:20 np0005466030 nova_compute[230518]:  <uuid>e408d787-b02c-4b28-9af7-b7ae07b54538</uuid>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:  <name>instance-0000003e</name>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1894199701</nova:name>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:28:19</nova:creationTime>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:28:20 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:        <nova:user uuid="eff0431e92464c78b780c8365e6e920c">tempest-ImagesOneServerNegativeTestJSON-883313902-project-member</nova:user>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:        <nova:project uuid="bfd7bec5bd4b4366a96cc55cfe95fcc9">tempest-ImagesOneServerNegativeTestJSON-883313902</nova:project>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:        <nova:port uuid="a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6">
Oct  2 08:28:20 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <entry name="serial">e408d787-b02c-4b28-9af7-b7ae07b54538</entry>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <entry name="uuid">e408d787-b02c-4b28-9af7-b7ae07b54538</entry>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/e408d787-b02c-4b28-9af7-b7ae07b54538_disk">
Oct  2 08:28:20 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:28:20 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/e408d787-b02c-4b28-9af7-b7ae07b54538_disk.config">
Oct  2 08:28:20 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:28:20 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:63:17:b5"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <target dev="tapa3ec450c-ad"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538/console.log" append="off"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:28:20 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:28:20 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:28:20 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:28:20 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.318 2 DEBUG nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Preparing to wait for external event network-vif-plugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.318 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.318 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.319 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.319 2 DEBUG nova.virt.libvirt.vif [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1894199701',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1894199701',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1894199701',id=62,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfd7bec5bd4b4366a96cc55cfe95fcc9',ramdisk_id='',reservation_id='r-qc9filwg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-883313902',
owner_user_name='tempest-ImagesOneServerNegativeTestJSON-883313902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:28:15Z,user_data=None,user_id='eff0431e92464c78b780c8365e6e920c',uuid=e408d787-b02c-4b28-9af7-b7ae07b54538,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.320 2 DEBUG nova.network.os_vif_util [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converting VIF {"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.320 2 DEBUG nova.network.os_vif_util [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:17:b5,bridge_name='br-int',has_traffic_filtering=True,id=a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ec450c-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.321 2 DEBUG os_vif [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:17:b5,bridge_name='br-int',has_traffic_filtering=True,id=a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ec450c-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.322 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.324 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.331 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3ec450c-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.332 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3ec450c-ad, col_values=(('external_ids', {'iface-id': 'a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:17:b5', 'vm-uuid': 'e408d787-b02c-4b28-9af7-b7ae07b54538'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:20 np0005466030 NetworkManager[44960]: <info>  [1759408100.3349] manager: (tapa3ec450c-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/130)
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.342 2 INFO os_vif [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:17:b5,bridge_name='br-int',has_traffic_filtering=True,id=a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ec450c-ad')#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.455 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.455 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.456 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] No VIF found with MAC fa:16:3e:63:17:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.456 2 INFO nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Using config drive#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.485 2 DEBUG nova.storage.rbd_utils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image e408d787-b02c-4b28-9af7-b7ae07b54538_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.765 2 DEBUG nova.network.neutron [req-eef4ceed-eb00-4704-9cac-2edd50055d5d req-c551859b-767c-46e8-b49b-230b0bea6726 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Updated VIF entry in instance network info cache for port a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.765 2 DEBUG nova.network.neutron [req-eef4ceed-eb00-4704-9cac-2edd50055d5d req-c551859b-767c-46e8-b49b-230b0bea6726 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Updating instance_info_cache with network_info: [{"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:28:20 np0005466030 nova_compute[230518]: 2025-10-02 12:28:20.797 2 DEBUG oslo_concurrency.lockutils [req-eef4ceed-eb00-4704-9cac-2edd50055d5d req-c551859b-767c-46e8-b49b-230b0bea6726 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e408d787-b02c-4b28-9af7-b7ae07b54538" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:28:21 np0005466030 nova_compute[230518]: 2025-10-02 12:28:21.033 2 INFO nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Creating config drive at /var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538/disk.config#033[00m
Oct  2 08:28:21 np0005466030 nova_compute[230518]: 2025-10-02 12:28:21.043 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9koygc7g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:21 np0005466030 nova_compute[230518]: 2025-10-02 12:28:21.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:21 np0005466030 nova_compute[230518]: 2025-10-02 12:28:21.181 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9koygc7g" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:21 np0005466030 nova_compute[230518]: 2025-10-02 12:28:21.209 2 DEBUG nova.storage.rbd_utils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image e408d787-b02c-4b28-9af7-b7ae07b54538_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:21 np0005466030 nova_compute[230518]: 2025-10-02 12:28:21.212 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538/disk.config e408d787-b02c-4b28-9af7-b7ae07b54538_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:21 np0005466030 nova_compute[230518]: 2025-10-02 12:28:21.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:21 np0005466030 podman[256798]: 2025-10-02 12:28:21.809632325 +0000 UTC m=+0.059803303 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001)
Oct  2 08:28:21 np0005466030 podman[256799]: 2025-10-02 12:28:21.842038255 +0000 UTC m=+0.078058008 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:28:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:22.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:22.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:22 np0005466030 nova_compute[230518]: 2025-10-02 12:28:22.400 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538/disk.config e408d787-b02c-4b28-9af7-b7ae07b54538_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:22 np0005466030 nova_compute[230518]: 2025-10-02 12:28:22.401 2 INFO nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Deleting local config drive /var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538/disk.config because it was imported into RBD.#033[00m
Oct  2 08:28:22 np0005466030 kernel: tapa3ec450c-ad: entered promiscuous mode
Oct  2 08:28:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:28:22Z|00282|binding|INFO|Claiming lport a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 for this chassis.
Oct  2 08:28:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:28:22Z|00283|binding|INFO|a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6: Claiming fa:16:3e:63:17:b5 10.100.0.12
Oct  2 08:28:22 np0005466030 nova_compute[230518]: 2025-10-02 12:28:22.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:22 np0005466030 NetworkManager[44960]: <info>  [1759408102.4765] manager: (tapa3ec450c-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/131)
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.497 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:17:b5 10.100.0.12'], port_security=['fa:16:3e:63:17:b5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e408d787-b02c-4b28-9af7-b7ae07b54538', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfd7bec5bd4b4366a96cc55cfe95fcc9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1274568d-0664-4745-a1c4-36fd447c1a9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f6f5554-f9ad-4301-a295-af1afef2d045, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.500 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 in datapath eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 bound to our chassis#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.503 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eefd67eb-b4b6-4162-bbdd-0cce7dbdb491#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.515 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdc45e6-dfcf-4c1b-b280-ed0de4a05ac5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.516 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeefd67eb-b1 in ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:28:22 np0005466030 systemd-machined[188247]: New machine qemu-32-instance-0000003e.
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.518 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeefd67eb-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.518 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cbeba7be-981a-4c3c-ae25-13e07242071c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.519 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[21dd0a5c-1460-4e9e-b29c-05636749a291]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.533 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[274e6fa6-69db-4ff8-a80b-5d7b3b782b74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:22 np0005466030 nova_compute[230518]: 2025-10-02 12:28:22.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:22 np0005466030 nova_compute[230518]: 2025-10-02 12:28:22.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:28:22Z|00284|binding|INFO|Setting lport a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 ovn-installed in OVS
Oct  2 08:28:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:28:22Z|00285|binding|INFO|Setting lport a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 up in Southbound
Oct  2 08:28:22 np0005466030 systemd[1]: Started Virtual Machine qemu-32-instance-0000003e.
Oct  2 08:28:22 np0005466030 nova_compute[230518]: 2025-10-02 12:28:22.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.553 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c66e799b-9caf-42cc-8e99-a7e0774112f3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:22 np0005466030 systemd-udevd[256853]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:28:22 np0005466030 NetworkManager[44960]: <info>  [1759408102.5734] device (tapa3ec450c-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:28:22 np0005466030 NetworkManager[44960]: <info>  [1759408102.5748] device (tapa3ec450c-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.584 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a29890ea-db36-4657-bccf-93a9dd1274c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:22 np0005466030 NetworkManager[44960]: <info>  [1759408102.5899] manager: (tapeefd67eb-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/132)
Oct  2 08:28:22 np0005466030 systemd-udevd[256857]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.589 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d61af137-d5da-4bcf-98ef-efc53bf9d6f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.621 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d4b781dc-36de-449e-a831-f9c703687db7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.625 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a541e6d8-7014-4ebf-bac5-9b6b8bf451c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:22 np0005466030 NetworkManager[44960]: <info>  [1759408102.6510] device (tapeefd67eb-b0): carrier: link connected
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.656 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[55b05721-5293-4943-ae3d-e7a68f46efad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.676 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b1bb76-0d13-44b6-bd88-5d2101304bbf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeefd67eb-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:db:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596620, 'reachable_time': 24021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256883, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.691 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[daee81dd-da10-4a37-8c10-1f8ade2dde01]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:db93'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 596620, 'tstamp': 596620}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256884, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.714 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ce1268b5-629d-415e-b131-989b6d6649b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeefd67eb-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:db:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596620, 'reachable_time': 24021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256885, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.745 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1603d68a-ede8-4f90-af70-69307878c5f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.807 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[af0e658b-1c64-415b-9e76-e6ca3960d8bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.809 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeefd67eb-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.809 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.809 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeefd67eb-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:22 np0005466030 nova_compute[230518]: 2025-10-02 12:28:22.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:22 np0005466030 NetworkManager[44960]: <info>  [1759408102.8123] manager: (tapeefd67eb-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Oct  2 08:28:22 np0005466030 kernel: tapeefd67eb-b0: entered promiscuous mode
Oct  2 08:28:22 np0005466030 nova_compute[230518]: 2025-10-02 12:28:22.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.815 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeefd67eb-b0, col_values=(('external_ids', {'iface-id': '4a1c64ee-2e43-4924-ad64-0ba8b656d152'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:22 np0005466030 nova_compute[230518]: 2025-10-02 12:28:22.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:28:22Z|00286|binding|INFO|Releasing lport 4a1c64ee-2e43-4924-ad64-0ba8b656d152 from this chassis (sb_readonly=0)
Oct  2 08:28:22 np0005466030 nova_compute[230518]: 2025-10-02 12:28:22.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.846 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.848 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3098455d-c3b7-4959-a1de-d96c0b87511f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.849 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.pid.haproxy
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID eefd67eb-b4b6-4162-bbdd-0cce7dbdb491
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:28:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.850 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'env', 'PROCESS_TAG=haproxy-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:28:23 np0005466030 podman[256959]: 2025-10-02 12:28:23.269781409 +0000 UTC m=+0.060608998 container create 59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 08:28:23 np0005466030 systemd[1]: Started libpod-conmon-59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682.scope.
Oct  2 08:28:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:23 np0005466030 podman[256959]: 2025-10-02 12:28:23.243818743 +0000 UTC m=+0.034646362 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:28:23 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:28:23 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac381c0e8614aae9b537a6ede1bffe087d0774e8c5a7a70ab53ff54e2364b0f9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:28:23 np0005466030 podman[256959]: 2025-10-02 12:28:23.357401157 +0000 UTC m=+0.148228786 container init 59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:28:23 np0005466030 podman[256959]: 2025-10-02 12:28:23.36324538 +0000 UTC m=+0.154072979 container start 59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:28:23 np0005466030 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[256974]: [NOTICE]   (256978) : New worker (256980) forked
Oct  2 08:28:23 np0005466030 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[256974]: [NOTICE]   (256978) : Loading success.
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.499 2 DEBUG nova.compute.manager [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Received event network-vif-plugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.499 2 DEBUG oslo_concurrency.lockutils [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.500 2 DEBUG oslo_concurrency.lockutils [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.500 2 DEBUG oslo_concurrency.lockutils [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.501 2 DEBUG nova.compute.manager [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Processing event network-vif-plugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.501 2 DEBUG nova.compute.manager [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Received event network-vif-plugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.502 2 DEBUG oslo_concurrency.lockutils [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.502 2 DEBUG oslo_concurrency.lockutils [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.503 2 DEBUG oslo_concurrency.lockutils [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.503 2 DEBUG nova.compute.manager [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] No waiting events found dispatching network-vif-plugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.504 2 WARNING nova.compute.manager [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Received unexpected event network-vif-plugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.614 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408103.613883, e408d787-b02c-4b28-9af7-b7ae07b54538 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.615 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] VM Started (Lifecycle Event)#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.618 2 DEBUG nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.623 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.628 2 INFO nova.virt.libvirt.driver [-] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Instance spawned successfully.#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.628 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.650 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.657 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.663 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.663 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.664 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.664 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.665 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.665 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.707 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.708 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408103.6141589, e408d787-b02c-4b28-9af7-b7ae07b54538 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.708 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.810 2 INFO nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Took 8.09 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.811 2 DEBUG nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.816 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.818 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408103.6217616, e408d787-b02c-4b28-9af7-b7ae07b54538 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.818 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.942 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.946 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:28:23 np0005466030 nova_compute[230518]: 2025-10-02 12:28:23.999 2 INFO nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Took 9.77 seconds to build instance.#033[00m
Oct  2 08:28:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:24.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:24.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:24 np0005466030 nova_compute[230518]: 2025-10-02 12:28:24.394 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:25 np0005466030 nova_compute[230518]: 2025-10-02 12:28:25.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:25.925 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:25.926 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:25.926 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:26.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:26.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:28:27 np0005466030 nova_compute[230518]: 2025-10-02 12:28:27.411 2 DEBUG nova.compute.manager [None req-0f82ea2b-d79e-4c61-96b0-c08adc58b4a0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:27 np0005466030 nova_compute[230518]: 2025-10-02 12:28:27.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:27 np0005466030 nova_compute[230518]: 2025-10-02 12:28:27.549 2 INFO nova.compute.manager [None req-0f82ea2b-d79e-4c61-96b0-c08adc58b4a0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] instance snapshotting#033[00m
Oct  2 08:28:27 np0005466030 nova_compute[230518]: 2025-10-02 12:28:27.856 2 WARNING nova.compute.manager [None req-0f82ea2b-d79e-4c61-96b0-c08adc58b4a0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Image not found during snapshot: nova.exception.ImageNotFound: Image 79274edd-028c-4cfd-9fc0-5c8ea54a5ee4 could not be found.#033[00m
Oct  2 08:28:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:28.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:28.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e214 e214: 3 total, 3 up, 3 in
Oct  2 08:28:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:30.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:28:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:30.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:30 np0005466030 nova_compute[230518]: 2025-10-02 12:28:30.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:30 np0005466030 nova_compute[230518]: 2025-10-02 12:28:30.782 2 DEBUG oslo_concurrency.lockutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "e408d787-b02c-4b28-9af7-b7ae07b54538" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:30 np0005466030 nova_compute[230518]: 2025-10-02 12:28:30.783 2 DEBUG oslo_concurrency.lockutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:30 np0005466030 nova_compute[230518]: 2025-10-02 12:28:30.783 2 DEBUG oslo_concurrency.lockutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:30 np0005466030 nova_compute[230518]: 2025-10-02 12:28:30.783 2 DEBUG oslo_concurrency.lockutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:30 np0005466030 nova_compute[230518]: 2025-10-02 12:28:30.784 2 DEBUG oslo_concurrency.lockutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:30 np0005466030 nova_compute[230518]: 2025-10-02 12:28:30.785 2 INFO nova.compute.manager [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Terminating instance#033[00m
Oct  2 08:28:30 np0005466030 nova_compute[230518]: 2025-10-02 12:28:30.785 2 DEBUG nova.compute.manager [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:28:30 np0005466030 kernel: tapa3ec450c-ad (unregistering): left promiscuous mode
Oct  2 08:28:30 np0005466030 NetworkManager[44960]: <info>  [1759408110.9514] device (tapa3ec450c-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:28:30 np0005466030 nova_compute[230518]: 2025-10-02 12:28:30.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:30 np0005466030 ovn_controller[129257]: 2025-10-02T12:28:30Z|00287|binding|INFO|Releasing lport a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 from this chassis (sb_readonly=0)
Oct  2 08:28:30 np0005466030 ovn_controller[129257]: 2025-10-02T12:28:30Z|00288|binding|INFO|Setting lport a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 down in Southbound
Oct  2 08:28:30 np0005466030 ovn_controller[129257]: 2025-10-02T12:28:30Z|00289|binding|INFO|Removing iface tapa3ec450c-ad ovn-installed in OVS
Oct  2 08:28:30 np0005466030 nova_compute[230518]: 2025-10-02 12:28:30.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:31 np0005466030 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Oct  2 08:28:31 np0005466030 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000003e.scope: Consumed 8.314s CPU time.
Oct  2 08:28:31 np0005466030 systemd-machined[188247]: Machine qemu-32-instance-0000003e terminated.
Oct  2 08:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:31.049 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:17:b5 10.100.0.12'], port_security=['fa:16:3e:63:17:b5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e408d787-b02c-4b28-9af7-b7ae07b54538', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfd7bec5bd4b4366a96cc55cfe95fcc9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1274568d-0664-4745-a1c4-36fd447c1a9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f6f5554-f9ad-4301-a295-af1afef2d045, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:31.050 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 in datapath eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 unbound from our chassis#033[00m
Oct  2 08:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:31.051 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:31.052 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[02375aed-9b70-4d7a-aefc-c6b0742e181e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:31.053 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 namespace which is not needed anymore#033[00m
Oct  2 08:28:31 np0005466030 nova_compute[230518]: 2025-10-02 12:28:31.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:31 np0005466030 nova_compute[230518]: 2025-10-02 12:28:31.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:31 np0005466030 nova_compute[230518]: 2025-10-02 12:28:31.234 2 INFO nova.virt.libvirt.driver [-] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Instance destroyed successfully.#033[00m
Oct  2 08:28:31 np0005466030 nova_compute[230518]: 2025-10-02 12:28:31.235 2 DEBUG nova.objects.instance [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lazy-loading 'resources' on Instance uuid e408d787-b02c-4b28-9af7-b7ae07b54538 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:31 np0005466030 nova_compute[230518]: 2025-10-02 12:28:31.346 2 DEBUG nova.virt.libvirt.vif [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1894199701',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1894199701',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1894199701',id=62,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:28:23Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bfd7bec5bd4b4366a96cc55cfe95fcc9',ramdisk_id='',reservation_id='r-qc9filwg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-883313902',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-883313902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:28:27Z,user_data=None,user_id='eff0431e92464c78b780c8365e6e920c',uuid=e408d787-b02c-4b28-9af7-b7ae07b54538,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:28:31 np0005466030 nova_compute[230518]: 2025-10-02 12:28:31.346 2 DEBUG nova.network.os_vif_util [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converting VIF {"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:28:31 np0005466030 nova_compute[230518]: 2025-10-02 12:28:31.347 2 DEBUG nova.network.os_vif_util [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:17:b5,bridge_name='br-int',has_traffic_filtering=True,id=a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ec450c-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:28:31 np0005466030 nova_compute[230518]: 2025-10-02 12:28:31.348 2 DEBUG os_vif [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:17:b5,bridge_name='br-int',has_traffic_filtering=True,id=a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ec450c-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:28:31 np0005466030 nova_compute[230518]: 2025-10-02 12:28:31.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:31 np0005466030 nova_compute[230518]: 2025-10-02 12:28:31.350 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ec450c-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:31 np0005466030 nova_compute[230518]: 2025-10-02 12:28:31.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:31 np0005466030 nova_compute[230518]: 2025-10-02 12:28:31.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:31 np0005466030 nova_compute[230518]: 2025-10-02 12:28:31.355 2 INFO os_vif [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:17:b5,bridge_name='br-int',has_traffic_filtering=True,id=a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ec450c-ad')#033[00m
Oct  2 08:28:31 np0005466030 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[256974]: [NOTICE]   (256978) : haproxy version is 2.8.14-c23fe91
Oct  2 08:28:31 np0005466030 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[256974]: [NOTICE]   (256978) : path to executable is /usr/sbin/haproxy
Oct  2 08:28:31 np0005466030 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[256974]: [WARNING]  (256978) : Exiting Master process...
Oct  2 08:28:31 np0005466030 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[256974]: [ALERT]    (256978) : Current worker (256980) exited with code 143 (Terminated)
Oct  2 08:28:31 np0005466030 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[256974]: [WARNING]  (256978) : All workers exited. Exiting... (0)
Oct  2 08:28:31 np0005466030 systemd[1]: libpod-59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682.scope: Deactivated successfully.
Oct  2 08:28:31 np0005466030 podman[257014]: 2025-10-02 12:28:31.39063313 +0000 UTC m=+0.220231480 container died 59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:28:31 np0005466030 nova_compute[230518]: 2025-10-02 12:28:31.646 2 DEBUG nova.compute.manager [req-a841b5bd-5767-4445-a105-fb42eced73eb req-f5b784fd-c814-4bee-9d90-9cca7f145994 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Received event network-vif-unplugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:28:31 np0005466030 nova_compute[230518]: 2025-10-02 12:28:31.647 2 DEBUG oslo_concurrency.lockutils [req-a841b5bd-5767-4445-a105-fb42eced73eb req-f5b784fd-c814-4bee-9d90-9cca7f145994 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:28:31 np0005466030 nova_compute[230518]: 2025-10-02 12:28:31.647 2 DEBUG oslo_concurrency.lockutils [req-a841b5bd-5767-4445-a105-fb42eced73eb req-f5b784fd-c814-4bee-9d90-9cca7f145994 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:28:31 np0005466030 nova_compute[230518]: 2025-10-02 12:28:31.648 2 DEBUG oslo_concurrency.lockutils [req-a841b5bd-5767-4445-a105-fb42eced73eb req-f5b784fd-c814-4bee-9d90-9cca7f145994 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:28:31 np0005466030 nova_compute[230518]: 2025-10-02 12:28:31.649 2 DEBUG nova.compute.manager [req-a841b5bd-5767-4445-a105-fb42eced73eb req-f5b784fd-c814-4bee-9d90-9cca7f145994 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] No waiting events found dispatching network-vif-unplugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:28:31 np0005466030 nova_compute[230518]: 2025-10-02 12:28:31.650 2 DEBUG nova.compute.manager [req-a841b5bd-5767-4445-a105-fb42eced73eb req-f5b784fd-c814-4bee-9d90-9cca7f145994 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Received event network-vif-unplugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
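The lockutils sequence above shows nova's per-instance event pattern: a lock named `<instance-uuid>-events` guards a map of waiting events, and popping an event nobody registered for yields the "No waiting events found" path. A toy stdlib sketch of that pattern (class and method names are illustrative, not nova's actual implementation):

```python
import threading

class InstanceEvents:
    """Toy model of the pattern in the log: a per-instance lock around
    a dict of waiting events; popping an unregistered event returns
    None, which corresponds to "No waiting events found dispatching"."""
    def __init__(self):
        self._lock = threading.Lock()
        self._events = {}  # (instance_uuid, event_tag) -> waiter

    def prepare(self, uuid, tag, waiter):
        with self._lock:  # "Acquiring lock <uuid>-events"
            self._events[(uuid, tag)] = waiter

    def pop_instance_event(self, uuid, tag):
        with self._lock:  # acquired ... released, as in the log
            return self._events.pop((uuid, tag), None)

ev = InstanceEvents()
ev.prepare("e408d787", "network-vif-unplugged-a3ec450c", "waiter")
assert ev.pop_instance_event("e408d787", "network-vif-unplugged-a3ec450c") == "waiter"
# Second pop: nobody is waiting any more -> None.
assert ev.pop_instance_event("e408d787", "network-vif-unplugged-a3ec450c") is None
```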
Oct  2 08:28:31 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682-userdata-shm.mount: Deactivated successfully.
Oct  2 08:28:31 np0005466030 systemd[1]: var-lib-containers-storage-overlay-ac381c0e8614aae9b537a6ede1bffe087d0774e8c5a7a70ab53ff54e2364b0f9-merged.mount: Deactivated successfully.
Oct  2 08:28:32 np0005466030 podman[257014]: 2025-10-02 12:28:32.051042271 +0000 UTC m=+0.880640611 container cleanup 59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:28:32 np0005466030 systemd[1]: libpod-conmon-59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682.scope: Deactivated successfully.
Oct  2 08:28:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:32.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:28:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:32.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:32 np0005466030 podman[257070]: 2025-10-02 12:28:32.193185013 +0000 UTC m=+0.103544629 container remove 59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:28:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:32.202 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[00de373c-fca5-4d3c-b80f-b7cfa13d8728]: (4, ('Thu Oct  2 12:28:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 (59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682)\n59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682\nThu Oct  2 12:28:32 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 (59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682)\n59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:28:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:32.205 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ab272cd9-f269-4820-8aa1-f1c88f9e02ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:28:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:32.206 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeefd67eb-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:28:32 np0005466030 nova_compute[230518]: 2025-10-02 12:28:32.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
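The two `DelPortCommand` transactions in this section (one with `bridge=br-int`, one with `bridge=None` for the metadata tap) correspond roughly to `ovs-vsctl --if-exists del-port [bridge] port`. A small argv-builder sketch of that equivalence (the function name is hypothetical; nothing is executed here):

```python
def del_port_argv(port, bridge=None, if_exists=True):
    """Build the ovs-vsctl command equivalent to ovsdbapp's
    DelPortCommand. When bridge is None, ovs-vsctl locates the
    port's bridge itself, matching bridge=None in the log."""
    argv = ["ovs-vsctl"]
    if if_exists:
        argv.append("--if-exists")
    argv.append("del-port")
    if bridge:
        argv.append(bridge)
    argv.append(port)
    return argv

# Mirrors the two transactions logged above.
assert del_port_argv("tapa3ec450c-ad", "br-int") == \
    ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapa3ec450c-ad"]
assert del_port_argv("tapeefd67eb-b0") == \
    ["ovs-vsctl", "--if-exists", "del-port", "tapeefd67eb-b0"]
```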
Oct  2 08:28:32 np0005466030 kernel: tapeefd67eb-b0: left promiscuous mode
Oct  2 08:28:32 np0005466030 nova_compute[230518]: 2025-10-02 12:28:32.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:28:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:32.236 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5f58fe74-aa8c-495c-b6f2-4b382c3afac9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:28:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:32.268 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3e8957af-2411-4ae7-9edc-91efcaf2f72a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:28:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:32.270 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[587772f5-7178-469e-b412-d130010d80c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:28:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:32.290 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[117f5233-1fc0-4535-bdf5-ce132ffd46b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596613, 'reachable_time': 20590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257084, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:28:32 np0005466030 systemd[1]: run-netns-ovnmeta\x2deefd67eb\x2db4b6\x2d4162\x2dbbdd\x2d0cce7dbdb491.mount: Deactivated successfully.
Oct  2 08:28:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:32.293 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct  2 08:28:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:32.293 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6750b3-134a-44ba-b777-eccd61608c69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:28:32 np0005466030 nova_compute[230518]: 2025-10-02 12:28:32.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:28:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:34.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:34 np0005466030 nova_compute[230518]: 2025-10-02 12:28:34.136 2 DEBUG nova.compute.manager [req-4bae97f0-6a08-44bc-852a-9d3f747e035b req-88a2b7c6-30dd-4df4-bfdd-1592244a3928 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Received event network-vif-plugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:28:34 np0005466030 nova_compute[230518]: 2025-10-02 12:28:34.137 2 DEBUG oslo_concurrency.lockutils [req-4bae97f0-6a08-44bc-852a-9d3f747e035b req-88a2b7c6-30dd-4df4-bfdd-1592244a3928 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:28:34 np0005466030 nova_compute[230518]: 2025-10-02 12:28:34.137 2 DEBUG oslo_concurrency.lockutils [req-4bae97f0-6a08-44bc-852a-9d3f747e035b req-88a2b7c6-30dd-4df4-bfdd-1592244a3928 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:28:34 np0005466030 nova_compute[230518]: 2025-10-02 12:28:34.137 2 DEBUG oslo_concurrency.lockutils [req-4bae97f0-6a08-44bc-852a-9d3f747e035b req-88a2b7c6-30dd-4df4-bfdd-1592244a3928 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:28:34 np0005466030 nova_compute[230518]: 2025-10-02 12:28:34.137 2 DEBUG nova.compute.manager [req-4bae97f0-6a08-44bc-852a-9d3f747e035b req-88a2b7c6-30dd-4df4-bfdd-1592244a3928 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] No waiting events found dispatching network-vif-plugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:28:34 np0005466030 nova_compute[230518]: 2025-10-02 12:28:34.137 2 WARNING nova.compute.manager [req-4bae97f0-6a08-44bc-852a-9d3f747e035b req-88a2b7c6-30dd-4df4-bfdd-1592244a3928 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Received unexpected event network-vif-plugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 for instance with vm_state active and task_state deleting.
Oct  2 08:28:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:34.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
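The recurring radosgw `beast:` access-log lines (health-check HEAD requests from the two load-balancer IPs) follow a fixed layout: client, user, timestamp, request, status, bytes, latency. A stdlib parser sketch for them (the regex is inferred from the lines in this log, not from a documented grammar, so treat it as an assumption):

```python
import re

BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)

def parse_beast(line):
    """Parse a radosgw beast access-log line into a dict; status becomes
    an int and latency a float in seconds. Returns None on no match."""
    m = BEAST_RE.search(line)
    if not m:
        return None
    rec = m.groupdict()
    rec["status"] = int(rec["status"])
    rec["latency"] = float(rec["latency"])
    return rec

# Copy of one of the lines above.
line = ('beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous '
        '[02/Oct/2025:12:28:34.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000999991s')
rec = parse_beast(line)
# e.g. rec["status"] == 200 and rec["client"] == "192.168.122.100"
```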
Oct  2 08:28:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e215 e215: 3 total, 3 up, 3 in
Oct  2 08:28:36 np0005466030 nova_compute[230518]: 2025-10-02 12:28:36.068 2 INFO nova.virt.libvirt.driver [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Deleting instance files /var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538_del
Oct  2 08:28:36 np0005466030 nova_compute[230518]: 2025-10-02 12:28:36.069 2 INFO nova.virt.libvirt.driver [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Deletion of /var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538_del complete
Oct  2 08:28:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:36.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:28:36 np0005466030 nova_compute[230518]: 2025-10-02 12:28:36.116 2 INFO nova.compute.manager [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Took 5.33 seconds to destroy the instance on the hypervisor.
Oct  2 08:28:36 np0005466030 nova_compute[230518]: 2025-10-02 12:28:36.117 2 DEBUG oslo.service.loopingcall [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:28:36 np0005466030 nova_compute[230518]: 2025-10-02 12:28:36.118 2 DEBUG nova.compute.manager [-] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:28:36 np0005466030 nova_compute[230518]: 2025-10-02 12:28:36.118 2 DEBUG nova.network.neutron [-] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:28:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:36.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:28:36 np0005466030 nova_compute[230518]: 2025-10-02 12:28:36.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:28:37 np0005466030 nova_compute[230518]: 2025-10-02 12:28:37.448 2 DEBUG nova.network.neutron [-] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:28:37 np0005466030 nova_compute[230518]: 2025-10-02 12:28:37.466 2 INFO nova.compute.manager [-] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Took 1.35 seconds to deallocate network for instance.
Oct  2 08:28:37 np0005466030 nova_compute[230518]: 2025-10-02 12:28:37.562 2 DEBUG oslo_concurrency.lockutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:28:37 np0005466030 nova_compute[230518]: 2025-10-02 12:28:37.563 2 DEBUG oslo_concurrency.lockutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:28:37 np0005466030 nova_compute[230518]: 2025-10-02 12:28:37.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:28:37 np0005466030 nova_compute[230518]: 2025-10-02 12:28:37.639 2 DEBUG oslo_concurrency.processutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:28:37 np0005466030 nova_compute[230518]: 2025-10-02 12:28:37.705 2 DEBUG nova.compute.manager [req-3007abfc-a5a7-4a36-a40e-feca216822a6 req-4c817e2c-34a1-4cf0-aaea-013df145f3ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Received event network-vif-deleted-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:28:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:28:38 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1184234749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:28:38 np0005466030 nova_compute[230518]: 2025-10-02 12:28:38.073 2 DEBUG oslo_concurrency.processutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:28:38 np0005466030 nova_compute[230518]: 2025-10-02 12:28:38.084 2 DEBUG nova.compute.provider_tree [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:28:38 np0005466030 nova_compute[230518]: 2025-10-02 12:28:38.108 2 DEBUG nova.scheduler.client.report [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
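The inventory dict in the scheduler report line above determines how much the Placement service will allow to be allocated on this node: effective capacity per resource class is `(total - reserved) * allocation_ratio`. A small sketch using the numbers from that line (a simplified view of the Placement formula, not nova code):

```python
# Inventory data copied from the scheduler.client.report line above
# (min_unit/max_unit/step_size omitted for brevity).
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
}

def capacity(inv):
    """Effective allocatable capacity per resource class, using the
    Placement formula (total - reserved) * allocation_ratio."""
    return {rc: (v['total'] - v['reserved']) * v['allocation_ratio']
            for rc, v in inv.items()}

caps = capacity(inventory)
# e.g. caps['VCPU'] == 32.0 (8 physical cores oversubscribed 4x);
# DISK_GB is under-subscribed at ratio 0.9.
```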
Oct  2 08:28:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:38.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:28:38 np0005466030 nova_compute[230518]: 2025-10-02 12:28:38.133 2 DEBUG oslo_concurrency.lockutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:28:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:38.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:38 np0005466030 nova_compute[230518]: 2025-10-02 12:28:38.171 2 INFO nova.scheduler.client.report [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Deleted allocations for instance e408d787-b02c-4b28-9af7-b7ae07b54538
Oct  2 08:28:38 np0005466030 nova_compute[230518]: 2025-10-02 12:28:38.234 2 DEBUG oslo_concurrency.lockutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:28:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:40.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:40.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:41 np0005466030 nova_compute[230518]: 2025-10-02 12:28:41.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:28:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e216 e216: 3 total, 3 up, 3 in
Oct  2 08:28:42 np0005466030 nova_compute[230518]: 2025-10-02 12:28:42.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:28:42 np0005466030 nova_compute[230518]: 2025-10-02 12:28:42.076 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:28:42 np0005466030 nova_compute[230518]: 2025-10-02 12:28:42.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:28:42 np0005466030 nova_compute[230518]: 2025-10-02 12:28:42.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:28:42 np0005466030 nova_compute[230518]: 2025-10-02 12:28:42.077 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  2 08:28:42 np0005466030 nova_compute[230518]: 2025-10-02 12:28:42.078 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:28:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:42.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:42.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:28:42 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2569227816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:28:42 np0005466030 nova_compute[230518]: 2025-10-02 12:28:42.535 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:42 np0005466030 nova_compute[230518]: 2025-10-02 12:28:42.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:42 np0005466030 podman[257133]: 2025-10-02 12:28:42.65031415 +0000 UTC m=+0.067236167 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:28:42 np0005466030 podman[257132]: 2025-10-02 12:28:42.692938941 +0000 UTC m=+0.103097696 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 08:28:42 np0005466030 nova_compute[230518]: 2025-10-02 12:28:42.732 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:28:42 np0005466030 nova_compute[230518]: 2025-10-02 12:28:42.733 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4640MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:28:42 np0005466030 nova_compute[230518]: 2025-10-02 12:28:42.734 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:42 np0005466030 nova_compute[230518]: 2025-10-02 12:28:42.734 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:42 np0005466030 nova_compute[230518]: 2025-10-02 12:28:42.842 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:28:42 np0005466030 nova_compute[230518]: 2025-10-02 12:28:42.843 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:28:42 np0005466030 nova_compute[230518]: 2025-10-02 12:28:42.867 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:28:43 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1184975754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:28:43 np0005466030 nova_compute[230518]: 2025-10-02 12:28:43.343 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:43 np0005466030 nova_compute[230518]: 2025-10-02 12:28:43.352 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:28:43 np0005466030 nova_compute[230518]: 2025-10-02 12:28:43.373 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:28:43 np0005466030 nova_compute[230518]: 2025-10-02 12:28:43.406 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:28:43 np0005466030 nova_compute[230518]: 2025-10-02 12:28:43.407 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:44.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:44.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:44 np0005466030 nova_compute[230518]: 2025-10-02 12:28:44.402 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:45 np0005466030 nova_compute[230518]: 2025-10-02 12:28:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:46 np0005466030 nova_compute[230518]: 2025-10-02 12:28:46.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:46.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:46.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:46 np0005466030 nova_compute[230518]: 2025-10-02 12:28:46.231 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408111.230305, e408d787-b02c-4b28-9af7-b7ae07b54538 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:46 np0005466030 nova_compute[230518]: 2025-10-02 12:28:46.231 2 INFO nova.compute.manager [-] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:28:46 np0005466030 nova_compute[230518]: 2025-10-02 12:28:46.311 2 DEBUG nova.compute.manager [None req-2881662f-9c51-434b-832e-e00ba38f3275 - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:46 np0005466030 nova_compute[230518]: 2025-10-02 12:28:46.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:47 np0005466030 nova_compute[230518]: 2025-10-02 12:28:47.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:47 np0005466030 nova_compute[230518]: 2025-10-02 12:28:47.103 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "55bd545d-c449-4749-a3f1-b04f0f37e06e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:47 np0005466030 nova_compute[230518]: 2025-10-02 12:28:47.103 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "55bd545d-c449-4749-a3f1-b04f0f37e06e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:47 np0005466030 nova_compute[230518]: 2025-10-02 12:28:47.124 2 DEBUG nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:28:47 np0005466030 nova_compute[230518]: 2025-10-02 12:28:47.207 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:47 np0005466030 nova_compute[230518]: 2025-10-02 12:28:47.207 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:47 np0005466030 nova_compute[230518]: 2025-10-02 12:28:47.215 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:28:47 np0005466030 nova_compute[230518]: 2025-10-02 12:28:47.216 2 INFO nova.compute.claims [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:28:47 np0005466030 nova_compute[230518]: 2025-10-02 12:28:47.401 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:47 np0005466030 nova_compute[230518]: 2025-10-02 12:28:47.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:28:47 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3140232862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:28:47 np0005466030 nova_compute[230518]: 2025-10-02 12:28:47.865 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:47 np0005466030 nova_compute[230518]: 2025-10-02 12:28:47.872 2 DEBUG nova.compute.provider_tree [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:28:47 np0005466030 nova_compute[230518]: 2025-10-02 12:28:47.889 2 DEBUG nova.scheduler.client.report [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:28:47 np0005466030 nova_compute[230518]: 2025-10-02 12:28:47.931 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:47 np0005466030 nova_compute[230518]: 2025-10-02 12:28:47.932 2 DEBUG nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.005 2 DEBUG nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.006 2 DEBUG nova.network.neutron [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.024 2 INFO nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.048 2 DEBUG nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:28:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:48.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:48.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.206 2 DEBUG nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.208 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.209 2 INFO nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Creating image(s)#033[00m
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.249 2 DEBUG nova.storage.rbd_utils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.293 2 DEBUG nova.storage.rbd_utils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.342 2 DEBUG nova.storage.rbd_utils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.348 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:28:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:48.380 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:28:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:48.381 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.432 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.433 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.434 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.435 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.472 2 DEBUG nova.storage.rbd_utils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.478 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:28:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.565 2 DEBUG nova.network.neutron [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct  2 08:28:48 np0005466030 nova_compute[230518]: 2025-10-02 12:28:48.566 2 DEBUG nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 08:28:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:50.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:50.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:50 np0005466030 nova_compute[230518]: 2025-10-02 12:28:50.616 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:28:50 np0005466030 nova_compute[230518]: 2025-10-02 12:28:50.722 2 DEBUG nova.storage.rbd_utils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] resizing rbd image 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.255 2 DEBUG nova.objects.instance [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lazy-loading 'migration_context' on Instance uuid 55bd545d-c449-4749-a3f1-b04f0f37e06e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.288 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.289 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Ensure instance console log exists: /var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.290 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.290 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.291 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.293 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.299 2 WARNING nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.304 2 DEBUG nova.virt.libvirt.host [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.305 2 DEBUG nova.virt.libvirt.host [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.308 2 DEBUG nova.virt.libvirt.host [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.309 2 DEBUG nova.virt.libvirt.host [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.311 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.312 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.312 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.313 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.313 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.314 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.314 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.315 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.315 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.315 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.316 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.316 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.321 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:28:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:28:51 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/639640980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.782 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.824 2 DEBUG nova.storage.rbd_utils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:28:51 np0005466030 nova_compute[230518]: 2025-10-02 12:28:51.830 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:28:52 np0005466030 nova_compute[230518]: 2025-10-02 12:28:52.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:28:52 np0005466030 nova_compute[230518]: 2025-10-02 12:28:52.055 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:28:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:52.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:52.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:28:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:28:52 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1091874348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:28:52 np0005466030 nova_compute[230518]: 2025-10-02 12:28:52.248 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:28:52 np0005466030 nova_compute[230518]: 2025-10-02 12:28:52.251 2 DEBUG nova.objects.instance [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 55bd545d-c449-4749-a3f1-b04f0f37e06e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:28:52 np0005466030 nova_compute[230518]: 2025-10-02 12:28:52.267 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:28:52 np0005466030 nova_compute[230518]:  <uuid>55bd545d-c449-4749-a3f1-b04f0f37e06e</uuid>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:  <name>instance-00000040</name>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <nova:name>tempest-ListImageFiltersTestJSON-server-2039141641</nova:name>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:28:51</nova:creationTime>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:28:52 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:        <nova:user uuid="87db7657bb324d029ff3d66f218f1d8d">tempest-ListImageFiltersTestJSON-1602275258-project-member</nova:user>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:        <nova:project uuid="494736d8288b414094eb0bc6fbaa8cb7">tempest-ListImageFiltersTestJSON-1602275258</nova:project>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <nova:ports/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <entry name="serial">55bd545d-c449-4749-a3f1-b04f0f37e06e</entry>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <entry name="uuid">55bd545d-c449-4749-a3f1-b04f0f37e06e</entry>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/55bd545d-c449-4749-a3f1-b04f0f37e06e_disk">
Oct  2 08:28:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:28:52 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/55bd545d-c449-4749-a3f1-b04f0f37e06e_disk.config">
Oct  2 08:28:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:28:52 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e/console.log" append="off"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:28:52 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:28:52 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:28:52 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:28:52 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:28:52 np0005466030 nova_compute[230518]: 2025-10-02 12:28:52.326 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:28:52 np0005466030 nova_compute[230518]: 2025-10-02 12:28:52.326 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:28:52 np0005466030 nova_compute[230518]: 2025-10-02 12:28:52.327 2 INFO nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Using config drive#033[00m
Oct  2 08:28:52 np0005466030 nova_compute[230518]: 2025-10-02 12:28:52.354 2 DEBUG nova.storage.rbd_utils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:52 np0005466030 nova_compute[230518]: 2025-10-02 12:28:52.575 2 INFO nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Creating config drive at /var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e/disk.config#033[00m
Oct  2 08:28:52 np0005466030 nova_compute[230518]: 2025-10-02 12:28:52.584 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpixm3b6lo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:52 np0005466030 nova_compute[230518]: 2025-10-02 12:28:52.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:52 np0005466030 nova_compute[230518]: 2025-10-02 12:28:52.741 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpixm3b6lo" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:52 np0005466030 nova_compute[230518]: 2025-10-02 12:28:52.789 2 DEBUG nova.storage.rbd_utils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:52 np0005466030 nova_compute[230518]: 2025-10-02 12:28:52.793 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e/disk.config 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:52 np0005466030 podman[257470]: 2025-10-02 12:28:52.807173225 +0000 UTC m=+0.059733041 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:28:52 np0005466030 podman[257471]: 2025-10-02 12:28:52.821997411 +0000 UTC m=+0.069893720 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd)
Oct  2 08:28:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:53 np0005466030 nova_compute[230518]: 2025-10-02 12:28:53.683 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:53 np0005466030 nova_compute[230518]: 2025-10-02 12:28:53.684 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:53 np0005466030 nova_compute[230518]: 2025-10-02 12:28:53.711 2 DEBUG nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:28:53 np0005466030 nova_compute[230518]: 2025-10-02 12:28:53.809 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:53 np0005466030 nova_compute[230518]: 2025-10-02 12:28:53.809 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:53 np0005466030 nova_compute[230518]: 2025-10-02 12:28:53.819 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:28:53 np0005466030 nova_compute[230518]: 2025-10-02 12:28:53.819 2 INFO nova.compute.claims [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:28:53 np0005466030 nova_compute[230518]: 2025-10-02 12:28:53.938 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.072 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.115 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e/disk.config 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.116 2 INFO nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Deleting local config drive /var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e/disk.config because it was imported into RBD.#033[00m
Oct  2 08:28:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:54.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:54.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:28:54 np0005466030 systemd-machined[188247]: New machine qemu-33-instance-00000040.
Oct  2 08:28:54 np0005466030 systemd[1]: Started Virtual Machine qemu-33-instance-00000040.
Oct  2 08:28:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:28:54.384 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:28:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:28:54 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3896983913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.428 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.437 2 DEBUG nova.compute.provider_tree [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.457 2 DEBUG nova.scheduler.client.report [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.491 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.492 2 DEBUG nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.537 2 DEBUG nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.538 2 DEBUG nova.network.neutron [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.561 2 INFO nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.595 2 DEBUG nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.787 2 DEBUG nova.policy [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '28d5425714b04888ba9e6112879fae33', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.883 2 DEBUG nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.885 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.886 2 INFO nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Creating image(s)#033[00m
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.925 2 DEBUG nova.storage.rbd_utils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:54 np0005466030 nova_compute[230518]: 2025-10-02 12:28:54.965 2 DEBUG nova.storage.rbd_utils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.006 2 DEBUG nova.storage.rbd_utils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.011 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.108 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.110 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.113 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.114 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.158 2 DEBUG nova.storage.rbd_utils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.164 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 91d6698b-e355-4477-8680-f469bfd285a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.446 2 DEBUG nova.network.neutron [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Successfully created port: ef24dbb6-1a67-4d96-a8a7-c34925dd3699 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.567 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408135.5668404, 55bd545d-c449-4749-a3f1-b04f0f37e06e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.568 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.572 2 DEBUG nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.572 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.576 2 INFO nova.virt.libvirt.driver [-] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Instance spawned successfully.#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.576 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.596 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.603 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.610 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.611 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.612 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.612 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.613 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.613 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.640 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.642 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408135.571295, 55bd545d-c449-4749-a3f1-b04f0f37e06e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.642 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] VM Started (Lifecycle Event)#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.669 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.673 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.682 2 INFO nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Took 7.48 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.682 2 DEBUG nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.690 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.733 2 INFO nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Took 8.56 seconds to build instance.#033[00m
Oct  2 08:28:55 np0005466030 nova_compute[230518]: 2025-10-02 12:28:55.748 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "55bd545d-c449-4749-a3f1-b04f0f37e06e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:56.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:28:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:56.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:28:56 np0005466030 nova_compute[230518]: 2025-10-02 12:28:56.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:56 np0005466030 nova_compute[230518]: 2025-10-02 12:28:56.577 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 91d6698b-e355-4477-8680-f469bfd285a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:56 np0005466030 nova_compute[230518]: 2025-10-02 12:28:56.633 2 DEBUG nova.network.neutron [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Successfully updated port: ef24dbb6-1a67-4d96-a8a7-c34925dd3699 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:28:56 np0005466030 nova_compute[230518]: 2025-10-02 12:28:56.634 2 DEBUG nova.compute.manager [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:28:56 np0005466030 nova_compute[230518]: 2025-10-02 12:28:56.642 2 DEBUG nova.storage.rbd_utils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] resizing rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:28:56 np0005466030 nova_compute[230518]: 2025-10-02 12:28:56.671 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "refresh_cache-91d6698b-e355-4477-8680-f469bfd285a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:28:56 np0005466030 nova_compute[230518]: 2025-10-02 12:28:56.671 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquired lock "refresh_cache-91d6698b-e355-4477-8680-f469bfd285a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:28:56 np0005466030 nova_compute[230518]: 2025-10-02 12:28:56.671 2 DEBUG nova.network.neutron [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:28:56 np0005466030 nova_compute[230518]: 2025-10-02 12:28:56.688 2 INFO nova.compute.manager [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] instance snapshotting#033[00m
Oct  2 08:28:56 np0005466030 nova_compute[230518]: 2025-10-02 12:28:56.737 2 DEBUG nova.objects.instance [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'migration_context' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:28:56 np0005466030 nova_compute[230518]: 2025-10-02 12:28:56.751 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:28:56 np0005466030 nova_compute[230518]: 2025-10-02 12:28:56.751 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Ensure instance console log exists: /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:28:56 np0005466030 nova_compute[230518]: 2025-10-02 12:28:56.751 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:28:56 np0005466030 nova_compute[230518]: 2025-10-02 12:28:56.752 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:28:56 np0005466030 nova_compute[230518]: 2025-10-02 12:28:56.752 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:28:56 np0005466030 nova_compute[230518]: 2025-10-02 12:28:56.860 2 DEBUG nova.network.neutron [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:28:57 np0005466030 nova_compute[230518]: 2025-10-02 12:28:57.034 2 INFO nova.virt.libvirt.driver [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Beginning live snapshot process#033[00m
Oct  2 08:28:57 np0005466030 nova_compute[230518]: 2025-10-02 12:28:57.225 2 DEBUG nova.virt.libvirt.imagebackend [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:28:57 np0005466030 nova_compute[230518]: 2025-10-02 12:28:57.481 2 DEBUG nova.storage.rbd_utils [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] creating snapshot(ef1ba0cf5dad4f22a36c049e1ae41a07) on rbd image(55bd545d-c449-4749-a3f1-b04f0f37e06e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:28:57 np0005466030 nova_compute[230518]: 2025-10-02 12:28:57.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:28:58 np0005466030 nova_compute[230518]: 2025-10-02 12:28:58.054 2 DEBUG nova.compute.manager [req-d46abe17-432e-4cb7-90c5-698b993d33d6 req-f72d253b-0f79-40fa-a71a-ed820d2a416b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-changed-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:28:58 np0005466030 nova_compute[230518]: 2025-10-02 12:28:58.055 2 DEBUG nova.compute.manager [req-d46abe17-432e-4cb7-90c5-698b993d33d6 req-f72d253b-0f79-40fa-a71a-ed820d2a416b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Refreshing instance network info cache due to event network-changed-ef24dbb6-1a67-4d96-a8a7-c34925dd3699. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:28:58 np0005466030 nova_compute[230518]: 2025-10-02 12:28:58.056 2 DEBUG oslo_concurrency.lockutils [req-d46abe17-432e-4cb7-90c5-698b993d33d6 req-f72d253b-0f79-40fa-a71a-ed820d2a416b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-91d6698b-e355-4477-8680-f469bfd285a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:28:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:28:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:58.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:28:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:28:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:28:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:58.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:28:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e217 e217: 3 total, 3 up, 3 in
Oct  2 08:28:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.065 2 DEBUG nova.storage.rbd_utils [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] cloning vms/55bd545d-c449-4749-a3f1-b04f0f37e06e_disk@ef1ba0cf5dad4f22a36c049e1ae41a07 to images/e63c0054-4f57-471e-a2d8-a929502f4104 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.388 2 DEBUG nova.network.neutron [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Updating instance_info_cache with network_info: [{"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.414 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Releasing lock "refresh_cache-91d6698b-e355-4477-8680-f469bfd285a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.415 2 DEBUG nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance network_info: |[{"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.415 2 DEBUG oslo_concurrency.lockutils [req-d46abe17-432e-4cb7-90c5-698b993d33d6 req-f72d253b-0f79-40fa-a71a-ed820d2a416b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-91d6698b-e355-4477-8680-f469bfd285a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.416 2 DEBUG nova.network.neutron [req-d46abe17-432e-4cb7-90c5-698b993d33d6 req-f72d253b-0f79-40fa-a71a-ed820d2a416b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Refreshing network info cache for port ef24dbb6-1a67-4d96-a8a7-c34925dd3699 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.421 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Start _get_guest_xml network_info=[{"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.426 2 WARNING nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.433 2 DEBUG nova.virt.libvirt.host [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.434 2 DEBUG nova.virt.libvirt.host [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.438 2 DEBUG nova.virt.libvirt.host [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.438 2 DEBUG nova.virt.libvirt.host [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.440 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.441 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.441 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.442 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.443 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.443 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.444 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.444 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.445 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.445 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.446 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.446 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.450 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.802 2 DEBUG nova.storage.rbd_utils [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] flattening images/e63c0054-4f57-471e-a2d8-a929502f4104 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:28:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:28:59 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3081959650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.921 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.956 2 DEBUG nova.storage.rbd_utils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:28:59 np0005466030 nova_compute[230518]: 2025-10-02 12:28:59.960 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:00.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:00.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:29:00 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/81590209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.478 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.479 2 DEBUG nova.virt.libvirt.vif [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:28:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-912984471',display_name='tempest-ServerDiskConfigTestJSON-server-912984471',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-912984471',id=67,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-gj0sluw1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:28:54Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=91d6698b-e355-4477-8680-f469bfd285a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.480 2 DEBUG nova.network.os_vif_util [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.481 2 DEBUG nova.network.os_vif_util [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.481 2 DEBUG nova.objects.instance [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.497 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:29:00 np0005466030 nova_compute[230518]:  <uuid>91d6698b-e355-4477-8680-f469bfd285a4</uuid>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:  <name>instance-00000043</name>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-912984471</nova:name>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:28:59</nova:creationTime>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:29:00 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:        <nova:user uuid="28d5425714b04888ba9e6112879fae33">tempest-ServerDiskConfigTestJSON-1782236021-project-member</nova:user>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:        <nova:project uuid="6b5045a3aa3e42e6b66e2ec8c6bb5810">tempest-ServerDiskConfigTestJSON-1782236021</nova:project>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:        <nova:port uuid="ef24dbb6-1a67-4d96-a8a7-c34925dd3699">
Oct  2 08:29:00 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <entry name="serial">91d6698b-e355-4477-8680-f469bfd285a4</entry>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <entry name="uuid">91d6698b-e355-4477-8680-f469bfd285a4</entry>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/91d6698b-e355-4477-8680-f469bfd285a4_disk">
Oct  2 08:29:00 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:29:00 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/91d6698b-e355-4477-8680-f469bfd285a4_disk.config">
Oct  2 08:29:00 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:29:00 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:6b:63:f8"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <target dev="tapef24dbb6-1a"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/console.log" append="off"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:29:00 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:29:00 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:29:00 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:29:00 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.498 2 DEBUG nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Preparing to wait for external event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.498 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.499 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.499 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.500 2 DEBUG nova.virt.libvirt.vif [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:28:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-912984471',display_name='tempest-ServerDiskConfigTestJSON-server-912984471',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-912984471',id=67,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-gj0sluw1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:28:54Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=91d6698b-e355-4477-8680-f469bfd285a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.500 2 DEBUG nova.network.os_vif_util [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.500 2 DEBUG nova.network.os_vif_util [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.501 2 DEBUG os_vif [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.503 2 DEBUG nova.storage.rbd_utils [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] removing snapshot(ef1ba0cf5dad4f22a36c049e1ae41a07) on rbd image(55bd545d-c449-4749-a3f1-b04f0f37e06e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.505 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef24dbb6-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.505 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapef24dbb6-1a, col_values=(('external_ids', {'iface-id': 'ef24dbb6-1a67-4d96-a8a7-c34925dd3699', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:63:f8', 'vm-uuid': '91d6698b-e355-4477-8680-f469bfd285a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:00 np0005466030 NetworkManager[44960]: <info>  [1759408140.5078] manager: (tapef24dbb6-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.512 2 INFO os_vif [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a')#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.561 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.562 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.562 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No VIF found with MAC fa:16:3e:6b:63:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.563 2 INFO nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Using config drive#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.590 2 DEBUG nova.storage.rbd_utils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.709 2 DEBUG nova.network.neutron [req-d46abe17-432e-4cb7-90c5-698b993d33d6 req-f72d253b-0f79-40fa-a71a-ed820d2a416b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Updated VIF entry in instance network info cache for port ef24dbb6-1a67-4d96-a8a7-c34925dd3699. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.710 2 DEBUG nova.network.neutron [req-d46abe17-432e-4cb7-90c5-698b993d33d6 req-f72d253b-0f79-40fa-a71a-ed820d2a416b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Updating instance_info_cache with network_info: [{"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:29:00 np0005466030 nova_compute[230518]: 2025-10-02 12:29:00.786 2 DEBUG oslo_concurrency.lockutils [req-d46abe17-432e-4cb7-90c5-698b993d33d6 req-f72d253b-0f79-40fa-a71a-ed820d2a416b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-91d6698b-e355-4477-8680-f469bfd285a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:29:01 np0005466030 nova_compute[230518]: 2025-10-02 12:29:01.061 2 INFO nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Creating config drive at /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config#033[00m
Oct  2 08:29:01 np0005466030 nova_compute[230518]: 2025-10-02 12:29:01.067 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx5rgizt2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:01 np0005466030 nova_compute[230518]: 2025-10-02 12:29:01.202 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx5rgizt2" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:01 np0005466030 nova_compute[230518]: 2025-10-02 12:29:01.241 2 DEBUG nova.storage.rbd_utils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:29:01 np0005466030 nova_compute[230518]: 2025-10-02 12:29:01.245 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config 91d6698b-e355-4477-8680-f469bfd285a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:02 np0005466030 nova_compute[230518]: 2025-10-02 12:29:02.114 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config 91d6698b-e355-4477-8680-f469bfd285a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.870s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:02 np0005466030 nova_compute[230518]: 2025-10-02 12:29:02.115 2 INFO nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Deleting local config drive /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config because it was imported into RBD.#033[00m
Oct  2 08:29:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:29:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:02.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:29:02 np0005466030 kernel: tapef24dbb6-1a: entered promiscuous mode
Oct  2 08:29:02 np0005466030 NetworkManager[44960]: <info>  [1759408142.1642] manager: (tapef24dbb6-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/135)
Oct  2 08:29:02 np0005466030 nova_compute[230518]: 2025-10-02 12:29:02.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:02 np0005466030 ovn_controller[129257]: 2025-10-02T12:29:02Z|00290|binding|INFO|Claiming lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 for this chassis.
Oct  2 08:29:02 np0005466030 ovn_controller[129257]: 2025-10-02T12:29:02Z|00291|binding|INFO|ef24dbb6-1a67-4d96-a8a7-c34925dd3699: Claiming fa:16:3e:6b:63:f8 10.100.0.7
Oct  2 08:29:02 np0005466030 nova_compute[230518]: 2025-10-02 12:29:02.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.176 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:63:f8 10.100.0.7'], port_security=['fa:16:3e:6b:63:f8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '91d6698b-e355-4477-8680-f469bfd285a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=ef24dbb6-1a67-4d96-a8a7-c34925dd3699) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.178 138374 INFO neutron.agent.ovn.metadata.agent [-] Port ef24dbb6-1a67-4d96-a8a7-c34925dd3699 in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 bound to our chassis#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.184 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711#033[00m
Oct  2 08:29:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:29:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:02.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.195 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7ab6e91a-f67d-42b4-b193-5a4d7b5e047b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.196 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape21cd6a6-f1 in ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.198 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape21cd6a6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.198 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c3fb2e-30fa-46f5-82bb-b6c2231b3d88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.200 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[98e7e842-e5fd-43b4-b61a-8dfb391e459f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.211 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae12eac-97e2-4e0c-8a4b-1e1a158a9f98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005466030 systemd-udevd[258055]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:29:02 np0005466030 systemd-machined[188247]: New machine qemu-34-instance-00000043.
Oct  2 08:29:02 np0005466030 NetworkManager[44960]: <info>  [1759408142.2298] device (tapef24dbb6-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:29:02 np0005466030 NetworkManager[44960]: <info>  [1759408142.2319] device (tapef24dbb6-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:29:02 np0005466030 systemd[1]: Started Virtual Machine qemu-34-instance-00000043.
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.242 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[54d8ad79-b6e2-465d-9ad9-7641c94197d7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005466030 nova_compute[230518]: 2025-10-02 12:29:02.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:02 np0005466030 ovn_controller[129257]: 2025-10-02T12:29:02Z|00292|binding|INFO|Setting lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 ovn-installed in OVS
Oct  2 08:29:02 np0005466030 ovn_controller[129257]: 2025-10-02T12:29:02Z|00293|binding|INFO|Setting lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 up in Southbound
Oct  2 08:29:02 np0005466030 nova_compute[230518]: 2025-10-02 12:29:02.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e218 e218: 3 total, 3 up, 3 in
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.275 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b6392f4d-85e8-4937-bc84-5a00792b48d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.281 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8e9288c3-645f-4c28-b30c-4a7f0650c2d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005466030 NetworkManager[44960]: <info>  [1759408142.2821] manager: (tape21cd6a6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/136)
Oct  2 08:29:02 np0005466030 systemd-udevd[258058]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.311 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b950f679-3a4b-4d82-aa19-76678458f276]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.313 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[48eb6afd-b5bd-4567-9786-b2c2e3266da2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005466030 NetworkManager[44960]: <info>  [1759408142.3297] device (tape21cd6a6-f0): carrier: link connected
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.333 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef67604-11d3-402e-a1d2-12b09c713a5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.347 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4964a55b-8b51-414c-9565-61371a49d30a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600588, 'reachable_time': 35377, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258086, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.358 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[577f3d16-e15c-45cc-b2c5-41f69bf9ad05]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:30ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600588, 'tstamp': 600588}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258087, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.370 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b99ef7-8d68-44ff-8598-51a95274032d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600588, 'reachable_time': 35377, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258088, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.392 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[74bf6b13-7f01-43be-a4f8-08c7e635702d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005466030 nova_compute[230518]: 2025-10-02 12:29:02.450 2 DEBUG nova.storage.rbd_utils [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] creating snapshot(snap) on rbd image(e63c0054-4f57-471e-a2d8-a929502f4104) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.450 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4db8e3a4-2c3e-4c1c-a0ab-aebec9bb275f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.458 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.459 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.460 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape21cd6a6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:02 np0005466030 kernel: tape21cd6a6-f0: entered promiscuous mode
Oct  2 08:29:02 np0005466030 NetworkManager[44960]: <info>  [1759408142.4644] manager: (tape21cd6a6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.468 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape21cd6a6-f0, col_values=(('external_ids', {'iface-id': '155c8aeb-2b8a-439c-8558-741aa183fa54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:02 np0005466030 ovn_controller[129257]: 2025-10-02T12:29:02Z|00294|binding|INFO|Releasing lport 155c8aeb-2b8a-439c-8558-741aa183fa54 from this chassis (sb_readonly=0)
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.497 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.497 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1320fb28-805c-4629-a1c8-1b58e1a29099]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.498 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:29:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.499 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'env', 'PROCESS_TAG=haproxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:29:02 np0005466030 nova_compute[230518]: 2025-10-02 12:29:02.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:02 np0005466030 nova_compute[230518]: 2025-10-02 12:29:02.770 2 DEBUG nova.compute.manager [req-8576f800-007e-4bae-bb53-bfc1c3a5281b req-8be518a1-b60e-491d-89eb-14a25c6360f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:02 np0005466030 nova_compute[230518]: 2025-10-02 12:29:02.771 2 DEBUG oslo_concurrency.lockutils [req-8576f800-007e-4bae-bb53-bfc1c3a5281b req-8be518a1-b60e-491d-89eb-14a25c6360f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:02 np0005466030 nova_compute[230518]: 2025-10-02 12:29:02.772 2 DEBUG oslo_concurrency.lockutils [req-8576f800-007e-4bae-bb53-bfc1c3a5281b req-8be518a1-b60e-491d-89eb-14a25c6360f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:02 np0005466030 nova_compute[230518]: 2025-10-02 12:29:02.776 2 DEBUG oslo_concurrency.lockutils [req-8576f800-007e-4bae-bb53-bfc1c3a5281b req-8be518a1-b60e-491d-89eb-14a25c6360f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:02 np0005466030 nova_compute[230518]: 2025-10-02 12:29:02.776 2 DEBUG nova.compute.manager [req-8576f800-007e-4bae-bb53-bfc1c3a5281b req-8be518a1-b60e-491d-89eb-14a25c6360f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Processing event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:29:02 np0005466030 podman[258138]: 2025-10-02 12:29:02.909808793 +0000 UTC m=+0.059780592 container create b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:29:02 np0005466030 systemd[1]: Started libpod-conmon-b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330.scope.
Oct  2 08:29:02 np0005466030 podman[258138]: 2025-10-02 12:29:02.875443402 +0000 UTC m=+0.025415231 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:29:02 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:29:02 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61a65ad1c1d3fc987ed3a39b2d4915277d9656d46d566e289ee28d19b267782/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:29:03 np0005466030 podman[258138]: 2025-10-02 12:29:03.017710888 +0000 UTC m=+0.167682687 container init b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 08:29:03 np0005466030 podman[258138]: 2025-10-02 12:29:03.024790061 +0000 UTC m=+0.174761840 container start b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:29:03 np0005466030 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[258153]: [NOTICE]   (258175) : New worker (258184) forked
Oct  2 08:29:03 np0005466030 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[258153]: [NOTICE]   (258175) : Loading success.
Oct  2 08:29:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.722 2 DEBUG nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.724 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408143.7217352, 91d6698b-e355-4477-8680-f469bfd285a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.725 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] VM Started (Lifecycle Event)#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.730 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.737 2 INFO nova.virt.libvirt.driver [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance spawned successfully.#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.738 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.752 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.763 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.767 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.768 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.768 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.768 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.769 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.769 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.801 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.802 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408143.7222707, 91d6698b-e355-4477-8680-f469bfd285a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.802 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.832 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.836 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408143.729117, 91d6698b-e355-4477-8680-f469bfd285a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.837 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.845 2 INFO nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Took 8.96 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.846 2 DEBUG nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.868 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.870 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.892 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.904 2 INFO nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Took 10.13 seconds to build instance.#033[00m
Oct  2 08:29:03 np0005466030 nova_compute[230518]: 2025-10-02 12:29:03.926 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e219 e219: 3 total, 3 up, 3 in
Oct  2 08:29:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:04.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:04.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:04 np0005466030 nova_compute[230518]: 2025-10-02 12:29:04.601 2 DEBUG nova.compute.manager [req-05aa3589-bfb5-494a-bf6c-4867878c4afa req-41a04e7b-fecc-44b6-a2f6-7c7322b94b10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:04 np0005466030 nova_compute[230518]: 2025-10-02 12:29:04.602 2 DEBUG oslo_concurrency.lockutils [req-05aa3589-bfb5-494a-bf6c-4867878c4afa req-41a04e7b-fecc-44b6-a2f6-7c7322b94b10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:04 np0005466030 nova_compute[230518]: 2025-10-02 12:29:04.602 2 DEBUG oslo_concurrency.lockutils [req-05aa3589-bfb5-494a-bf6c-4867878c4afa req-41a04e7b-fecc-44b6-a2f6-7c7322b94b10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:04 np0005466030 nova_compute[230518]: 2025-10-02 12:29:04.603 2 DEBUG oslo_concurrency.lockutils [req-05aa3589-bfb5-494a-bf6c-4867878c4afa req-41a04e7b-fecc-44b6-a2f6-7c7322b94b10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:04 np0005466030 nova_compute[230518]: 2025-10-02 12:29:04.603 2 DEBUG nova.compute.manager [req-05aa3589-bfb5-494a-bf6c-4867878c4afa req-41a04e7b-fecc-44b6-a2f6-7c7322b94b10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] No waiting events found dispatching network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:29:04 np0005466030 nova_compute[230518]: 2025-10-02 12:29:04.603 2 WARNING nova.compute.manager [req-05aa3589-bfb5-494a-bf6c-4867878c4afa req-41a04e7b-fecc-44b6-a2f6-7c7322b94b10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received unexpected event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:29:04 np0005466030 ovn_controller[129257]: 2025-10-02T12:29:04Z|00295|binding|INFO|Releasing lport 155c8aeb-2b8a-439c-8558-741aa183fa54 from this chassis (sb_readonly=0)
Oct  2 08:29:04 np0005466030 nova_compute[230518]: 2025-10-02 12:29:04.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:29:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1830989897' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:29:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:29:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1830989897' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:29:05 np0005466030 nova_compute[230518]: 2025-10-02 12:29:05.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:06 np0005466030 nova_compute[230518]: 2025-10-02 12:29:06.127 2 INFO nova.compute.manager [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Rebuilding instance#033[00m
Oct  2 08:29:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:06.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:06.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:06 np0005466030 nova_compute[230518]: 2025-10-02 12:29:06.584 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:06 np0005466030 nova_compute[230518]: 2025-10-02 12:29:06.608 2 DEBUG nova.compute.manager [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:29:06 np0005466030 nova_compute[230518]: 2025-10-02 12:29:06.659 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'pci_requests' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:06 np0005466030 nova_compute[230518]: 2025-10-02 12:29:06.678 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:06 np0005466030 nova_compute[230518]: 2025-10-02 12:29:06.698 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'resources' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:06 np0005466030 nova_compute[230518]: 2025-10-02 12:29:06.713 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'migration_context' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:06 np0005466030 nova_compute[230518]: 2025-10-02 12:29:06.728 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:29:06 np0005466030 nova_compute[230518]: 2025-10-02 12:29:06.733 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:29:07 np0005466030 nova_compute[230518]: 2025-10-02 12:29:07.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:08.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:29:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:08.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:29:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:09 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:29:09 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:29:09 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:29:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:10.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:10.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:10 np0005466030 nova_compute[230518]: 2025-10-02 12:29:10.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:11 np0005466030 nova_compute[230518]: 2025-10-02 12:29:11.579 2 INFO nova.virt.libvirt.driver [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Snapshot image upload complete#033[00m
Oct  2 08:29:11 np0005466030 nova_compute[230518]: 2025-10-02 12:29:11.580 2 INFO nova.compute.manager [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Took 14.89 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  2 08:29:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:12.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:29:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:12.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:29:12 np0005466030 nova_compute[230518]: 2025-10-02 12:29:12.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:12 np0005466030 podman[258342]: 2025-10-02 12:29:12.836124294 +0000 UTC m=+0.076378044 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 08:29:12 np0005466030 podman[258341]: 2025-10-02 12:29:12.877210026 +0000 UTC m=+0.122715481 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:29:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e220 e220: 3 total, 3 up, 3 in
Oct  2 08:29:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:29:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:14.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:29:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:14.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:15 np0005466030 nova_compute[230518]: 2025-10-02 12:29:15.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e221 e221: 3 total, 3 up, 3 in
Oct  2 08:29:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:16.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:16.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:16 np0005466030 nova_compute[230518]: 2025-10-02 12:29:16.785 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:29:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:29:17Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:63:f8 10.100.0.7
Oct  2 08:29:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:29:17Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:63:f8 10.100.0.7
Oct  2 08:29:17 np0005466030 nova_compute[230518]: 2025-10-02 12:29:17.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:18.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:29:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:18.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:29:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e222 e222: 3 total, 3 up, 3 in
Oct  2 08:29:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:20.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:29:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:20.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:29:20 np0005466030 nova_compute[230518]: 2025-10-02 12:29:20.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e223 e223: 3 total, 3 up, 3 in
Oct  2 08:29:21 np0005466030 kernel: tapef24dbb6-1a (unregistering): left promiscuous mode
Oct  2 08:29:21 np0005466030 NetworkManager[44960]: <info>  [1759408161.7981] device (tapef24dbb6-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:29:21 np0005466030 ovn_controller[129257]: 2025-10-02T12:29:21Z|00296|binding|INFO|Releasing lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 from this chassis (sb_readonly=0)
Oct  2 08:29:21 np0005466030 ovn_controller[129257]: 2025-10-02T12:29:21Z|00297|binding|INFO|Setting lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 down in Southbound
Oct  2 08:29:21 np0005466030 ovn_controller[129257]: 2025-10-02T12:29:21Z|00298|binding|INFO|Removing iface tapef24dbb6-1a ovn-installed in OVS
Oct  2 08:29:21 np0005466030 nova_compute[230518]: 2025-10-02 12:29:21.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:21 np0005466030 nova_compute[230518]: 2025-10-02 12:29:21.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:21 np0005466030 nova_compute[230518]: 2025-10-02 12:29:21.819 2 INFO nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance shutdown successfully after 15 seconds.#033[00m
Oct  2 08:29:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:21.828 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:63:f8 10.100.0.7'], port_security=['fa:16:3e:6b:63:f8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '91d6698b-e355-4477-8680-f469bfd285a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=ef24dbb6-1a67-4d96-a8a7-c34925dd3699) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:29:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:21.830 138374 INFO neutron.agent.ovn.metadata.agent [-] Port ef24dbb6-1a67-4d96-a8a7-c34925dd3699 in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 unbound from our chassis#033[00m
Oct  2 08:29:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:21.833 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:29:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:21.836 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[11e82a19-0db9-4d74-bd80-61da2500cf58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:21.837 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace which is not needed anymore#033[00m
Oct  2 08:29:21 np0005466030 nova_compute[230518]: 2025-10-02 12:29:21.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:21 np0005466030 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000043.scope: Deactivated successfully.
Oct  2 08:29:21 np0005466030 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000043.scope: Consumed 13.970s CPU time.
Oct  2 08:29:21 np0005466030 systemd-machined[188247]: Machine qemu-34-instance-00000043 terminated.
Oct  2 08:29:22 np0005466030 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[258153]: [NOTICE]   (258175) : haproxy version is 2.8.14-c23fe91
Oct  2 08:29:22 np0005466030 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[258153]: [NOTICE]   (258175) : path to executable is /usr/sbin/haproxy
Oct  2 08:29:22 np0005466030 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[258153]: [WARNING]  (258175) : Exiting Master process...
Oct  2 08:29:22 np0005466030 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[258153]: [WARNING]  (258175) : Exiting Master process...
Oct  2 08:29:22 np0005466030 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[258153]: [ALERT]    (258175) : Current worker (258184) exited with code 143 (Terminated)
Oct  2 08:29:22 np0005466030 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[258153]: [WARNING]  (258175) : All workers exited. Exiting... (0)
Oct  2 08:29:22 np0005466030 systemd[1]: libpod-b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330.scope: Deactivated successfully.
Oct  2 08:29:22 np0005466030 podman[258411]: 2025-10-02 12:29:22.027322854 +0000 UTC m=+0.057789210 container died b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.059 2 INFO nova.virt.libvirt.driver [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance destroyed successfully.#033[00m
Oct  2 08:29:22 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330-userdata-shm.mount: Deactivated successfully.
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.075 2 INFO nova.virt.libvirt.driver [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance destroyed successfully.#033[00m
Oct  2 08:29:22 np0005466030 systemd[1]: var-lib-containers-storage-overlay-d61a65ad1c1d3fc987ed3a39b2d4915277d9656d46d566e289ee28d19b267782-merged.mount: Deactivated successfully.
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.080 2 DEBUG nova.virt.libvirt.vif [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:28:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-912984471',display_name='tempest-ServerDiskConfigTestJSON-server-912984471',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-912984471',id=67,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:29:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-gj0sluw1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'
},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:29:05Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=91d6698b-e355-4477-8680-f469bfd285a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.080 2 DEBUG nova.network.os_vif_util [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.083 2 DEBUG nova.network.os_vif_util [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.084 2 DEBUG os_vif [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.090 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef24dbb6-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:22 np0005466030 podman[258411]: 2025-10-02 12:29:22.141765156 +0000 UTC m=+0.172231502 container cleanup b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.142 2 DEBUG nova.compute.manager [req-f186f542-7268-47c3-adeb-bb929c4eb878 req-b3e41d6b-f93d-4166-a34f-87f5327e7982 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-unplugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.143 2 DEBUG oslo_concurrency.lockutils [req-f186f542-7268-47c3-adeb-bb929c4eb878 req-b3e41d6b-f93d-4166-a34f-87f5327e7982 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.143 2 DEBUG oslo_concurrency.lockutils [req-f186f542-7268-47c3-adeb-bb929c4eb878 req-b3e41d6b-f93d-4166-a34f-87f5327e7982 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.143 2 DEBUG oslo_concurrency.lockutils [req-f186f542-7268-47c3-adeb-bb929c4eb878 req-b3e41d6b-f93d-4166-a34f-87f5327e7982 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.143 2 DEBUG nova.compute.manager [req-f186f542-7268-47c3-adeb-bb929c4eb878 req-b3e41d6b-f93d-4166-a34f-87f5327e7982 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] No waiting events found dispatching network-vif-unplugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.144 2 WARNING nova.compute.manager [req-f186f542-7268-47c3-adeb-bb929c4eb878 req-b3e41d6b-f93d-4166-a34f-87f5327e7982 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received unexpected event network-vif-unplugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 for instance with vm_state active and task_state rebuilding.#033[00m
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:29:22 np0005466030 systemd[1]: libpod-conmon-b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330.scope: Deactivated successfully.
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.150 2 INFO os_vif [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a')#033[00m
Oct  2 08:29:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:22.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:22.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:22 np0005466030 podman[258471]: 2025-10-02 12:29:22.230887869 +0000 UTC m=+0.062177467 container remove b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 08:29:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:22.237 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[248597e8-2738-4362-8f80-4f012d673146]: (4, ('Thu Oct  2 12:29:21 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330)\nb12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330\nThu Oct  2 12:29:22 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330)\nb12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:22.239 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[10ed46f6-48f1-43ac-a610-14cbce984532]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:22.240 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:22 np0005466030 kernel: tape21cd6a6-f0: left promiscuous mode
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:22.258 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c4bde8dd-e4f5-4835-9d96-5f321c1b6e6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:22.280 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[19de6e18-86d9-45d3-97dd-a0f037e5d662]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:22.281 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[110f4472-a63f-4147-ba0c-7eb90bf93c7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:22.295 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c0bd26ac-542d-4425-95de-5a6c29fb7439]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600582, 'reachable_time': 23628, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258531, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:22 np0005466030 systemd[1]: run-netns-ovnmeta\x2de21cd6a6\x2df7fd\x2d48ec\x2d8f87\x2dbbcc167f5711.mount: Deactivated successfully.
Oct  2 08:29:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:22.300 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:29:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:22.300 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[d51e6e8c-08c8-4a01-9e59-e20ed62972ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:22 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:29:22 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:29:22 np0005466030 nova_compute[230518]: 2025-10-02 12:29:22.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:23 np0005466030 podman[258533]: 2025-10-02 12:29:23.813540478 +0000 UTC m=+0.065810792 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Oct  2 08:29:23 np0005466030 podman[258534]: 2025-10-02 12:29:23.826174785 +0000 UTC m=+0.079039648 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:29:23 np0005466030 nova_compute[230518]: 2025-10-02 12:29:23.849 2 INFO nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Deleting instance files /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4_del#033[00m
Oct  2 08:29:23 np0005466030 nova_compute[230518]: 2025-10-02 12:29:23.850 2 INFO nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Deletion of /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4_del complete#033[00m
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.091 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.092 2 INFO nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Creating image(s)#033[00m
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.127 2 DEBUG nova.storage.rbd_utils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.152 2 DEBUG nova.storage.rbd_utils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:29:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:24.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.180 2 DEBUG nova.storage.rbd_utils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.183 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:24.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.247 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.249 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.249 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.249 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.277 2 DEBUG nova.storage.rbd_utils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.281 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 91d6698b-e355-4477-8680-f469bfd285a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.318 2 DEBUG nova.compute.manager [req-b5b17da1-cd30-4e3b-9270-ef1fdb672a9e req-c661127c-2240-4a24-a706-6c45fe7dec77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.319 2 DEBUG oslo_concurrency.lockutils [req-b5b17da1-cd30-4e3b-9270-ef1fdb672a9e req-c661127c-2240-4a24-a706-6c45fe7dec77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.319 2 DEBUG oslo_concurrency.lockutils [req-b5b17da1-cd30-4e3b-9270-ef1fdb672a9e req-c661127c-2240-4a24-a706-6c45fe7dec77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.320 2 DEBUG oslo_concurrency.lockutils [req-b5b17da1-cd30-4e3b-9270-ef1fdb672a9e req-c661127c-2240-4a24-a706-6c45fe7dec77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.320 2 DEBUG nova.compute.manager [req-b5b17da1-cd30-4e3b-9270-ef1fdb672a9e req-c661127c-2240-4a24-a706-6c45fe7dec77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] No waiting events found dispatching network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.320 2 WARNING nova.compute.manager [req-b5b17da1-cd30-4e3b-9270-ef1fdb672a9e req-c661127c-2240-4a24-a706-6c45fe7dec77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received unexpected event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.843 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 91d6698b-e355-4477-8680-f469bfd285a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:24 np0005466030 nova_compute[230518]: 2025-10-02 12:29:24.940 2 DEBUG nova.storage.rbd_utils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] resizing rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.065 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.066 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Ensure instance console log exists: /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.066 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.067 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.067 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.069 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Start _get_guest_xml network_info=[{"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.074 2 WARNING nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.084 2 DEBUG nova.virt.libvirt.host [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.084 2 DEBUG nova.virt.libvirt.host [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.088 2 DEBUG nova.virt.libvirt.host [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.088 2 DEBUG nova.virt.libvirt.host [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.089 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.089 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.090 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.090 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.090 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.090 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.090 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.091 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.091 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.091 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.091 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.091 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.092 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.107 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:29:25 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/336891820' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.553 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.594 2 DEBUG nova.storage.rbd_utils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.602 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.860 2 DEBUG nova.compute.manager [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:29:25 np0005466030 nova_compute[230518]: 2025-10-02 12:29:25.914 2 INFO nova.compute.manager [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] instance snapshotting#033[00m
Oct  2 08:29:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:25.926 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:25.926 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:25.927 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:29:26 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1261912510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.069 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.071 2 DEBUG nova.virt.libvirt.vif [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:28:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-912984471',display_name='tempest-ServerDiskConfigTestJSON-server-912984471',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-912984471',id=67,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:29:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-gj0sluw1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:29:24Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=91d6698b-e355-4477-8680-f469bfd285a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.071 2 DEBUG nova.network.os_vif_util [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.072 2 DEBUG nova.network.os_vif_util [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.074 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:29:26 np0005466030 nova_compute[230518]:  <uuid>91d6698b-e355-4477-8680-f469bfd285a4</uuid>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:  <name>instance-00000043</name>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-912984471</nova:name>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:29:25</nova:creationTime>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:29:26 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:        <nova:user uuid="28d5425714b04888ba9e6112879fae33">tempest-ServerDiskConfigTestJSON-1782236021-project-member</nova:user>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:        <nova:project uuid="6b5045a3aa3e42e6b66e2ec8c6bb5810">tempest-ServerDiskConfigTestJSON-1782236021</nova:project>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="52ef509e-0e22-464e-93c9-3ddcf574cd64"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:        <nova:port uuid="ef24dbb6-1a67-4d96-a8a7-c34925dd3699">
Oct  2 08:29:26 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <entry name="serial">91d6698b-e355-4477-8680-f469bfd285a4</entry>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <entry name="uuid">91d6698b-e355-4477-8680-f469bfd285a4</entry>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/91d6698b-e355-4477-8680-f469bfd285a4_disk">
Oct  2 08:29:26 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:29:26 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/91d6698b-e355-4477-8680-f469bfd285a4_disk.config">
Oct  2 08:29:26 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:29:26 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:6b:63:f8"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <target dev="tapef24dbb6-1a"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/console.log" append="off"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:29:26 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:29:26 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:29:26 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:29:26 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.075 2 DEBUG nova.compute.manager [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Preparing to wait for external event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.075 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.075 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.075 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.076 2 DEBUG nova.virt.libvirt.vif [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:28:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-912984471',display_name='tempest-ServerDiskConfigTestJSON-server-912984471',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-912984471',id=67,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:29:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-gj0sluw1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:29:24Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=91d6698b-e355-4477-8680-f469bfd285a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.076 2 DEBUG nova.network.os_vif_util [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.077 2 DEBUG nova.network.os_vif_util [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.077 2 DEBUG os_vif [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.078 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.078 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.081 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef24dbb6-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.081 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapef24dbb6-1a, col_values=(('external_ids', {'iface-id': 'ef24dbb6-1a67-4d96-a8a7-c34925dd3699', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:63:f8', 'vm-uuid': '91d6698b-e355-4477-8680-f469bfd285a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:26 np0005466030 NetworkManager[44960]: <info>  [1759408166.0842] manager: (tapef24dbb6-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.090 2 INFO os_vif [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a')#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.144 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.145 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.145 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No VIF found with MAC fa:16:3e:6b:63:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.146 2 INFO nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Using config drive#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.175 2 DEBUG nova.storage.rbd_utils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:29:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:26.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.188 2 INFO nova.virt.libvirt.driver [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Beginning live snapshot process#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.193 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:26.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.227 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'keypairs' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.323 2 DEBUG nova.virt.libvirt.imagebackend [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.579 2 INFO nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Creating config drive at /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.588 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6uyzws4e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.635 2 DEBUG nova.storage.rbd_utils [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] creating snapshot(0ffcb9a4a0ec4dec811b3b8306762f4f) on rbd image(55bd545d-c449-4749-a3f1-b04f0f37e06e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.742 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6uyzws4e" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.783 2 DEBUG nova.storage.rbd_utils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:29:26 np0005466030 nova_compute[230518]: 2025-10-02 12:29:26.788 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config 91d6698b-e355-4477-8680-f469bfd285a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:29:27 np0005466030 nova_compute[230518]: 2025-10-02 12:29:27.236 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config 91d6698b-e355-4477-8680-f469bfd285a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:29:27 np0005466030 nova_compute[230518]: 2025-10-02 12:29:27.238 2 INFO nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Deleting local config drive /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config because it was imported into RBD.#033[00m
Oct  2 08:29:27 np0005466030 kernel: tapef24dbb6-1a: entered promiscuous mode
Oct  2 08:29:27 np0005466030 NetworkManager[44960]: <info>  [1759408167.3128] manager: (tapef24dbb6-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Oct  2 08:29:27 np0005466030 ovn_controller[129257]: 2025-10-02T12:29:27Z|00299|binding|INFO|Claiming lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 for this chassis.
Oct  2 08:29:27 np0005466030 ovn_controller[129257]: 2025-10-02T12:29:27Z|00300|binding|INFO|ef24dbb6-1a67-4d96-a8a7-c34925dd3699: Claiming fa:16:3e:6b:63:f8 10.100.0.7
Oct  2 08:29:27 np0005466030 nova_compute[230518]: 2025-10-02 12:29:27.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.324 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:63:f8 10.100.0.7'], port_security=['fa:16:3e:6b:63:f8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '91d6698b-e355-4477-8680-f469bfd285a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=ef24dbb6-1a67-4d96-a8a7-c34925dd3699) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.326 138374 INFO neutron.agent.ovn.metadata.agent [-] Port ef24dbb6-1a67-4d96-a8a7-c34925dd3699 in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 bound to our chassis#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.328 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711#033[00m
Oct  2 08:29:27 np0005466030 systemd-udevd[258925]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.343 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ac115b-8512-4b2f-a335-7a47a81138a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.345 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape21cd6a6-f1 in ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.347 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape21cd6a6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.348 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a133a4-6bb0-4fc5-9053-16cc2e398b6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:27 np0005466030 ovn_controller[129257]: 2025-10-02T12:29:27Z|00301|binding|INFO|Setting lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 ovn-installed in OVS
Oct  2 08:29:27 np0005466030 ovn_controller[129257]: 2025-10-02T12:29:27Z|00302|binding|INFO|Setting lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 up in Southbound
Oct  2 08:29:27 np0005466030 nova_compute[230518]: 2025-10-02 12:29:27.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.349 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1b4996-51aa-4c92-a160-6f2b1e23ccb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:27 np0005466030 systemd-machined[188247]: New machine qemu-35-instance-00000043.
Oct  2 08:29:27 np0005466030 nova_compute[230518]: 2025-10-02 12:29:27.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.366 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[8c80547b-4b17-4d4d-a19d-2ef282ac3e38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:27 np0005466030 NetworkManager[44960]: <info>  [1759408167.3673] device (tapef24dbb6-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:29:27 np0005466030 NetworkManager[44960]: <info>  [1759408167.3681] device (tapef24dbb6-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:29:27 np0005466030 systemd[1]: Started Virtual Machine qemu-35-instance-00000043.
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.396 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5015ee75-86ba-479f-86f1-008d2013e694]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.432 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[49e652e2-235a-4601-9ba2-82d793f98fc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.438 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e67b97-cdd7-4510-803b-e395bf5158ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:27 np0005466030 NetworkManager[44960]: <info>  [1759408167.4395] manager: (tape21cd6a6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/140)
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.486 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[aa487d2a-0c69-44b2-9440-08b6aa09cad7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.490 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb395d0-a8f9-4551-9ae4-740f43021298]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:27 np0005466030 NetworkManager[44960]: <info>  [1759408167.5267] device (tape21cd6a6-f0): carrier: link connected
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.537 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c74d62d4-0708-460d-bb06-874e211e0d86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.564 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc0d4b0-c1c2-480d-b4e5-371f01f22f8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603108, 'reachable_time': 34127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258959, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.589 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a532842c-8317-4a84-bf70-818d1a71552d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:30ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603108, 'tstamp': 603108}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258960, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:27 np0005466030 nova_compute[230518]: 2025-10-02 12:29:27.592 2 DEBUG nova.compute.manager [req-b8009e88-3bbd-46da-b950-18075d866fde req-9a624a6e-3965-474e-a198-33c6748b5969 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:27 np0005466030 nova_compute[230518]: 2025-10-02 12:29:27.592 2 DEBUG oslo_concurrency.lockutils [req-b8009e88-3bbd-46da-b950-18075d866fde req-9a624a6e-3965-474e-a198-33c6748b5969 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:27 np0005466030 nova_compute[230518]: 2025-10-02 12:29:27.593 2 DEBUG oslo_concurrency.lockutils [req-b8009e88-3bbd-46da-b950-18075d866fde req-9a624a6e-3965-474e-a198-33c6748b5969 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:27 np0005466030 nova_compute[230518]: 2025-10-02 12:29:27.593 2 DEBUG oslo_concurrency.lockutils [req-b8009e88-3bbd-46da-b950-18075d866fde req-9a624a6e-3965-474e-a198-33c6748b5969 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:27 np0005466030 nova_compute[230518]: 2025-10-02 12:29:27.594 2 DEBUG nova.compute.manager [req-b8009e88-3bbd-46da-b950-18075d866fde req-9a624a6e-3965-474e-a198-33c6748b5969 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Processing event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.616 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b804f1-bc4a-404a-8489-faa25f27eec6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603108, 'reachable_time': 34127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258961, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:27 np0005466030 nova_compute[230518]: 2025-10-02 12:29:27.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.664 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e1cf1209-0b0e-4b44-9211-312bab487a60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.723 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b31e7fa9-9c07-4ea7-8fd5-8c159e0ab111]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.725 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.725 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.726 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape21cd6a6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:27 np0005466030 kernel: tape21cd6a6-f0: entered promiscuous mode
Oct  2 08:29:27 np0005466030 NetworkManager[44960]: <info>  [1759408167.7296] manager: (tape21cd6a6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Oct  2 08:29:27 np0005466030 nova_compute[230518]: 2025-10-02 12:29:27.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.733 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape21cd6a6-f0, col_values=(('external_ids', {'iface-id': '155c8aeb-2b8a-439c-8558-741aa183fa54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:27 np0005466030 ovn_controller[129257]: 2025-10-02T12:29:27Z|00303|binding|INFO|Releasing lport 155c8aeb-2b8a-439c-8558-741aa183fa54 from this chassis (sb_readonly=0)
Oct  2 08:29:27 np0005466030 nova_compute[230518]: 2025-10-02 12:29:27.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.766 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.767 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b719e8-2c40-49e8-9dec-bf05e55a7949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.768 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:29:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.769 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'env', 'PROCESS_TAG=haproxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:29:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e224 e224: 3 total, 3 up, 3 in
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.096 2 DEBUG nova.storage.rbd_utils [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] cloning vms/55bd545d-c449-4749-a3f1-b04f0f37e06e_disk@0ffcb9a4a0ec4dec811b3b8306762f4f to images/d0dced0b-1a65-4528-9713-633878d7128b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:29:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:28.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:28 np0005466030 podman[259008]: 2025-10-02 12:29:28.201926053 +0000 UTC m=+0.065953626 container create f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:29:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:29:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:28.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:29:28 np0005466030 podman[259008]: 2025-10-02 12:29:28.164858326 +0000 UTC m=+0.028885889 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.263 2 DEBUG nova.storage.rbd_utils [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] flattening images/d0dced0b-1a65-4528-9713-633878d7128b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:29:28 np0005466030 systemd[1]: Started libpod-conmon-f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676.scope.
Oct  2 08:29:28 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:29:28 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab9c1cc5c842b2f1033e688001ccfde90870661a665762c428a425ee99c5e7a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:29:28 np0005466030 podman[259008]: 2025-10-02 12:29:28.312541233 +0000 UTC m=+0.176568797 container init f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 08:29:28 np0005466030 podman[259008]: 2025-10-02 12:29:28.319274635 +0000 UTC m=+0.183302168 container start f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 08:29:28 np0005466030 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[259062]: [NOTICE]   (259102) : New worker (259105) forked
Oct  2 08:29:28 np0005466030 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[259062]: [NOTICE]   (259102) : Loading success.
Oct  2 08:29:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.841 2 DEBUG nova.compute.manager [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.842 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 91d6698b-e355-4477-8680-f469bfd285a4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.842 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408168.8406715, 91d6698b-e355-4477-8680-f469bfd285a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.842 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] VM Started (Lifecycle Event)#033[00m
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.847 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.851 2 INFO nova.virt.libvirt.driver [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance spawned successfully.#033[00m
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.851 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.873 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.876 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.884 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.884 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.884 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.885 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.885 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.885 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.947 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.947 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408168.84158, 91d6698b-e355-4477-8680-f469bfd285a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.947 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:29:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e225 e225: 3 total, 3 up, 3 in
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.982 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.985 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408168.846325, 91d6698b-e355-4477-8680-f469bfd285a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:29:28 np0005466030 nova_compute[230518]: 2025-10-02 12:29:28.985 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:29:29 np0005466030 nova_compute[230518]: 2025-10-02 12:29:29.010 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:29:29 np0005466030 nova_compute[230518]: 2025-10-02 12:29:29.015 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:29:29 np0005466030 nova_compute[230518]: 2025-10-02 12:29:29.020 2 DEBUG nova.compute.manager [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:29:29 np0005466030 nova_compute[230518]: 2025-10-02 12:29:29.048 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:29:29 np0005466030 nova_compute[230518]: 2025-10-02 12:29:29.092 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:29 np0005466030 nova_compute[230518]: 2025-10-02 12:29:29.093 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:29 np0005466030 nova_compute[230518]: 2025-10-02 12:29:29.093 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:29:29 np0005466030 nova_compute[230518]: 2025-10-02 12:29:29.193 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:29 np0005466030 nova_compute[230518]: 2025-10-02 12:29:29.709 2 DEBUG nova.compute.manager [req-868d6390-b15e-4876-b95d-46a0000f4d66 req-5ecdde4d-c5f8-4c75-833a-17dde075b8d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:29:29 np0005466030 nova_compute[230518]: 2025-10-02 12:29:29.709 2 DEBUG oslo_concurrency.lockutils [req-868d6390-b15e-4876-b95d-46a0000f4d66 req-5ecdde4d-c5f8-4c75-833a-17dde075b8d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:29 np0005466030 nova_compute[230518]: 2025-10-02 12:29:29.710 2 DEBUG oslo_concurrency.lockutils [req-868d6390-b15e-4876-b95d-46a0000f4d66 req-5ecdde4d-c5f8-4c75-833a-17dde075b8d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:29 np0005466030 nova_compute[230518]: 2025-10-02 12:29:29.710 2 DEBUG oslo_concurrency.lockutils [req-868d6390-b15e-4876-b95d-46a0000f4d66 req-5ecdde4d-c5f8-4c75-833a-17dde075b8d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:29 np0005466030 nova_compute[230518]: 2025-10-02 12:29:29.710 2 DEBUG nova.compute.manager [req-868d6390-b15e-4876-b95d-46a0000f4d66 req-5ecdde4d-c5f8-4c75-833a-17dde075b8d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] No waiting events found dispatching network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:29:29 np0005466030 nova_compute[230518]: 2025-10-02 12:29:29.711 2 WARNING nova.compute.manager [req-868d6390-b15e-4876-b95d-46a0000f4d66 req-5ecdde4d-c5f8-4c75-833a-17dde075b8d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received unexpected event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:29:30 np0005466030 nova_compute[230518]: 2025-10-02 12:29:30.009 2 DEBUG nova.storage.rbd_utils [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] removing snapshot(0ffcb9a4a0ec4dec811b3b8306762f4f) on rbd image(55bd545d-c449-4749-a3f1-b04f0f37e06e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:29:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:30.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:30.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:31 np0005466030 nova_compute[230518]: 2025-10-02 12:29:31.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e226 e226: 3 total, 3 up, 3 in
Oct  2 08:29:31 np0005466030 nova_compute[230518]: 2025-10-02 12:29:31.573 2 DEBUG nova.storage.rbd_utils [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] creating snapshot(snap) on rbd image(d0dced0b-1a65-4528-9713-633878d7128b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:29:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:29:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:32.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:29:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:29:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:32.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.277 2 DEBUG oslo_concurrency.lockutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.277 2 DEBUG oslo_concurrency.lockutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.277 2 DEBUG oslo_concurrency.lockutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.278 2 DEBUG oslo_concurrency.lockutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.278 2 DEBUG oslo_concurrency.lockutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.279 2 INFO nova.compute.manager [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Terminating instance#033[00m
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.280 2 DEBUG nova.compute.manager [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:29:32 np0005466030 kernel: tapef24dbb6-1a (unregistering): left promiscuous mode
Oct  2 08:29:32 np0005466030 NetworkManager[44960]: <info>  [1759408172.3345] device (tapef24dbb6-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:32 np0005466030 ovn_controller[129257]: 2025-10-02T12:29:32Z|00304|binding|INFO|Releasing lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 from this chassis (sb_readonly=0)
Oct  2 08:29:32 np0005466030 ovn_controller[129257]: 2025-10-02T12:29:32Z|00305|binding|INFO|Setting lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 down in Southbound
Oct  2 08:29:32 np0005466030 ovn_controller[129257]: 2025-10-02T12:29:32Z|00306|binding|INFO|Removing iface tapef24dbb6-1a ovn-installed in OVS
Oct  2 08:29:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.354 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:63:f8 10.100.0.7'], port_security=['fa:16:3e:6b:63:f8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '91d6698b-e355-4477-8680-f469bfd285a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=ef24dbb6-1a67-4d96-a8a7-c34925dd3699) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:29:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.355 138374 INFO neutron.agent.ovn.metadata.agent [-] Port ef24dbb6-1a67-4d96-a8a7-c34925dd3699 in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 unbound from our chassis
Oct  2 08:29:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.356 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  2 08:29:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.357 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[18e37939-e8c1-4ffd-b5ac-fb7a788e53f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.358 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace which is not needed anymore
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:32 np0005466030 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000043.scope: Deactivated successfully.
Oct  2 08:29:32 np0005466030 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000043.scope: Consumed 4.886s CPU time.
Oct  2 08:29:32 np0005466030 systemd-machined[188247]: Machine qemu-35-instance-00000043 terminated.
Oct  2 08:29:32 np0005466030 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[259062]: [NOTICE]   (259102) : haproxy version is 2.8.14-c23fe91
Oct  2 08:29:32 np0005466030 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[259062]: [NOTICE]   (259102) : path to executable is /usr/sbin/haproxy
Oct  2 08:29:32 np0005466030 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[259062]: [WARNING]  (259102) : Exiting Master process...
Oct  2 08:29:32 np0005466030 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[259062]: [ALERT]    (259102) : Current worker (259105) exited with code 143 (Terminated)
Oct  2 08:29:32 np0005466030 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[259062]: [WARNING]  (259102) : All workers exited. Exiting... (0)
Oct  2 08:29:32 np0005466030 systemd[1]: libpod-f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676.scope: Deactivated successfully.
Oct  2 08:29:32 np0005466030 podman[259180]: 2025-10-02 12:29:32.503090263 +0000 UTC m=+0.042776967 container died f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.522 2 INFO nova.virt.libvirt.driver [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance destroyed successfully.
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.523 2 DEBUG nova.objects.instance [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'resources' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:29:32 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676-userdata-shm.mount: Deactivated successfully.
Oct  2 08:29:32 np0005466030 systemd[1]: var-lib-containers-storage-overlay-bab9c1cc5c842b2f1033e688001ccfde90870661a665762c428a425ee99c5e7a-merged.mount: Deactivated successfully.
Oct  2 08:29:32 np0005466030 podman[259180]: 2025-10-02 12:29:32.552902381 +0000 UTC m=+0.092589095 container cleanup f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.555 2 DEBUG nova.virt.libvirt.vif [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:28:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-912984471',display_name='tempest-ServerDiskConfigTestJSON-server-912984471',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-912984471',id=67,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:29:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-gj0sluw1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:29:29Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=91d6698b-e355-4477-8680-f469bfd285a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.555 2 DEBUG nova.network.os_vif_util [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.556 2 DEBUG nova.network.os_vif_util [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.557 2 DEBUG os_vif [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.559 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef24dbb6-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.566 2 INFO os_vif [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a')
Oct  2 08:29:32 np0005466030 systemd[1]: libpod-conmon-f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676.scope: Deactivated successfully.
Oct  2 08:29:32 np0005466030 podman[259218]: 2025-10-02 12:29:32.629977346 +0000 UTC m=+0.048302821 container remove f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.673 2 DEBUG nova.compute.manager [req-8981a0e1-d845-4f16-bf56-1c9f1985527c req-a10d57fb-87a1-4b4d-85c1-18d8d2794c32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-unplugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.673 2 DEBUG oslo_concurrency.lockutils [req-8981a0e1-d845-4f16-bf56-1c9f1985527c req-a10d57fb-87a1-4b4d-85c1-18d8d2794c32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.674 2 DEBUG oslo_concurrency.lockutils [req-8981a0e1-d845-4f16-bf56-1c9f1985527c req-a10d57fb-87a1-4b4d-85c1-18d8d2794c32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.674 2 DEBUG oslo_concurrency.lockutils [req-8981a0e1-d845-4f16-bf56-1c9f1985527c req-a10d57fb-87a1-4b4d-85c1-18d8d2794c32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.674 2 DEBUG nova.compute.manager [req-8981a0e1-d845-4f16-bf56-1c9f1985527c req-a10d57fb-87a1-4b4d-85c1-18d8d2794c32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] No waiting events found dispatching network-vif-unplugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.674 2 DEBUG nova.compute.manager [req-8981a0e1-d845-4f16-bf56-1c9f1985527c req-a10d57fb-87a1-4b4d-85c1-18d8d2794c32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-unplugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  2 08:29:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.673 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc9f3c5-85f2-4eb8-8a60-fe88c2a16855]: (4, ('Thu Oct  2 12:29:32 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676)\nf47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676\nThu Oct  2 12:29:32 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676)\nf47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.675 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ac3f0492-7fb1-4aa1-8f93-e87c5b91bb96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.676 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:32 np0005466030 kernel: tape21cd6a6-f0: left promiscuous mode
Oct  2 08:29:32 np0005466030 nova_compute[230518]: 2025-10-02 12:29:32.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.696 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a4fc8148-f12e-41c9-b4c3-1ef797d85759]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.742 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4d201059-3bed-47b6-b297-38a4d1f58c7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.744 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4e010b7a-034b-4d55-805a-fdfb60d6805c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e227 e227: 3 total, 3 up, 3 in
Oct  2 08:29:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.770 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1a8168-13ed-4d83-ab57-336436dc751b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603098, 'reachable_time': 24431, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259251, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:32 np0005466030 systemd[1]: run-netns-ovnmeta\x2de21cd6a6\x2df7fd\x2d48ec\x2d8f87\x2dbbcc167f5711.mount: Deactivated successfully.
Oct  2 08:29:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.775 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct  2 08:29:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.775 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[882d4d9e-7373-4d04-81d1-6c08b7793911]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:29:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:33 np0005466030 nova_compute[230518]: 2025-10-02 12:29:33.641 2 INFO nova.virt.libvirt.driver [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Deleting instance files /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4_del
Oct  2 08:29:33 np0005466030 nova_compute[230518]: 2025-10-02 12:29:33.641 2 INFO nova.virt.libvirt.driver [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Deletion of /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4_del complete
Oct  2 08:29:33 np0005466030 nova_compute[230518]: 2025-10-02 12:29:33.691 2 INFO nova.compute.manager [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Took 1.41 seconds to destroy the instance on the hypervisor.
Oct  2 08:29:33 np0005466030 nova_compute[230518]: 2025-10-02 12:29:33.691 2 DEBUG oslo.service.loopingcall [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:29:33 np0005466030 nova_compute[230518]: 2025-10-02 12:29:33.691 2 DEBUG nova.compute.manager [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:29:33 np0005466030 nova_compute[230518]: 2025-10-02 12:29:33.692 2 DEBUG nova.network.neutron [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:29:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:34.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:34.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:34 np0005466030 nova_compute[230518]: 2025-10-02 12:29:34.608 2 INFO nova.virt.libvirt.driver [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Snapshot image upload complete
Oct  2 08:29:34 np0005466030 nova_compute[230518]: 2025-10-02 12:29:34.609 2 INFO nova.compute.manager [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Took 8.69 seconds to snapshot the instance on the hypervisor.
Oct  2 08:29:34 np0005466030 nova_compute[230518]: 2025-10-02 12:29:34.905 2 DEBUG nova.compute.manager [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:29:34 np0005466030 nova_compute[230518]: 2025-10-02 12:29:34.905 2 DEBUG oslo_concurrency.lockutils [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:29:34 np0005466030 nova_compute[230518]: 2025-10-02 12:29:34.906 2 DEBUG oslo_concurrency.lockutils [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:29:34 np0005466030 nova_compute[230518]: 2025-10-02 12:29:34.907 2 DEBUG oslo_concurrency.lockutils [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:29:34 np0005466030 nova_compute[230518]: 2025-10-02 12:29:34.907 2 DEBUG nova.compute.manager [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] No waiting events found dispatching network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:29:34 np0005466030 nova_compute[230518]: 2025-10-02 12:29:34.907 2 WARNING nova.compute.manager [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received unexpected event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 for instance with vm_state active and task_state deleting.
Oct  2 08:29:34 np0005466030 nova_compute[230518]: 2025-10-02 12:29:34.999 2 DEBUG nova.network.neutron [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:29:35 np0005466030 nova_compute[230518]: 2025-10-02 12:29:35.050 2 INFO nova.compute.manager [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Took 1.36 seconds to deallocate network for instance.
Oct  2 08:29:35 np0005466030 nova_compute[230518]: 2025-10-02 12:29:35.104 2 DEBUG oslo_concurrency.lockutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:29:35 np0005466030 nova_compute[230518]: 2025-10-02 12:29:35.105 2 DEBUG oslo_concurrency.lockutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:29:35 np0005466030 nova_compute[230518]: 2025-10-02 12:29:35.183 2 DEBUG oslo_concurrency.processutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:29:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:29:35 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1493501614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:29:35 np0005466030 nova_compute[230518]: 2025-10-02 12:29:35.653 2 DEBUG oslo_concurrency.processutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:29:35 np0005466030 nova_compute[230518]: 2025-10-02 12:29:35.660 2 DEBUG nova.compute.provider_tree [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:29:35 np0005466030 nova_compute[230518]: 2025-10-02 12:29:35.787 2 DEBUG nova.scheduler.client.report [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:29:35 np0005466030 nova_compute[230518]: 2025-10-02 12:29:35.865 2 DEBUG oslo_concurrency.lockutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:29:35 np0005466030 nova_compute[230518]: 2025-10-02 12:29:35.976 2 INFO nova.scheduler.client.report [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Deleted allocations for instance 91d6698b-e355-4477-8680-f469bfd285a4
Oct  2 08:29:36 np0005466030 nova_compute[230518]: 2025-10-02 12:29:36.110 2 DEBUG oslo_concurrency.lockutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:29:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:36.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:29:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:36.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:29:37 np0005466030 nova_compute[230518]: 2025-10-02 12:29:37.033 2 DEBUG nova.compute.manager [req-c3974bab-9576-44ee-8c36-4db9cafc684c req-892d3832-f097-40ee-9408-76af9650f416 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-deleted-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:29:37 np0005466030 nova_compute[230518]: 2025-10-02 12:29:37.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:37 np0005466030 nova_compute[230518]: 2025-10-02 12:29:37.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:38.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:38.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e228 e228: 3 total, 3 up, 3 in
Oct  2 08:29:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:40.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:29:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:40.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:29:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:42.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:29:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:42.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:29:42 np0005466030 nova_compute[230518]: 2025-10-02 12:29:42.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:42 np0005466030 nova_compute[230518]: 2025-10-02 12:29:42.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:43 np0005466030 podman[259277]: 2025-10-02 12:29:43.838087168 +0000 UTC m=+0.078184511 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:29:43 np0005466030 podman[259276]: 2025-10-02 12:29:43.883626511 +0000 UTC m=+0.126765819 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:29:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e229 e229: 3 total, 3 up, 3 in
Oct  2 08:29:44 np0005466030 nova_compute[230518]: 2025-10-02 12:29:44.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:29:44 np0005466030 nova_compute[230518]: 2025-10-02 12:29:44.084 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:29:44 np0005466030 nova_compute[230518]: 2025-10-02 12:29:44.084 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:29:44 np0005466030 nova_compute[230518]: 2025-10-02 12:29:44.085 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:29:44 np0005466030 nova_compute[230518]: 2025-10-02 12:29:44.085 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  2 08:29:44 np0005466030 nova_compute[230518]: 2025-10-02 12:29:44.085 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:29:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:44.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:44.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:29:44 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4124496307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:29:44 np0005466030 nova_compute[230518]: 2025-10-02 12:29:44.578 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:29:44 np0005466030 nova_compute[230518]: 2025-10-02 12:29:44.662 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:29:44 np0005466030 nova_compute[230518]: 2025-10-02 12:29:44.662 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 08:29:44 np0005466030 nova_compute[230518]: 2025-10-02 12:29:44.819 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:29:44 np0005466030 nova_compute[230518]: 2025-10-02 12:29:44.820 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4402MB free_disk=20.85568618774414GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  2 08:29:44 np0005466030 nova_compute[230518]: 2025-10-02 12:29:44.821 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:29:44 np0005466030 nova_compute[230518]: 2025-10-02 12:29:44.822 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:29:44 np0005466030 nova_compute[230518]: 2025-10-02 12:29:44.894 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 55bd545d-c449-4749-a3f1-b04f0f37e06e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 08:29:44 np0005466030 nova_compute[230518]: 2025-10-02 12:29:44.894 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  2 08:29:44 np0005466030 nova_compute[230518]: 2025-10-02 12:29:44.894 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  2 08:29:44 np0005466030 nova_compute[230518]: 2025-10-02 12:29:44.929 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:29:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:29:45 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4172126751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:29:45 np0005466030 nova_compute[230518]: 2025-10-02 12:29:45.349 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:29:45 np0005466030 nova_compute[230518]: 2025-10-02 12:29:45.356 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:29:45 np0005466030 nova_compute[230518]: 2025-10-02 12:29:45.393 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:29:45 np0005466030 nova_compute[230518]: 2025-10-02 12:29:45.457 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 08:29:45 np0005466030 nova_compute[230518]: 2025-10-02 12:29:45.458 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:29:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:46.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:46.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:46 np0005466030 nova_compute[230518]: 2025-10-02 12:29:46.454 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:29:47 np0005466030 nova_compute[230518]: 2025-10-02 12:29:47.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:29:47 np0005466030 nova_compute[230518]: 2025-10-02 12:29:47.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:29:47 np0005466030 nova_compute[230518]: 2025-10-02 12:29:47.520 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408172.5193784, 91d6698b-e355-4477-8680-f469bfd285a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:29:47 np0005466030 nova_compute[230518]: 2025-10-02 12:29:47.520 2 INFO nova.compute.manager [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] VM Stopped (Lifecycle Event)
Oct  2 08:29:47 np0005466030 nova_compute[230518]: 2025-10-02 12:29:47.557 2 DEBUG nova.compute.manager [None req-5bd27d72-f72e-4ccc-884f-58c73a5c0753 - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:29:47 np0005466030 nova_compute[230518]: 2025-10-02 12:29:47.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:47 np0005466030 nova_compute[230518]: 2025-10-02 12:29:47.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:29:48 np0005466030 nova_compute[230518]: 2025-10-02 12:29:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:29:48 np0005466030 nova_compute[230518]: 2025-10-02 12:29:48.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:29:48 np0005466030 nova_compute[230518]: 2025-10-02 12:29:48.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 08:29:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:48.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:48.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e230 e230: 3 total, 3 up, 3 in
Oct  2 08:29:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e231 e231: 3 total, 3 up, 3 in
Oct  2 08:29:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:50.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:50.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:50.837 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:29:50 np0005466030 nova_compute[230518]: 2025-10-02 12:29:50.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:50.838 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:29:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e232 e232: 3 total, 3 up, 3 in
Oct  2 08:29:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:52.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:29:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:52.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:29:52 np0005466030 nova_compute[230518]: 2025-10-02 12:29:52.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:52 np0005466030 nova_compute[230518]: 2025-10-02 12:29:52.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:53 np0005466030 nova_compute[230518]: 2025-10-02 12:29:53.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e233 e233: 3 total, 3 up, 3 in
Oct  2 08:29:54 np0005466030 nova_compute[230518]: 2025-10-02 12:29:54.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:54 np0005466030 nova_compute[230518]: 2025-10-02 12:29:54.076 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:54.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:54.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:54 np0005466030 podman[259366]: 2025-10-02 12:29:54.812612594 +0000 UTC m=+0.060641620 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 08:29:54 np0005466030 podman[259367]: 2025-10-02 12:29:54.832603033 +0000 UTC m=+0.076381905 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:29:56 np0005466030 nova_compute[230518]: 2025-10-02 12:29:56.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:29:56 np0005466030 nova_compute[230518]: 2025-10-02 12:29:56.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:29:56 np0005466030 nova_compute[230518]: 2025-10-02 12:29:56.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:29:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:56.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:56 np0005466030 nova_compute[230518]: 2025-10-02 12:29:56.279 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-55bd545d-c449-4749-a3f1-b04f0f37e06e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:29:56 np0005466030 nova_compute[230518]: 2025-10-02 12:29:56.279 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-55bd545d-c449-4749-a3f1-b04f0f37e06e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:29:56 np0005466030 nova_compute[230518]: 2025-10-02 12:29:56.280 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:29:56 np0005466030 nova_compute[230518]: 2025-10-02 12:29:56.280 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 55bd545d-c449-4749-a3f1-b04f0f37e06e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:56.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:56 np0005466030 nova_compute[230518]: 2025-10-02 12:29:56.428 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:29:56 np0005466030 nova_compute[230518]: 2025-10-02 12:29:56.792 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:29:56 np0005466030 nova_compute[230518]: 2025-10-02 12:29:56.816 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-55bd545d-c449-4749-a3f1-b04f0f37e06e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:29:56 np0005466030 nova_compute[230518]: 2025-10-02 12:29:56.816 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:29:57 np0005466030 nova_compute[230518]: 2025-10-02 12:29:57.105 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "55bd545d-c449-4749-a3f1-b04f0f37e06e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:57 np0005466030 nova_compute[230518]: 2025-10-02 12:29:57.106 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "55bd545d-c449-4749-a3f1-b04f0f37e06e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:57 np0005466030 nova_compute[230518]: 2025-10-02 12:29:57.106 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "55bd545d-c449-4749-a3f1-b04f0f37e06e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:29:57 np0005466030 nova_compute[230518]: 2025-10-02 12:29:57.107 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "55bd545d-c449-4749-a3f1-b04f0f37e06e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:29:57 np0005466030 nova_compute[230518]: 2025-10-02 12:29:57.107 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "55bd545d-c449-4749-a3f1-b04f0f37e06e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:29:57 np0005466030 nova_compute[230518]: 2025-10-02 12:29:57.109 2 INFO nova.compute.manager [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Terminating instance#033[00m
Oct  2 08:29:57 np0005466030 nova_compute[230518]: 2025-10-02 12:29:57.111 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "refresh_cache-55bd545d-c449-4749-a3f1-b04f0f37e06e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:29:57 np0005466030 nova_compute[230518]: 2025-10-02 12:29:57.111 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquired lock "refresh_cache-55bd545d-c449-4749-a3f1-b04f0f37e06e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:29:57 np0005466030 nova_compute[230518]: 2025-10-02 12:29:57.111 2 DEBUG nova.network.neutron [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:29:57 np0005466030 nova_compute[230518]: 2025-10-02 12:29:57.253 2 DEBUG nova.network.neutron [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:29:57 np0005466030 nova_compute[230518]: 2025-10-02 12:29:57.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:57 np0005466030 nova_compute[230518]: 2025-10-02 12:29:57.714 2 DEBUG nova.network.neutron [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:29:57 np0005466030 nova_compute[230518]: 2025-10-02 12:29:57.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:29:57 np0005466030 nova_compute[230518]: 2025-10-02 12:29:57.735 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Releasing lock "refresh_cache-55bd545d-c449-4749-a3f1-b04f0f37e06e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:29:57 np0005466030 nova_compute[230518]: 2025-10-02 12:29:57.736 2 DEBUG nova.compute.manager [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:29:57 np0005466030 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000040.scope: Deactivated successfully.
Oct  2 08:29:57 np0005466030 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000040.scope: Consumed 15.041s CPU time.
Oct  2 08:29:57 np0005466030 systemd-machined[188247]: Machine qemu-33-instance-00000040 terminated.
Oct  2 08:29:57 np0005466030 nova_compute[230518]: 2025-10-02 12:29:57.960 2 INFO nova.virt.libvirt.driver [-] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Instance destroyed successfully.#033[00m
Oct  2 08:29:57 np0005466030 nova_compute[230518]: 2025-10-02 12:29:57.960 2 DEBUG nova.objects.instance [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lazy-loading 'resources' on Instance uuid 55bd545d-c449-4749-a3f1-b04f0f37e06e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:29:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:29:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:58.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:29:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:29:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:29:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:58.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:29:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:29:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:29:58.841 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:29:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e234 e234: 3 total, 3 up, 3 in
Oct  2 08:30:00 np0005466030 nova_compute[230518]: 2025-10-02 12:30:00.101 2 INFO nova.virt.libvirt.driver [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Deleting instance files /var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e_del#033[00m
Oct  2 08:30:00 np0005466030 nova_compute[230518]: 2025-10-02 12:30:00.101 2 INFO nova.virt.libvirt.driver [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Deletion of /var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e_del complete#033[00m
Oct  2 08:30:00 np0005466030 nova_compute[230518]: 2025-10-02 12:30:00.150 2 INFO nova.compute.manager [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Took 2.41 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:30:00 np0005466030 nova_compute[230518]: 2025-10-02 12:30:00.151 2 DEBUG oslo.service.loopingcall [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:30:00 np0005466030 nova_compute[230518]: 2025-10-02 12:30:00.151 2 DEBUG nova.compute.manager [-] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:30:00 np0005466030 nova_compute[230518]: 2025-10-02 12:30:00.151 2 DEBUG nova.network.neutron [-] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:30:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:30:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:00.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:30:00 np0005466030 ceph-mon[80926]: overall HEALTH_OK
Oct  2 08:30:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:00.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:00 np0005466030 nova_compute[230518]: 2025-10-02 12:30:00.432 2 DEBUG nova.network.neutron [-] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:30:00 np0005466030 nova_compute[230518]: 2025-10-02 12:30:00.451 2 DEBUG nova.network.neutron [-] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:00 np0005466030 nova_compute[230518]: 2025-10-02 12:30:00.465 2 INFO nova.compute.manager [-] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Took 0.31 seconds to deallocate network for instance.#033[00m
Oct  2 08:30:00 np0005466030 nova_compute[230518]: 2025-10-02 12:30:00.524 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:00 np0005466030 nova_compute[230518]: 2025-10-02 12:30:00.525 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:00 np0005466030 nova_compute[230518]: 2025-10-02 12:30:00.605 2 DEBUG oslo_concurrency.processutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:01 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/653846976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:01 np0005466030 nova_compute[230518]: 2025-10-02 12:30:01.090 2 DEBUG oslo_concurrency.processutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:01 np0005466030 nova_compute[230518]: 2025-10-02 12:30:01.098 2 DEBUG nova.compute.provider_tree [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:30:01 np0005466030 nova_compute[230518]: 2025-10-02 12:30:01.119 2 DEBUG nova.scheduler.client.report [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:30:01 np0005466030 nova_compute[230518]: 2025-10-02 12:30:01.140 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:01 np0005466030 nova_compute[230518]: 2025-10-02 12:30:01.166 2 INFO nova.scheduler.client.report [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Deleted allocations for instance 55bd545d-c449-4749-a3f1-b04f0f37e06e#033[00m
Oct  2 08:30:01 np0005466030 nova_compute[230518]: 2025-10-02 12:30:01.254 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "55bd545d-c449-4749-a3f1-b04f0f37e06e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:02.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:02.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:02 np0005466030 nova_compute[230518]: 2025-10-02 12:30:02.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e235 e235: 3 total, 3 up, 3 in
Oct  2 08:30:02 np0005466030 nova_compute[230518]: 2025-10-02 12:30:02.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:04.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:30:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:04.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:30:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e236 e236: 3 total, 3 up, 3 in
Oct  2 08:30:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:06.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:06.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:07 np0005466030 nova_compute[230518]: 2025-10-02 12:30:07.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:07 np0005466030 nova_compute[230518]: 2025-10-02 12:30:07.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:08.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:30:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:08.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:30:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e237 e237: 3 total, 3 up, 3 in
Oct  2 08:30:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:10.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:10.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:30:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:12.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:30:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:12.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e238 e238: 3 total, 3 up, 3 in
Oct  2 08:30:12 np0005466030 nova_compute[230518]: 2025-10-02 12:30:12.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:12 np0005466030 nova_compute[230518]: 2025-10-02 12:30:12.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:12 np0005466030 nova_compute[230518]: 2025-10-02 12:30:12.960 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408197.957015, 55bd545d-c449-4749-a3f1-b04f0f37e06e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:12 np0005466030 nova_compute[230518]: 2025-10-02 12:30:12.961 2 INFO nova.compute.manager [-] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:30:12 np0005466030 nova_compute[230518]: 2025-10-02 12:30:12.981 2 DEBUG nova.compute.manager [None req-25bb732b-7918-44d7-9fc1-056d2940d766 - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:14.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:14.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:14 np0005466030 podman[259454]: 2025-10-02 12:30:14.830540378 +0000 UTC m=+0.079073229 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  2 08:30:14 np0005466030 podman[259453]: 2025-10-02 12:30:14.878856329 +0000 UTC m=+0.123345563 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:30:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:16.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:16.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:16 np0005466030 nova_compute[230518]: 2025-10-02 12:30:16.765 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "60c7cce3-3461-4cad-a135-46f35e607214" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:16 np0005466030 nova_compute[230518]: 2025-10-02 12:30:16.766 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:16 np0005466030 nova_compute[230518]: 2025-10-02 12:30:16.804 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:30:16 np0005466030 nova_compute[230518]: 2025-10-02 12:30:16.935 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:16 np0005466030 nova_compute[230518]: 2025-10-02 12:30:16.936 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:16 np0005466030 nova_compute[230518]: 2025-10-02 12:30:16.950 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:30:16 np0005466030 nova_compute[230518]: 2025-10-02 12:30:16.951 2 INFO nova.compute.claims [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:30:17 np0005466030 nova_compute[230518]: 2025-10-02 12:30:17.062 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2337983430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:17 np0005466030 nova_compute[230518]: 2025-10-02 12:30:17.516 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:17 np0005466030 nova_compute[230518]: 2025-10-02 12:30:17.522 2 DEBUG nova.compute.provider_tree [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:30:17 np0005466030 nova_compute[230518]: 2025-10-02 12:30:17.541 2 DEBUG nova.scheduler.client.report [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:30:17 np0005466030 nova_compute[230518]: 2025-10-02 12:30:17.574 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:17 np0005466030 nova_compute[230518]: 2025-10-02 12:30:17.574 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:30:17 np0005466030 nova_compute[230518]: 2025-10-02 12:30:17.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:17 np0005466030 nova_compute[230518]: 2025-10-02 12:30:17.669 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:30:17 np0005466030 nova_compute[230518]: 2025-10-02 12:30:17.670 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:30:17 np0005466030 nova_compute[230518]: 2025-10-02 12:30:17.688 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:30:17 np0005466030 nova_compute[230518]: 2025-10-02 12:30:17.714 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:30:17 np0005466030 nova_compute[230518]: 2025-10-02 12:30:17.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:17 np0005466030 nova_compute[230518]: 2025-10-02 12:30:17.928 2 DEBUG nova.policy [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '045de4bc70204ae8b6975513839061d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '546222ddef05450d9aeb91e721403b5b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:30:17 np0005466030 nova_compute[230518]: 2025-10-02 12:30:17.937 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:30:17 np0005466030 nova_compute[230518]: 2025-10-02 12:30:17.939 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:30:17 np0005466030 nova_compute[230518]: 2025-10-02 12:30:17.939 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Creating image(s)#033[00m
Oct  2 08:30:17 np0005466030 nova_compute[230518]: 2025-10-02 12:30:17.979 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 60c7cce3-3461-4cad-a135-46f35e607214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:18 np0005466030 nova_compute[230518]: 2025-10-02 12:30:18.024 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 60c7cce3-3461-4cad-a135-46f35e607214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:18 np0005466030 nova_compute[230518]: 2025-10-02 12:30:18.055 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 60c7cce3-3461-4cad-a135-46f35e607214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:18 np0005466030 nova_compute[230518]: 2025-10-02 12:30:18.059 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:18 np0005466030 nova_compute[230518]: 2025-10-02 12:30:18.152 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:18 np0005466030 nova_compute[230518]: 2025-10-02 12:30:18.154 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:18 np0005466030 nova_compute[230518]: 2025-10-02 12:30:18.156 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:18 np0005466030 nova_compute[230518]: 2025-10-02 12:30:18.156 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:18 np0005466030 nova_compute[230518]: 2025-10-02 12:30:18.211 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 60c7cce3-3461-4cad-a135-46f35e607214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:18 np0005466030 nova_compute[230518]: 2025-10-02 12:30:18.216 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 60c7cce3-3461-4cad-a135-46f35e607214_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:18.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:18.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:18 np0005466030 nova_compute[230518]: 2025-10-02 12:30:18.637 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Successfully created port: a5294afe-68a4-4f22-b51c-725ac6164e9f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:30:19 np0005466030 nova_compute[230518]: 2025-10-02 12:30:19.647 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Successfully updated port: a5294afe-68a4-4f22-b51c-725ac6164e9f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:30:19 np0005466030 nova_compute[230518]: 2025-10-02 12:30:19.669 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "refresh_cache-60c7cce3-3461-4cad-a135-46f35e607214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:30:19 np0005466030 nova_compute[230518]: 2025-10-02 12:30:19.669 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquired lock "refresh_cache-60c7cce3-3461-4cad-a135-46f35e607214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:30:19 np0005466030 nova_compute[230518]: 2025-10-02 12:30:19.669 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:30:19 np0005466030 nova_compute[230518]: 2025-10-02 12:30:19.753 2 DEBUG nova.compute.manager [req-65f2e8cc-9cac-4008-9cb7-a9aaaec45843 req-a25c1105-2d90-4acd-8b03-bb46902bdaf0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Received event network-changed-a5294afe-68a4-4f22-b51c-725ac6164e9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:19 np0005466030 nova_compute[230518]: 2025-10-02 12:30:19.753 2 DEBUG nova.compute.manager [req-65f2e8cc-9cac-4008-9cb7-a9aaaec45843 req-a25c1105-2d90-4acd-8b03-bb46902bdaf0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Refreshing instance network info cache due to event network-changed-a5294afe-68a4-4f22-b51c-725ac6164e9f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:30:19 np0005466030 nova_compute[230518]: 2025-10-02 12:30:19.753 2 DEBUG oslo_concurrency.lockutils [req-65f2e8cc-9cac-4008-9cb7-a9aaaec45843 req-a25c1105-2d90-4acd-8b03-bb46902bdaf0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-60c7cce3-3461-4cad-a135-46f35e607214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:30:19 np0005466030 nova_compute[230518]: 2025-10-02 12:30:19.811 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:30:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:20.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:20.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:20 np0005466030 nova_compute[230518]: 2025-10-02 12:30:20.461 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Updating instance_info_cache with network_info: [{"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:20 np0005466030 nova_compute[230518]: 2025-10-02 12:30:20.501 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Releasing lock "refresh_cache-60c7cce3-3461-4cad-a135-46f35e607214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:30:20 np0005466030 nova_compute[230518]: 2025-10-02 12:30:20.501 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Instance network_info: |[{"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:30:20 np0005466030 nova_compute[230518]: 2025-10-02 12:30:20.502 2 DEBUG oslo_concurrency.lockutils [req-65f2e8cc-9cac-4008-9cb7-a9aaaec45843 req-a25c1105-2d90-4acd-8b03-bb46902bdaf0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-60c7cce3-3461-4cad-a135-46f35e607214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:30:20 np0005466030 nova_compute[230518]: 2025-10-02 12:30:20.502 2 DEBUG nova.network.neutron [req-65f2e8cc-9cac-4008-9cb7-a9aaaec45843 req-a25c1105-2d90-4acd-8b03-bb46902bdaf0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Refreshing network info cache for port a5294afe-68a4-4f22-b51c-725ac6164e9f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:30:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 e239: 3 total, 3 up, 3 in
Oct  2 08:30:21 np0005466030 nova_compute[230518]: 2025-10-02 12:30:21.817 2 DEBUG nova.network.neutron [req-65f2e8cc-9cac-4008-9cb7-a9aaaec45843 req-a25c1105-2d90-4acd-8b03-bb46902bdaf0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Updated VIF entry in instance network info cache for port a5294afe-68a4-4f22-b51c-725ac6164e9f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:30:21 np0005466030 nova_compute[230518]: 2025-10-02 12:30:21.818 2 DEBUG nova.network.neutron [req-65f2e8cc-9cac-4008-9cb7-a9aaaec45843 req-a25c1105-2d90-4acd-8b03-bb46902bdaf0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Updating instance_info_cache with network_info: [{"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:21 np0005466030 nova_compute[230518]: 2025-10-02 12:30:21.851 2 DEBUG oslo_concurrency.lockutils [req-65f2e8cc-9cac-4008-9cb7-a9aaaec45843 req-a25c1105-2d90-4acd-8b03-bb46902bdaf0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-60c7cce3-3461-4cad-a135-46f35e607214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:30:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:30:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:22.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:30:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:22.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:22 np0005466030 nova_compute[230518]: 2025-10-02 12:30:22.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:22 np0005466030 nova_compute[230518]: 2025-10-02 12:30:22.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:23 np0005466030 podman[259788]: 2025-10-02 12:30:23.408550953 +0000 UTC m=+0.081830025 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 08:30:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:23 np0005466030 podman[259788]: 2025-10-02 12:30:23.52569264 +0000 UTC m=+0.198971712 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  2 08:30:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:30:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:24.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:30:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:24.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:25 np0005466030 podman[259909]: 2025-10-02 12:30:25.85487312 +0000 UTC m=+0.095971591 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:30:25 np0005466030 podman[259910]: 2025-10-02 12:30:25.859812985 +0000 UTC m=+0.100914746 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:30:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:25.927 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:25.927 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:25.928 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:30:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:26.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:30:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:26.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:27 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:30:27 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:30:27 np0005466030 nova_compute[230518]: 2025-10-02 12:30:27.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:27 np0005466030 nova_compute[230518]: 2025-10-02 12:30:27.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:28 np0005466030 nova_compute[230518]: 2025-10-02 12:30:28.042 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 60c7cce3-3461-4cad-a135-46f35e607214_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 9.826s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:28 np0005466030 nova_compute[230518]: 2025-10-02 12:30:28.116 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] resizing rbd image 60c7cce3-3461-4cad-a135-46f35e607214_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:30:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:28.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:28.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:30.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:30.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.613 2 DEBUG nova.objects.instance [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lazy-loading 'migration_context' on Instance uuid 60c7cce3-3461-4cad-a135-46f35e607214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.628 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.628 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Ensure instance console log exists: /var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.629 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.630 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.630 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.634 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Start _get_guest_xml network_info=[{"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.640 2 WARNING nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.646 2 DEBUG nova.virt.libvirt.host [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.647 2 DEBUG nova.virt.libvirt.host [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.651 2 DEBUG nova.virt.libvirt.host [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.652 2 DEBUG nova.virt.libvirt.host [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.654 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.654 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.655 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.655 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.656 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.656 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.657 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.657 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.658 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.658 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.658 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.659 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:30:31 np0005466030 nova_compute[230518]: 2025-10-02 12:30:31.664 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:30:32 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3003137044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.151 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.191 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 60c7cce3-3461-4cad-a135-46f35e607214_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.196 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:30:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:32.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:30:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:32.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:30:32 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/750541670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.822 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.825 2 DEBUG nova.virt.libvirt.vif [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-978789831',display_name='tempest-ListServersNegativeTestJSON-server-978789831-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-978789831-2',id=72,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='546222ddef05450d9aeb91e721403b5b',ramdisk_id='',reservation_id='r-dhkt2a9n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-400261674',owner_user_name='tempest-
ListServersNegativeTestJSON-400261674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:17Z,user_data=None,user_id='045de4bc70204ae8b6975513839061d8',uuid=60c7cce3-3461-4cad-a135-46f35e607214,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.826 2 DEBUG nova.network.os_vif_util [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converting VIF {"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.828 2 DEBUG nova.network.os_vif_util [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:ab:65,bridge_name='br-int',has_traffic_filtering=True,id=a5294afe-68a4-4f22-b51c-725ac6164e9f,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5294afe-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.830 2 DEBUG nova.objects.instance [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lazy-loading 'pci_devices' on Instance uuid 60c7cce3-3461-4cad-a135-46f35e607214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.864 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:30:32 np0005466030 nova_compute[230518]:  <uuid>60c7cce3-3461-4cad-a135-46f35e607214</uuid>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:  <name>instance-00000048</name>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <nova:name>tempest-ListServersNegativeTestJSON-server-978789831-2</nova:name>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:30:31</nova:creationTime>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:30:32 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:        <nova:user uuid="045de4bc70204ae8b6975513839061d8">tempest-ListServersNegativeTestJSON-400261674-project-member</nova:user>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:        <nova:project uuid="546222ddef05450d9aeb91e721403b5b">tempest-ListServersNegativeTestJSON-400261674</nova:project>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:        <nova:port uuid="a5294afe-68a4-4f22-b51c-725ac6164e9f">
Oct  2 08:30:32 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <entry name="serial">60c7cce3-3461-4cad-a135-46f35e607214</entry>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <entry name="uuid">60c7cce3-3461-4cad-a135-46f35e607214</entry>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/60c7cce3-3461-4cad-a135-46f35e607214_disk">
Oct  2 08:30:32 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:30:32 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/60c7cce3-3461-4cad-a135-46f35e607214_disk.config">
Oct  2 08:30:32 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:30:32 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:c3:ab:65"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <target dev="tapa5294afe-68"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214/console.log" append="off"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:30:32 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:30:32 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:30:32 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:30:32 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.866 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Preparing to wait for external event network-vif-plugged-a5294afe-68a4-4f22-b51c-725ac6164e9f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.867 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "60c7cce3-3461-4cad-a135-46f35e607214-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.867 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.868 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.869 2 DEBUG nova.virt.libvirt.vif [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-978789831',display_name='tempest-ListServersNegativeTestJSON-server-978789831-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-978789831-2',id=72,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='546222ddef05450d9aeb91e721403b5b',ramdisk_id='',reservation_id='r-dhkt2a9n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-400261674',owner_user_name
='tempest-ListServersNegativeTestJSON-400261674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:17Z,user_data=None,user_id='045de4bc70204ae8b6975513839061d8',uuid=60c7cce3-3461-4cad-a135-46f35e607214,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.869 2 DEBUG nova.network.os_vif_util [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converting VIF {"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.870 2 DEBUG nova.network.os_vif_util [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:ab:65,bridge_name='br-int',has_traffic_filtering=True,id=a5294afe-68a4-4f22-b51c-725ac6164e9f,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5294afe-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.870 2 DEBUG os_vif [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:ab:65,bridge_name='br-int',has_traffic_filtering=True,id=a5294afe-68a4-4f22-b51c-725ac6164e9f,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5294afe-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.871 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.872 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.877 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5294afe-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.878 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa5294afe-68, col_values=(('external_ids', {'iface-id': 'a5294afe-68a4-4f22-b51c-725ac6164e9f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:ab:65', 'vm-uuid': '60c7cce3-3461-4cad-a135-46f35e607214'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:30:32 np0005466030 NetworkManager[44960]: <info>  [1759408232.8811] manager: (tapa5294afe-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.888 2 INFO os_vif [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:ab:65,bridge_name='br-int',has_traffic_filtering=True,id=a5294afe-68a4-4f22-b51c-725ac6164e9f,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5294afe-68')#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.959 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.961 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.962 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] No VIF found with MAC fa:16:3e:c3:ab:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:30:32 np0005466030 nova_compute[230518]: 2025-10-02 12:30:32.963 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Using config drive#033[00m
Oct  2 08:30:33 np0005466030 nova_compute[230518]: 2025-10-02 12:30:33.007 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 60c7cce3-3461-4cad-a135-46f35e607214_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:30:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:33 np0005466030 nova_compute[230518]: 2025-10-02 12:30:33.604 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Creating config drive at /var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214/disk.config#033[00m
Oct  2 08:30:33 np0005466030 nova_compute[230518]: 2025-10-02 12:30:33.613 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2opik9yc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:33 np0005466030 nova_compute[230518]: 2025-10-02 12:30:33.769 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2opik9yc" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:33 np0005466030 nova_compute[230518]: 2025-10-02 12:30:33.797 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 60c7cce3-3461-4cad-a135-46f35e607214_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:33 np0005466030 nova_compute[230518]: 2025-10-02 12:30:33.801 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214/disk.config 60c7cce3-3461-4cad-a135-46f35e607214_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:30:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:34.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:30:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:34.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:34 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:30:34 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:30:34 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:30:35 np0005466030 nova_compute[230518]: 2025-10-02 12:30:35.552 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214/disk.config 60c7cce3-3461-4cad-a135-46f35e607214_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.751s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:35 np0005466030 nova_compute[230518]: 2025-10-02 12:30:35.553 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Deleting local config drive /var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214/disk.config because it was imported into RBD.#033[00m
Oct  2 08:30:35 np0005466030 kernel: tapa5294afe-68: entered promiscuous mode
Oct  2 08:30:35 np0005466030 NetworkManager[44960]: <info>  [1759408235.6248] manager: (tapa5294afe-68): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Oct  2 08:30:35 np0005466030 ovn_controller[129257]: 2025-10-02T12:30:35Z|00307|binding|INFO|Claiming lport a5294afe-68a4-4f22-b51c-725ac6164e9f for this chassis.
Oct  2 08:30:35 np0005466030 ovn_controller[129257]: 2025-10-02T12:30:35Z|00308|binding|INFO|a5294afe-68a4-4f22-b51c-725ac6164e9f: Claiming fa:16:3e:c3:ab:65 10.100.0.8
Oct  2 08:30:35 np0005466030 nova_compute[230518]: 2025-10-02 12:30:35.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.641 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:ab:65 10.100.0.8'], port_security=['fa:16:3e:c3:ab:65 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '60c7cce3-3461-4cad-a135-46f35e607214', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '546222ddef05450d9aeb91e721403b5b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0fed574f-e419-4ee1-a1ca-0cdfc77deb52', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9e589ca-e402-46cd-b1b2-28eed346077b, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a5294afe-68a4-4f22-b51c-725ac6164e9f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.643 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a5294afe-68a4-4f22-b51c-725ac6164e9f in datapath f0f24c0d-e50a-47b1-8faa-15e38342da63 bound to our chassis#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.647 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0f24c0d-e50a-47b1-8faa-15e38342da63#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.663 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f998613a-0aaf-403e-b9f0-a7e67b7c55f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.664 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf0f24c0d-e1 in ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:30:35 np0005466030 systemd-udevd[260288]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.667 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf0f24c0d-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.668 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d35d3eb9-c4e9-43a3-982e-11e64f5f7a17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.669 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4cef17-1b4a-4aaf-9bfd-b63833eaae12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:35 np0005466030 systemd-machined[188247]: New machine qemu-36-instance-00000048.
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.684 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[27797add-e2a7-45c3-916c-f5e030d55ded]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:35 np0005466030 NetworkManager[44960]: <info>  [1759408235.6879] device (tapa5294afe-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:30:35 np0005466030 systemd[1]: Started Virtual Machine qemu-36-instance-00000048.
Oct  2 08:30:35 np0005466030 NetworkManager[44960]: <info>  [1759408235.6908] device (tapa5294afe-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.702 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1de3c155-0252-4a4f-98bc-7df52f47e747]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:35 np0005466030 nova_compute[230518]: 2025-10-02 12:30:35.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:35 np0005466030 nova_compute[230518]: 2025-10-02 12:30:35.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:35 np0005466030 ovn_controller[129257]: 2025-10-02T12:30:35Z|00309|binding|INFO|Setting lport a5294afe-68a4-4f22-b51c-725ac6164e9f ovn-installed in OVS
Oct  2 08:30:35 np0005466030 ovn_controller[129257]: 2025-10-02T12:30:35Z|00310|binding|INFO|Setting lport a5294afe-68a4-4f22-b51c-725ac6164e9f up in Southbound
Oct  2 08:30:35 np0005466030 nova_compute[230518]: 2025-10-02 12:30:35.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.732 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c83ffce6-e11d-4e2d-832b-ca365cab9e1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:35 np0005466030 NetworkManager[44960]: <info>  [1759408235.7375] manager: (tapf0f24c0d-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/144)
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.736 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2feb2783-f8f1-406d-a1e6-e849b6b332e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.765 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc1fdf5-a0b4-4d9d-9e26-33035f263749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.767 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[bbbb14f1-cc4e-4a92-9e93-2b7ffc89af9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:35 np0005466030 NetworkManager[44960]: <info>  [1759408235.7937] device (tapf0f24c0d-e0): carrier: link connected
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.797 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a05f670b-1753-483c-8471-6bc9fea371a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.813 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[20fd004e-f591-4834-b504-ca3a9fca872b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0f24c0d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:5e:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609935, 'reachable_time': 16042, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260321, 'error': None, 'target': 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.827 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6691d624-6532-471c-9c8f-486ec42ed481]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe85:5ed7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609935, 'tstamp': 609935}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260322, 'error': None, 'target': 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.841 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[18a28e3e-5088-458d-9f6b-f4a6ed482f51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0f24c0d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:5e:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609935, 'reachable_time': 16042, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260323, 'error': None, 'target': 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.875 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7049614d-ef13-4ea6-aa4a-025abe65c5b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.931 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3434f0eb-677a-4625-937a-659ebfcfab75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.932 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0f24c0d-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.932 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.933 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0f24c0d-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:35 np0005466030 kernel: tapf0f24c0d-e0: entered promiscuous mode
Oct  2 08:30:35 np0005466030 nova_compute[230518]: 2025-10-02 12:30:35.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:35 np0005466030 NetworkManager[44960]: <info>  [1759408235.9373] manager: (tapf0f24c0d-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.937 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0f24c0d-e0, col_values=(('external_ids', {'iface-id': 'aa017360-5737-4ad9-a150-2ba1122b7ea5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:35 np0005466030 nova_compute[230518]: 2025-10-02 12:30:35.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:35 np0005466030 nova_compute[230518]: 2025-10-02 12:30:35.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.940 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f0f24c0d-e50a-47b1-8faa-15e38342da63.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f0f24c0d-e50a-47b1-8faa-15e38342da63.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.940 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[56a77635-df2e-4d32-9cc8-c836dadb43cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:35 np0005466030 ovn_controller[129257]: 2025-10-02T12:30:35Z|00311|binding|INFO|Releasing lport aa017360-5737-4ad9-a150-2ba1122b7ea5 from this chassis (sb_readonly=0)
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.941 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-f0f24c0d-e50a-47b1-8faa-15e38342da63
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/f0f24c0d-e50a-47b1-8faa-15e38342da63.pid.haproxy
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID f0f24c0d-e50a-47b1-8faa-15e38342da63
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:30:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.941 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'env', 'PROCESS_TAG=haproxy-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f0f24c0d-e50a-47b1-8faa-15e38342da63.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:30:35 np0005466030 nova_compute[230518]: 2025-10-02 12:30:35.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:30:36 np0005466030 nova_compute[230518]: 2025-10-02 12:30:36.059 2 DEBUG nova.compute.manager [req-0170a24d-3f1a-4926-b380-181cf931d715 req-e3b5bbe1-d20d-4987-9803-db7e80d70bdb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Received event network-vif-plugged-a5294afe-68a4-4f22-b51c-725ac6164e9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:36 np0005466030 nova_compute[230518]: 2025-10-02 12:30:36.060 2 DEBUG oslo_concurrency.lockutils [req-0170a24d-3f1a-4926-b380-181cf931d715 req-e3b5bbe1-d20d-4987-9803-db7e80d70bdb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "60c7cce3-3461-4cad-a135-46f35e607214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:36 np0005466030 nova_compute[230518]: 2025-10-02 12:30:36.060 2 DEBUG oslo_concurrency.lockutils [req-0170a24d-3f1a-4926-b380-181cf931d715 req-e3b5bbe1-d20d-4987-9803-db7e80d70bdb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:36 np0005466030 nova_compute[230518]: 2025-10-02 12:30:36.061 2 DEBUG oslo_concurrency.lockutils [req-0170a24d-3f1a-4926-b380-181cf931d715 req-e3b5bbe1-d20d-4987-9803-db7e80d70bdb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:36 np0005466030 nova_compute[230518]: 2025-10-02 12:30:36.061 2 DEBUG nova.compute.manager [req-0170a24d-3f1a-4926-b380-181cf931d715 req-e3b5bbe1-d20d-4987-9803-db7e80d70bdb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Processing event network-vif-plugged-a5294afe-68a4-4f22-b51c-725ac6164e9f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:30:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:36.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:36 np0005466030 podman[260373]: 2025-10-02 12:30:36.279698505 +0000 UTC m=+0.031598525 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:30:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:36.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:36 np0005466030 podman[260373]: 2025-10-02 12:30:36.466027908 +0000 UTC m=+0.217927898 container create bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:30:36 np0005466030 systemd[1]: Started libpod-conmon-bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad.scope.
Oct  2 08:30:36 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:30:36 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187f7d9e4444998b8922733136aa4f6bb49ad93e881eff99c8c8d9eeefe99b35/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:30:36 np0005466030 podman[260373]: 2025-10-02 12:30:36.56969856 +0000 UTC m=+0.321598580 container init bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:30:36 np0005466030 podman[260373]: 2025-10-02 12:30:36.574925375 +0000 UTC m=+0.326825365 container start bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:30:36 np0005466030 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[260406]: [NOTICE]   (260410) : New worker (260412) forked
Oct  2 08:30:36 np0005466030 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[260406]: [NOTICE]   (260410) : Loading success.
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.244 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408237.2439508, 60c7cce3-3461-4cad-a135-46f35e607214 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.245 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] VM Started (Lifecycle Event)#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.248 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.252 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.257 2 INFO nova.virt.libvirt.driver [-] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Instance spawned successfully.#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.257 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.277 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.285 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.291 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.292 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.293 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.294 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.295 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.296 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.308 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.309 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408237.2440524, 60c7cce3-3461-4cad-a135-46f35e607214 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.309 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.336 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.340 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408237.2514226, 60c7cce3-3461-4cad-a135-46f35e607214 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.341 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.362 2 INFO nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Took 19.42 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.362 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.368 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.377 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.413 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.437 2 INFO nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Took 20.54 seconds to build instance.#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.458 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:37 np0005466030 nova_compute[230518]: 2025-10-02 12:30:37.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:38 np0005466030 nova_compute[230518]: 2025-10-02 12:30:38.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:38 np0005466030 nova_compute[230518]: 2025-10-02 12:30:38.185 2 DEBUG nova.compute.manager [req-1fef6459-cd31-4877-a8d0-aa3bbb51d031 req-d88b6a71-dbd8-47e9-b64c-fa367a84690f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Received event network-vif-plugged-a5294afe-68a4-4f22-b51c-725ac6164e9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:38 np0005466030 nova_compute[230518]: 2025-10-02 12:30:38.186 2 DEBUG oslo_concurrency.lockutils [req-1fef6459-cd31-4877-a8d0-aa3bbb51d031 req-d88b6a71-dbd8-47e9-b64c-fa367a84690f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "60c7cce3-3461-4cad-a135-46f35e607214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:38 np0005466030 nova_compute[230518]: 2025-10-02 12:30:38.186 2 DEBUG oslo_concurrency.lockutils [req-1fef6459-cd31-4877-a8d0-aa3bbb51d031 req-d88b6a71-dbd8-47e9-b64c-fa367a84690f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:38 np0005466030 nova_compute[230518]: 2025-10-02 12:30:38.187 2 DEBUG oslo_concurrency.lockutils [req-1fef6459-cd31-4877-a8d0-aa3bbb51d031 req-d88b6a71-dbd8-47e9-b64c-fa367a84690f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:38 np0005466030 nova_compute[230518]: 2025-10-02 12:30:38.187 2 DEBUG nova.compute.manager [req-1fef6459-cd31-4877-a8d0-aa3bbb51d031 req-d88b6a71-dbd8-47e9-b64c-fa367a84690f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] No waiting events found dispatching network-vif-plugged-a5294afe-68a4-4f22-b51c-725ac6164e9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:30:38 np0005466030 nova_compute[230518]: 2025-10-02 12:30:38.188 2 WARNING nova.compute.manager [req-1fef6459-cd31-4877-a8d0-aa3bbb51d031 req-d88b6a71-dbd8-47e9-b64c-fa367a84690f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Received unexpected event network-vif-plugged-a5294afe-68a4-4f22-b51c-725ac6164e9f for instance with vm_state active and task_state None.#033[00m
Oct  2 08:30:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:38.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct  2 08:30:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:38.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct  2 08:30:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:30:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:40.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:30:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:30:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:40.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:30:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:30:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:42.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:30:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:30:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:42.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:30:42 np0005466030 nova_compute[230518]: 2025-10-02 12:30:42.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:42 np0005466030 nova_compute[230518]: 2025-10-02 12:30:42.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:44 np0005466030 nova_compute[230518]: 2025-10-02 12:30:44.085 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:44 np0005466030 nova_compute[230518]: 2025-10-02 12:30:44.086 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:30:44 np0005466030 nova_compute[230518]: 2025-10-02 12:30:44.203 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquiring lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:44 np0005466030 nova_compute[230518]: 2025-10-02 12:30:44.204 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:44 np0005466030 nova_compute[230518]: 2025-10-02 12:30:44.256 2 DEBUG nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:30:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:30:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:44.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:30:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:44.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:44 np0005466030 nova_compute[230518]: 2025-10-02 12:30:44.435 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:44 np0005466030 nova_compute[230518]: 2025-10-02 12:30:44.436 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:44 np0005466030 nova_compute[230518]: 2025-10-02 12:30:44.444 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:30:44 np0005466030 nova_compute[230518]: 2025-10-02 12:30:44.445 2 INFO nova.compute.claims [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:30:44 np0005466030 nova_compute[230518]: 2025-10-02 12:30:44.646 2 DEBUG nova.scheduler.client.report [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:30:44 np0005466030 nova_compute[230518]: 2025-10-02 12:30:44.679 2 DEBUG nova.scheduler.client.report [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:30:44 np0005466030 nova_compute[230518]: 2025-10-02 12:30:44.679 2 DEBUG nova.compute.provider_tree [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:30:44 np0005466030 nova_compute[230518]: 2025-10-02 12:30:44.700 2 DEBUG nova.scheduler.client.report [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:30:44 np0005466030 nova_compute[230518]: 2025-10-02 12:30:44.814 2 DEBUG nova.scheduler.client.report [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:30:45 np0005466030 nova_compute[230518]: 2025-10-02 12:30:45.067 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:45 np0005466030 nova_compute[230518]: 2025-10-02 12:30:45.098 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:45 np0005466030 nova_compute[230518]: 2025-10-02 12:30:45.143 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.296976) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245297029, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2473, "num_deletes": 264, "total_data_size": 5654713, "memory_usage": 5743656, "flush_reason": "Manual Compaction"}
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245323010, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 3707008, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36290, "largest_seqno": 38758, "table_properties": {"data_size": 3696471, "index_size": 6775, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22145, "raw_average_key_size": 21, "raw_value_size": 3675363, "raw_average_value_size": 3487, "num_data_blocks": 291, "num_entries": 1054, "num_filter_entries": 1054, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408059, "oldest_key_time": 1759408059, "file_creation_time": 1759408245, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 26090 microseconds, and 12709 cpu microseconds.
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.323068) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 3707008 bytes OK
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.323094) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.324789) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.324809) EVENT_LOG_v1 {"time_micros": 1759408245324803, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.324829) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 5643510, prev total WAL file size 5643510, number of live WAL files 2.
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.326620) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(3620KB)], [69(8322KB)]
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245326692, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 12229095, "oldest_snapshot_seqno": -1}
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 6410 keys, 12064410 bytes, temperature: kUnknown
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245388854, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 12064410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12018477, "index_size": 28799, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16069, "raw_key_size": 163825, "raw_average_key_size": 25, "raw_value_size": 11900554, "raw_average_value_size": 1856, "num_data_blocks": 1163, "num_entries": 6410, "num_filter_entries": 6410, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759408245, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.389056) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12064410 bytes
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.390821) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.6 rd, 193.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.1 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(6.6) write-amplify(3.3) OK, records in: 6952, records dropped: 542 output_compression: NoCompression
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.390841) EVENT_LOG_v1 {"time_micros": 1759408245390832, "job": 42, "event": "compaction_finished", "compaction_time_micros": 62218, "compaction_time_cpu_micros": 29118, "output_level": 6, "num_output_files": 1, "total_output_size": 12064410, "num_input_records": 6952, "num_output_records": 6410, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245391642, "job": 42, "event": "table_file_deletion", "file_number": 71}
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245393290, "job": 42, "event": "table_file_deletion", "file_number": 69}
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.326462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.393461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.393470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.393473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.393476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.393478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:45 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/829913293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:45 np0005466030 nova_compute[230518]: 2025-10-02 12:30:45.572 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:45 np0005466030 nova_compute[230518]: 2025-10-02 12:30:45.579 2 DEBUG nova.compute.provider_tree [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:30:45 np0005466030 nova_compute[230518]: 2025-10-02 12:30:45.650 2 DEBUG nova.scheduler.client.report [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:30:45 np0005466030 nova_compute[230518]: 2025-10-02 12:30:45.711 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:45 np0005466030 nova_compute[230518]: 2025-10-02 12:30:45.712 2 DEBUG nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:30:45 np0005466030 nova_compute[230518]: 2025-10-02 12:30:45.717 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:45 np0005466030 nova_compute[230518]: 2025-10-02 12:30:45.717 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:45 np0005466030 nova_compute[230518]: 2025-10-02 12:30:45.718 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:30:45 np0005466030 nova_compute[230518]: 2025-10-02 12:30:45.718 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:45 np0005466030 nova_compute[230518]: 2025-10-02 12:30:45.810 2 DEBUG nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:30:45 np0005466030 nova_compute[230518]: 2025-10-02 12:30:45.811 2 DEBUG nova.network.neutron [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:30:45 np0005466030 podman[260452]: 2025-10-02 12:30:45.833561407 +0000 UTC m=+0.069718244 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct  2 08:30:45 np0005466030 nova_compute[230518]: 2025-10-02 12:30:45.857 2 INFO nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:30:45 np0005466030 nova_compute[230518]: 2025-10-02 12:30:45.880 2 DEBUG nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:30:45 np0005466030 podman[260450]: 2025-10-02 12:30:45.901411892 +0000 UTC m=+0.140035168 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:30:45 np0005466030 nova_compute[230518]: 2025-10-02 12:30:45.986 2 DEBUG nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:30:45 np0005466030 nova_compute[230518]: 2025-10-02 12:30:45.989 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:30:45 np0005466030 nova_compute[230518]: 2025-10-02 12:30:45.990 2 INFO nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Creating image(s)#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.025 2 DEBUG nova.storage.rbd_utils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] rbd image 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.061 2 DEBUG nova.storage.rbd_utils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] rbd image 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.099 2 DEBUG nova.storage.rbd_utils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] rbd image 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.106 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.146 2 DEBUG nova.policy [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eed4fdf8b49f41bfb982bc858fa76bef', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a70008e0fc32481f8ed89060220b28d7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:30:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:46 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/737162632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.190 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.191 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.192 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.193 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.228 2 DEBUG nova.storage.rbd_utils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] rbd image 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.235 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.274 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:46.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.390 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.390 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:30:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:46.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.625 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.627 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4391MB free_disk=20.863059997558594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.627 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.627 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.683 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.774 2 DEBUG nova.storage.rbd_utils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] resizing rbd image 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.886 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 60c7cce3-3461-4cad-a135-46f35e607214 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.886 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 86aa4a20-96fe-4862-a0c5-04ad92e40f1b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.887 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.887 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.897 2 DEBUG nova.objects.instance [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lazy-loading 'migration_context' on Instance uuid 86aa4a20-96fe-4862-a0c5-04ad92e40f1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.930 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.931 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Ensure instance console log exists: /var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.931 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.932 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.932 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:46 np0005466030 nova_compute[230518]: 2025-10-02 12:30:46.992 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.247 2 DEBUG nova.network.neutron [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Successfully created port: 4a337dd1-6b9e-4a79-893f-de7400180a58 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:30:47 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:30:47 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:30:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:47 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4286409731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.480 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.485 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.594 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.643 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.644 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.718 2 DEBUG oslo_concurrency.lockutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "60c7cce3-3461-4cad-a135-46f35e607214" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.719 2 DEBUG oslo_concurrency.lockutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.719 2 DEBUG oslo_concurrency.lockutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "60c7cce3-3461-4cad-a135-46f35e607214-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.719 2 DEBUG oslo_concurrency.lockutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.720 2 DEBUG oslo_concurrency.lockutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.721 2 INFO nova.compute.manager [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Terminating instance#033[00m
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.722 2 DEBUG nova.compute.manager [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:47 np0005466030 kernel: tapa5294afe-68 (unregistering): left promiscuous mode
Oct  2 08:30:47 np0005466030 NetworkManager[44960]: <info>  [1759408247.8073] device (tapa5294afe-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:30:47 np0005466030 ovn_controller[129257]: 2025-10-02T12:30:47Z|00312|binding|INFO|Releasing lport a5294afe-68a4-4f22-b51c-725ac6164e9f from this chassis (sb_readonly=0)
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:47 np0005466030 ovn_controller[129257]: 2025-10-02T12:30:47Z|00313|binding|INFO|Setting lport a5294afe-68a4-4f22-b51c-725ac6164e9f down in Southbound
Oct  2 08:30:47 np0005466030 ovn_controller[129257]: 2025-10-02T12:30:47Z|00314|binding|INFO|Removing iface tapa5294afe-68 ovn-installed in OVS
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:47 np0005466030 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000048.scope: Deactivated successfully.
Oct  2 08:30:47 np0005466030 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000048.scope: Consumed 11.654s CPU time.
Oct  2 08:30:47 np0005466030 systemd-machined[188247]: Machine qemu-36-instance-00000048 terminated.
Oct  2 08:30:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:47.875 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:ab:65 10.100.0.8'], port_security=['fa:16:3e:c3:ab:65 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '60c7cce3-3461-4cad-a135-46f35e607214', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '546222ddef05450d9aeb91e721403b5b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0fed574f-e419-4ee1-a1ca-0cdfc77deb52', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9e589ca-e402-46cd-b1b2-28eed346077b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a5294afe-68a4-4f22-b51c-725ac6164e9f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:30:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:47.877 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a5294afe-68a4-4f22-b51c-725ac6164e9f in datapath f0f24c0d-e50a-47b1-8faa-15e38342da63 unbound from our chassis#033[00m
Oct  2 08:30:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:47.879 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f0f24c0d-e50a-47b1-8faa-15e38342da63, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:30:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:47.880 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0c4de1a7-fbdb-4f9a-83dd-fb9be3d3038c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:47.881 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63 namespace which is not needed anymore#033[00m
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.962 2 INFO nova.virt.libvirt.driver [-] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Instance destroyed successfully.#033[00m
Oct  2 08:30:47 np0005466030 nova_compute[230518]: 2025-10-02 12:30:47.963 2 DEBUG nova.objects.instance [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lazy-loading 'resources' on Instance uuid 60c7cce3-3461-4cad-a135-46f35e607214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.023 2 DEBUG nova.virt.libvirt.vif [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-978789831',display_name='tempest-ListServersNegativeTestJSON-server-978789831-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-978789831-2',id=72,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-02T12:30:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='546222ddef05450d9aeb91e721403b5b',ramdisk_id='',reservation_id='r-dhkt2a9n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='
virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-400261674',owner_user_name='tempest-ListServersNegativeTestJSON-400261674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:37Z,user_data=None,user_id='045de4bc70204ae8b6975513839061d8',uuid=60c7cce3-3461-4cad-a135-46f35e607214,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.024 2 DEBUG nova.network.os_vif_util [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converting VIF {"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.025 2 DEBUG nova.network.os_vif_util [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:ab:65,bridge_name='br-int',has_traffic_filtering=True,id=a5294afe-68a4-4f22-b51c-725ac6164e9f,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5294afe-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.026 2 DEBUG os_vif [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:ab:65,bridge_name='br-int',has_traffic_filtering=True,id=a5294afe-68a4-4f22-b51c-725ac6164e9f,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5294afe-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.029 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5294afe-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.038 2 INFO os_vif [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:ab:65,bridge_name='br-int',has_traffic_filtering=True,id=a5294afe-68a4-4f22-b51c-725ac6164e9f,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5294afe-68')#033[00m
Oct  2 08:30:48 np0005466030 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[260406]: [NOTICE]   (260410) : haproxy version is 2.8.14-c23fe91
Oct  2 08:30:48 np0005466030 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[260406]: [NOTICE]   (260410) : path to executable is /usr/sbin/haproxy
Oct  2 08:30:48 np0005466030 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[260406]: [ALERT]    (260410) : Current worker (260412) exited with code 143 (Terminated)
Oct  2 08:30:48 np0005466030 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[260406]: [WARNING]  (260410) : All workers exited. Exiting... (0)
Oct  2 08:30:48 np0005466030 systemd[1]: libpod-bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad.scope: Deactivated successfully.
Oct  2 08:30:48 np0005466030 podman[260788]: 2025-10-02 12:30:48.090126782 +0000 UTC m=+0.070713586 container died bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 08:30:48 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad-userdata-shm.mount: Deactivated successfully.
Oct  2 08:30:48 np0005466030 systemd[1]: var-lib-containers-storage-overlay-187f7d9e4444998b8922733136aa4f6bb49ad93e881eff99c8c8d9eeefe99b35-merged.mount: Deactivated successfully.
Oct  2 08:30:48 np0005466030 podman[260788]: 2025-10-02 12:30:48.144347318 +0000 UTC m=+0.124934092 container cleanup bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:30:48 np0005466030 systemd[1]: libpod-conmon-bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad.scope: Deactivated successfully.
Oct  2 08:30:48 np0005466030 podman[260836]: 2025-10-02 12:30:48.22577081 +0000 UTC m=+0.053376110 container remove bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 08:30:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:48.232 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[af34291d-588a-46bb-9218-d85f407f8238]: (4, ('Thu Oct  2 12:30:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63 (bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad)\nbee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad\nThu Oct  2 12:30:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63 (bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad)\nbee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:48.234 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b56605a9-a5da-4e17-9be4-6f605f0b7b95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:48.235 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0f24c0d-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:30:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:48.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:30:48 np0005466030 kernel: tapf0f24c0d-e0: left promiscuous mode
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:48.342 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[57ad28c8-9651-4814-adca-1dece810835d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:48.363 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7140fa04-dcfb-47e4-b099-c50bb5ca0a17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:48.364 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d31b1b38-91d8-402b-927e-3ab0e0ceae2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:48.381 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[33cd9c14-b4b7-4bf9-a603-e9d78e980021]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609928, 'reachable_time': 32529, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260851, 'error': None, 'target': 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:48 np0005466030 systemd[1]: run-netns-ovnmeta\x2df0f24c0d\x2de50a\x2d47b1\x2d8faa\x2d15e38342da63.mount: Deactivated successfully.
Oct  2 08:30:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:48.383 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:30:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:48.383 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[e0fb905d-97d0-4832-9ebe-6e9bea2f81eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:48.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.402 2 DEBUG nova.compute.manager [req-d8aee92d-8985-4a7b-abf8-5590cfef7325 req-2769308d-9333-4f00-88bb-83c2c17a0ca5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Received event network-vif-unplugged-a5294afe-68a4-4f22-b51c-725ac6164e9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.403 2 DEBUG oslo_concurrency.lockutils [req-d8aee92d-8985-4a7b-abf8-5590cfef7325 req-2769308d-9333-4f00-88bb-83c2c17a0ca5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "60c7cce3-3461-4cad-a135-46f35e607214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.403 2 DEBUG oslo_concurrency.lockutils [req-d8aee92d-8985-4a7b-abf8-5590cfef7325 req-2769308d-9333-4f00-88bb-83c2c17a0ca5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.403 2 DEBUG oslo_concurrency.lockutils [req-d8aee92d-8985-4a7b-abf8-5590cfef7325 req-2769308d-9333-4f00-88bb-83c2c17a0ca5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.404 2 DEBUG nova.compute.manager [req-d8aee92d-8985-4a7b-abf8-5590cfef7325 req-2769308d-9333-4f00-88bb-83c2c17a0ca5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] No waiting events found dispatching network-vif-unplugged-a5294afe-68a4-4f22-b51c-725ac6164e9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.404 2 DEBUG nova.compute.manager [req-d8aee92d-8985-4a7b-abf8-5590cfef7325 req-2769308d-9333-4f00-88bb-83c2c17a0ca5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Received event network-vif-unplugged-a5294afe-68a4-4f22-b51c-725ac6164e9f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.467109) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248467342, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 319, "num_deletes": 251, "total_data_size": 197299, "memory_usage": 204744, "flush_reason": "Manual Compaction"}
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248470388, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 129947, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38763, "largest_seqno": 39077, "table_properties": {"data_size": 127837, "index_size": 274, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5344, "raw_average_key_size": 18, "raw_value_size": 123706, "raw_average_value_size": 434, "num_data_blocks": 11, "num_entries": 285, "num_filter_entries": 285, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408246, "oldest_key_time": 1759408246, "file_creation_time": 1759408248, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 3303 microseconds, and 1080 cpu microseconds.
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.470414) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 129947 bytes OK
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.470431) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.471570) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.471582) EVENT_LOG_v1 {"time_micros": 1759408248471578, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.471596) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 195007, prev total WAL file size 195007, number of live WAL files 2.
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.472024) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(126KB)], [72(11MB)]
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248472053, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 12194357, "oldest_snapshot_seqno": -1}
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 6183 keys, 10187978 bytes, temperature: kUnknown
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248538664, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 10187978, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10145397, "index_size": 26023, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15493, "raw_key_size": 159817, "raw_average_key_size": 25, "raw_value_size": 10033137, "raw_average_value_size": 1622, "num_data_blocks": 1038, "num_entries": 6183, "num_filter_entries": 6183, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759408248, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.538945) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 10187978 bytes
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.541249) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.7 rd, 152.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.5 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(172.2) write-amplify(78.4) OK, records in: 6695, records dropped: 512 output_compression: NoCompression
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.541270) EVENT_LOG_v1 {"time_micros": 1759408248541260, "job": 44, "event": "compaction_finished", "compaction_time_micros": 66750, "compaction_time_cpu_micros": 21234, "output_level": 6, "num_output_files": 1, "total_output_size": 10187978, "num_input_records": 6695, "num_output_records": 6183, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248541417, "job": 44, "event": "table_file_deletion", "file_number": 74}
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248543896, "job": 44, "event": "table_file_deletion", "file_number": 72}
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.471922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.544008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.544016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.544019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.544021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.544024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.957 2 DEBUG nova.network.neutron [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Successfully updated port: 4a337dd1-6b9e-4a79-893f-de7400180a58 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.991 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquiring lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.992 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquired lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:30:48 np0005466030 nova_compute[230518]: 2025-10-02 12:30:48.992 2 DEBUG nova.network.neutron [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:30:49 np0005466030 nova_compute[230518]: 2025-10-02 12:30:49.154 2 DEBUG nova.network.neutron [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:30:49 np0005466030 nova_compute[230518]: 2025-10-02 12:30:49.589 2 INFO nova.virt.libvirt.driver [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Deleting instance files /var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214_del#033[00m
Oct  2 08:30:49 np0005466030 nova_compute[230518]: 2025-10-02 12:30:49.591 2 INFO nova.virt.libvirt.driver [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Deletion of /var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214_del complete#033[00m
Oct  2 08:30:49 np0005466030 nova_compute[230518]: 2025-10-02 12:30:49.596 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:49 np0005466030 nova_compute[230518]: 2025-10-02 12:30:49.597 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:49 np0005466030 nova_compute[230518]: 2025-10-02 12:30:49.597 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:49 np0005466030 nova_compute[230518]: 2025-10-02 12:30:49.598 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:49 np0005466030 nova_compute[230518]: 2025-10-02 12:30:49.598 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:30:49 np0005466030 nova_compute[230518]: 2025-10-02 12:30:49.652 2 INFO nova.compute.manager [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Took 1.93 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:30:49 np0005466030 nova_compute[230518]: 2025-10-02 12:30:49.653 2 DEBUG oslo.service.loopingcall [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:30:49 np0005466030 nova_compute[230518]: 2025-10-02 12:30:49.653 2 DEBUG nova.compute.manager [-] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:30:49 np0005466030 nova_compute[230518]: 2025-10-02 12:30:49.653 2 DEBUG nova.network.neutron [-] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:30:50 np0005466030 nova_compute[230518]: 2025-10-02 12:30:50.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:30:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:50.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:30:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:50.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.000 2 DEBUG nova.compute.manager [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Received event network-vif-plugged-a5294afe-68a4-4f22-b51c-725ac6164e9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.001 2 DEBUG oslo_concurrency.lockutils [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "60c7cce3-3461-4cad-a135-46f35e607214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.001 2 DEBUG oslo_concurrency.lockutils [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.001 2 DEBUG oslo_concurrency.lockutils [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.001 2 DEBUG nova.compute.manager [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] No waiting events found dispatching network-vif-plugged-a5294afe-68a4-4f22-b51c-725ac6164e9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.002 2 WARNING nova.compute.manager [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Received unexpected event network-vif-plugged-a5294afe-68a4-4f22-b51c-725ac6164e9f for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.002 2 DEBUG nova.compute.manager [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received event network-changed-4a337dd1-6b9e-4a79-893f-de7400180a58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.002 2 DEBUG nova.compute.manager [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Refreshing instance network info cache due to event network-changed-4a337dd1-6b9e-4a79-893f-de7400180a58. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.002 2 DEBUG oslo_concurrency.lockutils [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:30:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:51.286 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:51.290 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.533 2 DEBUG nova.network.neutron [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Updating instance_info_cache with network_info: [{"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.564 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Releasing lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.564 2 DEBUG nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Instance network_info: |[{"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.564 2 DEBUG oslo_concurrency.lockutils [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.564 2 DEBUG nova.network.neutron [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Refreshing network info cache for port 4a337dd1-6b9e-4a79-893f-de7400180a58 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.567 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Start _get_guest_xml network_info=[{"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.573 2 WARNING nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.592 2 DEBUG nova.virt.libvirt.host [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.593 2 DEBUG nova.virt.libvirt.host [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.597 2 DEBUG nova.virt.libvirt.host [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.598 2 DEBUG nova.virt.libvirt.host [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.599 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.599 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.600 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.600 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.600 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.601 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.601 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.601 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.601 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.602 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.602 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.602 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.605 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.667 2 DEBUG nova.network.neutron [-] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.690 2 INFO nova.compute.manager [-] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Took 2.04 seconds to deallocate network for instance.#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.752 2 DEBUG oslo_concurrency.lockutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.753 2 DEBUG oslo_concurrency.lockutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:51 np0005466030 nova_compute[230518]: 2025-10-02 12:30:51.875 2 DEBUG oslo_concurrency.processutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:30:52 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/753036913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.191 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.217 2 DEBUG nova.storage.rbd_utils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] rbd image 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.221 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:30:52 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4246083289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:30:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:52.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.347 2 DEBUG oslo_concurrency.processutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.355 2 DEBUG nova.compute.provider_tree [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.384 2 DEBUG nova.scheduler.client.report [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:30:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:52.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.412 2 DEBUG oslo_concurrency.lockutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.455 2 INFO nova.scheduler.client.report [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Deleted allocations for instance 60c7cce3-3461-4cad-a135-46f35e607214#033[00m
Oct  2 08:30:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:30:52 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1154353251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.665 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.667 2 DEBUG nova.virt.libvirt.vif [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-606434201',display_name='tempest-ServersTestJSON-server-606434201',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-606434201',id=74,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEOSVA0UqkFnW7YuNQRsY27cJ9a2M1giyjrSsiXcPBlr5myx9iDFttDeAnJV4hFOON30Ktzf3cWWwY2KcF8NnunVbyoieS3+bfZniZlgOmGbNNyoaXEKseU0cJsJiwxrpQ==',key_name='tempest-keypair-2131917532',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a70008e0fc32481f8ed89060220b28d7',ramdisk_id='',reservation_id='r-0450lgko',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-128626597',owner_user_name='tempest-ServersTestJSON-128626597-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eed4fdf8b49f41bfb982bc858fa76bef',uuid=86aa4a20-96fe-4862-a0c5-04ad92e40f1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.667 2 DEBUG nova.network.os_vif_util [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Converting VIF {"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.669 2 DEBUG nova.network.os_vif_util [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:a3:65,bridge_name='br-int',has_traffic_filtering=True,id=4a337dd1-6b9e-4a79-893f-de7400180a58,network=Network(41e6c621-c2f2-4fb3-a93d-8eda22ec0438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a337dd1-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.671 2 DEBUG nova.objects.instance [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 86aa4a20-96fe-4862-a0c5-04ad92e40f1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.729 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:30:52 np0005466030 nova_compute[230518]:  <uuid>86aa4a20-96fe-4862-a0c5-04ad92e40f1b</uuid>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:  <name>instance-0000004a</name>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServersTestJSON-server-606434201</nova:name>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:30:51</nova:creationTime>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:30:52 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:        <nova:user uuid="eed4fdf8b49f41bfb982bc858fa76bef">tempest-ServersTestJSON-128626597-project-member</nova:user>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:        <nova:project uuid="a70008e0fc32481f8ed89060220b28d7">tempest-ServersTestJSON-128626597</nova:project>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:        <nova:port uuid="4a337dd1-6b9e-4a79-893f-de7400180a58">
Oct  2 08:30:52 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <entry name="serial">86aa4a20-96fe-4862-a0c5-04ad92e40f1b</entry>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <entry name="uuid">86aa4a20-96fe-4862-a0c5-04ad92e40f1b</entry>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk">
Oct  2 08:30:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:30:52 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk.config">
Oct  2 08:30:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:30:52 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:e1:a3:65"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <target dev="tap4a337dd1-6b"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b/console.log" append="off"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:30:52 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:30:52 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:30:52 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:30:52 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.731 2 DEBUG nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Preparing to wait for external event network-vif-plugged-4a337dd1-6b9e-4a79-893f-de7400180a58 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.732 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquiring lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.732 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.733 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.734 2 DEBUG nova.virt.libvirt.vif [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-606434201',display_name='tempest-ServersTestJSON-server-606434201',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-606434201',id=74,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEOSVA0UqkFnW7YuNQRsY27cJ9a2M1giyjrSsiXcPBlr5myx9iDFttDeAnJV4hFOON30Ktzf3cWWwY2KcF8NnunVbyoieS3+bfZniZlgOmGbNNyoaXEKseU0cJsJiwxrpQ==',key_name='tempest-keypair-2131917532',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a70008e0fc32481f8ed89060220b28d7',ramdisk_id='',reservation_id='r-0450lgko',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-128626597',owner_user_name='tempest-ServersTestJSON-128626597-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eed4fdf8b49f41bfb982bc858fa76bef',uuid=86aa4a20-96fe-4862-a0c5-04ad92e40f1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.735 2 DEBUG nova.network.os_vif_util [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Converting VIF {"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.736 2 DEBUG nova.network.os_vif_util [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:a3:65,bridge_name='br-int',has_traffic_filtering=True,id=4a337dd1-6b9e-4a79-893f-de7400180a58,network=Network(41e6c621-c2f2-4fb3-a93d-8eda22ec0438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a337dd1-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.737 2 DEBUG os_vif [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:a3:65,bridge_name='br-int',has_traffic_filtering=True,id=4a337dd1-6b9e-4a79-893f-de7400180a58,network=Network(41e6c621-c2f2-4fb3-a93d-8eda22ec0438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a337dd1-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.745 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a337dd1-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.746 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4a337dd1-6b, col_values=(('external_ids', {'iface-id': '4a337dd1-6b9e-4a79-893f-de7400180a58', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:a3:65', 'vm-uuid': '86aa4a20-96fe-4862-a0c5-04ad92e40f1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:52 np0005466030 NetworkManager[44960]: <info>  [1759408252.7496] manager: (tap4a337dd1-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.756 2 INFO os_vif [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:a3:65,bridge_name='br-int',has_traffic_filtering=True,id=4a337dd1-6b9e-4a79-893f-de7400180a58,network=Network(41e6c621-c2f2-4fb3-a93d-8eda22ec0438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a337dd1-6b')#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.768 2 DEBUG oslo_concurrency.lockutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.840 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.841 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.841 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] No VIF found with MAC fa:16:3e:e1:a3:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.843 2 INFO nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Using config drive#033[00m
Oct  2 08:30:52 np0005466030 nova_compute[230518]: 2025-10-02 12:30:52.897 2 DEBUG nova.storage.rbd_utils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] rbd image 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:53 np0005466030 nova_compute[230518]: 2025-10-02 12:30:53.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:30:53 np0005466030 nova_compute[230518]: 2025-10-02 12:30:53.188 2 DEBUG nova.compute.manager [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Received event network-vif-deleted-a5294afe-68a4-4f22-b51c-725ac6164e9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:53 np0005466030 nova_compute[230518]: 2025-10-02 12:30:53.634 2 INFO nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Creating config drive at /var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b/disk.config#033[00m
Oct  2 08:30:53 np0005466030 nova_compute[230518]: 2025-10-02 12:30:53.646 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn0_ebhzg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:53 np0005466030 nova_compute[230518]: 2025-10-02 12:30:53.788 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn0_ebhzg" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:53 np0005466030 nova_compute[230518]: 2025-10-02 12:30:53.833 2 DEBUG nova.storage.rbd_utils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] rbd image 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:30:53 np0005466030 nova_compute[230518]: 2025-10-02 12:30:53.838 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b/disk.config 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:30:54 np0005466030 nova_compute[230518]: 2025-10-02 12:30:54.216 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b/disk.config 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.378s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:30:54 np0005466030 nova_compute[230518]: 2025-10-02 12:30:54.217 2 INFO nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Deleting local config drive /var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b/disk.config because it was imported into RBD.#033[00m
Oct  2 08:30:54 np0005466030 kernel: tap4a337dd1-6b: entered promiscuous mode
Oct  2 08:30:54 np0005466030 NetworkManager[44960]: <info>  [1759408254.2901] manager: (tap4a337dd1-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/147)
Oct  2 08:30:54 np0005466030 ovn_controller[129257]: 2025-10-02T12:30:54Z|00315|binding|INFO|Claiming lport 4a337dd1-6b9e-4a79-893f-de7400180a58 for this chassis.
Oct  2 08:30:54 np0005466030 ovn_controller[129257]: 2025-10-02T12:30:54Z|00316|binding|INFO|4a337dd1-6b9e-4a79-893f-de7400180a58: Claiming fa:16:3e:e1:a3:65 10.100.0.10
Oct  2 08:30:54 np0005466030 nova_compute[230518]: 2025-10-02 12:30:54.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:54 np0005466030 nova_compute[230518]: 2025-10-02 12:30:54.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.305 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:a3:65 10.100.0.10'], port_security=['fa:16:3e:e1:a3:65 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '86aa4a20-96fe-4862-a0c5-04ad92e40f1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41e6c621-c2f2-4fb3-a93d-8eda22ec0438', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a70008e0fc32481f8ed89060220b28d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b5e4d5c-c242-45f9-85c8-d58de980e569', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4850e48d-a493-4bb4-bb29-020fdb04c9bf, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=4a337dd1-6b9e-4a79-893f-de7400180a58) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.307 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 4a337dd1-6b9e-4a79-893f-de7400180a58 in datapath 41e6c621-c2f2-4fb3-a93d-8eda22ec0438 bound to our chassis#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.309 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41e6c621-c2f2-4fb3-a93d-8eda22ec0438#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.324 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd635af-4e68-48d1-8292-c599946494b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:54 np0005466030 systemd-udevd[261011]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.325 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41e6c621-c1 in ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.327 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41e6c621-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.327 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2f97c4-63f2-430d-89d3-4319d988a1e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.328 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[406ea6a5-d5a2-4dc4-a50d-3d53764a8731]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:54.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:54 np0005466030 systemd-machined[188247]: New machine qemu-37-instance-0000004a.
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.343 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[10352e60-c0ea-4fca-a11d-a008f41ba956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:54 np0005466030 NetworkManager[44960]: <info>  [1759408254.3500] device (tap4a337dd1-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:30:54 np0005466030 NetworkManager[44960]: <info>  [1759408254.3518] device (tap4a337dd1-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.373 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfbb0a3-f21a-42a6-a3e4-dabb483bcc65]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:54 np0005466030 systemd[1]: Started Virtual Machine qemu-37-instance-0000004a.
Oct  2 08:30:54 np0005466030 nova_compute[230518]: 2025-10-02 12:30:54.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:54 np0005466030 ovn_controller[129257]: 2025-10-02T12:30:54Z|00317|binding|INFO|Setting lport 4a337dd1-6b9e-4a79-893f-de7400180a58 ovn-installed in OVS
Oct  2 08:30:54 np0005466030 ovn_controller[129257]: 2025-10-02T12:30:54Z|00318|binding|INFO|Setting lport 4a337dd1-6b9e-4a79-893f-de7400180a58 up in Southbound
Oct  2 08:30:54 np0005466030 nova_compute[230518]: 2025-10-02 12:30:54.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:54.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.412 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[70a64de9-fbe7-4bc4-9974-824e3c54e46c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:54 np0005466030 NetworkManager[44960]: <info>  [1759408254.4222] manager: (tap41e6c621-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/148)
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.421 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[608cbc54-143e-4ed5-aa97-f2b7c92ef056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.458 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[40e4dc89-896c-4a36-a26c-ab08d91d2b91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.462 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[762d7934-d0e2-4f55-8221-4ef818993123]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:54 np0005466030 NetworkManager[44960]: <info>  [1759408254.5063] device (tap41e6c621-c0): carrier: link connected
Oct  2 08:30:54 np0005466030 nova_compute[230518]: 2025-10-02 12:30:54.515 2 DEBUG nova.network.neutron [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Updated VIF entry in instance network info cache for port 4a337dd1-6b9e-4a79-893f-de7400180a58. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:30:54 np0005466030 nova_compute[230518]: 2025-10-02 12:30:54.516 2 DEBUG nova.network.neutron [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Updating instance_info_cache with network_info: [{"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.516 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[141a481c-0c39-42a4-a21b-e4aded781caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.545 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[083b41f7-3cae-4339-83cf-59a13982db7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41e6c621-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:fa:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611806, 'reachable_time': 37356, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261044, 'error': None, 'target': 'ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.576 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c4bd3476-377c-4536-b39b-bdabac043c7a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe74:fa62'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611806, 'tstamp': 611806}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261045, 'error': None, 'target': 'ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:54 np0005466030 nova_compute[230518]: 2025-10-02 12:30:54.585 2 DEBUG oslo_concurrency.lockutils [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.608 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e930d68e-230a-4073-8b00-8d4d525d2d07]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41e6c621-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:fa:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611806, 'reachable_time': 37356, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261046, 'error': None, 'target': 'ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.665 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[44a8ab6e-6596-418a-81b1-0a7cbc254f07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.762 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e81892a1-185a-4add-bab8-6d9268904fba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.765 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41e6c621-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.765 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.766 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41e6c621-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:54 np0005466030 NetworkManager[44960]: <info>  [1759408254.7700] manager: (tap41e6c621-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/149)
Oct  2 08:30:54 np0005466030 nova_compute[230518]: 2025-10-02 12:30:54.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:54 np0005466030 kernel: tap41e6c621-c0: entered promiscuous mode
Oct  2 08:30:54 np0005466030 nova_compute[230518]: 2025-10-02 12:30:54.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.775 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41e6c621-c0, col_values=(('external_ids', {'iface-id': 'e11ef0cd-0cee-4792-8183-b5f339fd3fb3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:30:54 np0005466030 nova_compute[230518]: 2025-10-02 12:30:54.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:54 np0005466030 ovn_controller[129257]: 2025-10-02T12:30:54Z|00319|binding|INFO|Releasing lport e11ef0cd-0cee-4792-8183-b5f339fd3fb3 from this chassis (sb_readonly=0)
Oct  2 08:30:54 np0005466030 nova_compute[230518]: 2025-10-02 12:30:54.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.802 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41e6c621-c2f2-4fb3-a93d-8eda22ec0438.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41e6c621-c2f2-4fb3-a93d-8eda22ec0438.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.803 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0fae875f-e15f-44c2-ad05-9f936677538a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.804 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-41e6c621-c2f2-4fb3-a93d-8eda22ec0438
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/41e6c621-c2f2-4fb3-a93d-8eda22ec0438.pid.haproxy
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 41e6c621-c2f2-4fb3-a93d-8eda22ec0438
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:30:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.804 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438', 'env', 'PROCESS_TAG=haproxy-41e6c621-c2f2-4fb3-a93d-8eda22ec0438', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41e6c621-c2f2-4fb3-a93d-8eda22ec0438.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:30:55 np0005466030 podman[261120]: 2025-10-02 12:30:55.206741613 +0000 UTC m=+0.056932913 container create 31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.261 2 DEBUG nova.compute.manager [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received event network-vif-plugged-4a337dd1-6b9e-4a79-893f-de7400180a58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:55 np0005466030 systemd[1]: Started libpod-conmon-31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633.scope.
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.263 2 DEBUG oslo_concurrency.lockutils [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.264 2 DEBUG oslo_concurrency.lockutils [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.266 2 DEBUG oslo_concurrency.lockutils [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.267 2 DEBUG nova.compute.manager [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Processing event network-vif-plugged-4a337dd1-6b9e-4a79-893f-de7400180a58 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.267 2 DEBUG nova.compute.manager [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received event network-vif-plugged-4a337dd1-6b9e-4a79-893f-de7400180a58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.268 2 DEBUG oslo_concurrency.lockutils [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.269 2 DEBUG oslo_concurrency.lockutils [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.269 2 DEBUG oslo_concurrency.lockutils [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.270 2 DEBUG nova.compute.manager [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] No waiting events found dispatching network-vif-plugged-4a337dd1-6b9e-4a79-893f-de7400180a58 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:30:55 np0005466030 podman[261120]: 2025-10-02 12:30:55.180802456 +0000 UTC m=+0.030993746 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.271 2 WARNING nova.compute.manager [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received unexpected event network-vif-plugged-4a337dd1-6b9e-4a79-893f-de7400180a58 for instance with vm_state building and task_state spawning.
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.276 2 DEBUG nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.277 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408255.275059, 86aa4a20-96fe-4862-a0c5-04ad92e40f1b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.278 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] VM Started (Lifecycle Event)
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.284 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.290 2 INFO nova.virt.libvirt.driver [-] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Instance spawned successfully.
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.290 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:30:55 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.297 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.299 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:30:55 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d688486329266ad986868fb626471e07a89a9808e3f1f1d1725bea2902a120a5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.307 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.307 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.307 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.308 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.308 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.308 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.314 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.315 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408255.2759573, 86aa4a20-96fe-4862-a0c5-04ad92e40f1b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.315 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] VM Paused (Lifecycle Event)
Oct  2 08:30:55 np0005466030 podman[261120]: 2025-10-02 12:30:55.324043394 +0000 UTC m=+0.174234714 container init 31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:30:55 np0005466030 podman[261120]: 2025-10-02 12:30:55.336364021 +0000 UTC m=+0.186555311 container start 31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.340 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.344 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408255.2823753, 86aa4a20-96fe-4862-a0c5-04ad92e40f1b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.345 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] VM Resumed (Lifecycle Event)
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.363 2 INFO nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Took 9.38 seconds to spawn the instance on the hypervisor.
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.364 2 DEBUG nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:30:55 np0005466030 neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438[261135]: [NOTICE]   (261139) : New worker (261141) forked
Oct  2 08:30:55 np0005466030 neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438[261135]: [NOTICE]   (261139) : Loading success.
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.365 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.374 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.411 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.435 2 INFO nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Took 11.08 seconds to build instance.
Oct  2 08:30:55 np0005466030 nova_compute[230518]: 2025-10-02 12:30:55.451 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:30:56 np0005466030 nova_compute[230518]: 2025-10-02 12:30:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:30:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:56.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:56.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:56 np0005466030 podman[261151]: 2025-10-02 12:30:56.827588574 +0000 UTC m=+0.077281332 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:30:56 np0005466030 podman[261150]: 2025-10-02 12:30:56.840635214 +0000 UTC m=+0.086269424 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:30:57 np0005466030 nova_compute[230518]: 2025-10-02 12:30:57.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:30:57 np0005466030 nova_compute[230518]: 2025-10-02 12:30:57.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:30:58 np0005466030 nova_compute[230518]: 2025-10-02 12:30:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:30:58 np0005466030 nova_compute[230518]: 2025-10-02 12:30:58.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:30:58 np0005466030 nova_compute[230518]: 2025-10-02 12:30:58.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 08:30:58 np0005466030 nova_compute[230518]: 2025-10-02 12:30:58.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:30:58 np0005466030 NetworkManager[44960]: <info>  [1759408258.2870] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/150)
Oct  2 08:30:58 np0005466030 NetworkManager[44960]: <info>  [1759408258.2885] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Oct  2 08:30:58 np0005466030 nova_compute[230518]: 2025-10-02 12:30:58.328 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:30:58 np0005466030 nova_compute[230518]: 2025-10-02 12:30:58.329 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:30:58 np0005466030 nova_compute[230518]: 2025-10-02 12:30:58.329 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  2 08:30:58 np0005466030 nova_compute[230518]: 2025-10-02 12:30:58.329 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 86aa4a20-96fe-4862-a0c5-04ad92e40f1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:30:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:58.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:30:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:30:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:58.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:30:58 np0005466030 nova_compute[230518]: 2025-10-02 12:30:58.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:30:58 np0005466030 ovn_controller[129257]: 2025-10-02T12:30:58Z|00320|binding|INFO|Releasing lport e11ef0cd-0cee-4792-8183-b5f339fd3fb3 from this chassis (sb_readonly=0)
Oct  2 08:30:58 np0005466030 nova_compute[230518]: 2025-10-02 12:30:58.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:30:58 np0005466030 nova_compute[230518]: 2025-10-02 12:30:58.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:30:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:30:59 np0005466030 ovn_controller[129257]: 2025-10-02T12:30:59Z|00321|binding|INFO|Releasing lport e11ef0cd-0cee-4792-8183-b5f339fd3fb3 from this chassis (sb_readonly=0)
Oct  2 08:30:59 np0005466030 nova_compute[230518]: 2025-10-02 12:30:59.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:30:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:30:59.293 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:30:59 np0005466030 nova_compute[230518]: 2025-10-02 12:30:59.770 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Updating instance_info_cache with network_info: [{"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:30:59 np0005466030 nova_compute[230518]: 2025-10-02 12:30:59.798 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:30:59 np0005466030 nova_compute[230518]: 2025-10-02 12:30:59.799 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  2 08:30:59 np0005466030 nova_compute[230518]: 2025-10-02 12:30:59.822 2 DEBUG nova.compute.manager [req-a5148dee-5198-47ce-af58-fbdd0274a728 req-0d0bcedb-b864-40c5-adb6-7739f47814c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received event network-changed-4a337dd1-6b9e-4a79-893f-de7400180a58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:30:59 np0005466030 nova_compute[230518]: 2025-10-02 12:30:59.822 2 DEBUG nova.compute.manager [req-a5148dee-5198-47ce-af58-fbdd0274a728 req-0d0bcedb-b864-40c5-adb6-7739f47814c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Refreshing instance network info cache due to event network-changed-4a337dd1-6b9e-4a79-893f-de7400180a58. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:30:59 np0005466030 nova_compute[230518]: 2025-10-02 12:30:59.823 2 DEBUG oslo_concurrency.lockutils [req-a5148dee-5198-47ce-af58-fbdd0274a728 req-0d0bcedb-b864-40c5-adb6-7739f47814c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:30:59 np0005466030 nova_compute[230518]: 2025-10-02 12:30:59.823 2 DEBUG oslo_concurrency.lockutils [req-a5148dee-5198-47ce-af58-fbdd0274a728 req-0d0bcedb-b864-40c5-adb6-7739f47814c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:30:59 np0005466030 nova_compute[230518]: 2025-10-02 12:30:59.823 2 DEBUG nova.network.neutron [req-a5148dee-5198-47ce-af58-fbdd0274a728 req-0d0bcedb-b864-40c5-adb6-7739f47814c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Refreshing network info cache for port 4a337dd1-6b9e-4a79-893f-de7400180a58 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:31:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:00.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:00.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:02.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:02.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:02 np0005466030 nova_compute[230518]: 2025-10-02 12:31:02.472 2 DEBUG nova.network.neutron [req-a5148dee-5198-47ce-af58-fbdd0274a728 req-0d0bcedb-b864-40c5-adb6-7739f47814c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Updated VIF entry in instance network info cache for port 4a337dd1-6b9e-4a79-893f-de7400180a58. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:31:02 np0005466030 nova_compute[230518]: 2025-10-02 12:31:02.472 2 DEBUG nova.network.neutron [req-a5148dee-5198-47ce-af58-fbdd0274a728 req-0d0bcedb-b864-40c5-adb6-7739f47814c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Updating instance_info_cache with network_info: [{"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:31:02 np0005466030 nova_compute[230518]: 2025-10-02 12:31:02.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:31:02 np0005466030 nova_compute[230518]: 2025-10-02 12:31:02.753 2 DEBUG oslo_concurrency.lockutils [req-a5148dee-5198-47ce-af58-fbdd0274a728 req-0d0bcedb-b864-40c5-adb6-7739f47814c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:31:02 np0005466030 nova_compute[230518]: 2025-10-02 12:31:02.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:31:02 np0005466030 nova_compute[230518]: 2025-10-02 12:31:02.960 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408247.9589188, 60c7cce3-3461-4cad-a135-46f35e607214 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:31:02 np0005466030 nova_compute[230518]: 2025-10-02 12:31:02.960 2 INFO nova.compute.manager [-] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] VM Stopped (Lifecycle Event)
Oct  2 08:31:02 np0005466030 nova_compute[230518]: 2025-10-02 12:31:02.998 2 DEBUG nova.compute.manager [None req-fe655adc-bf9c-45e6-b590-6bdcbadf03f9 - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:31:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:04.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:31:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:04.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:31:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:31:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:06.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:31:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:31:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:06.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:31:07 np0005466030 nova_compute[230518]: 2025-10-02 12:31:07.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:07 np0005466030 nova_compute[230518]: 2025-10-02 12:31:07.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:31:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:08.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:31:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:08.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:10.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:31:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:10.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:31:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:12.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:12.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:12 np0005466030 nova_compute[230518]: 2025-10-02 12:31:12.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:12 np0005466030 nova_compute[230518]: 2025-10-02 12:31:12.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:31:13Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e1:a3:65 10.100.0.10
Oct  2 08:31:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:31:13Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:a3:65 10.100.0.10
Oct  2 08:31:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:14.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:14.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:15 np0005466030 nova_compute[230518]: 2025-10-02 12:31:15.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:15 np0005466030 nova_compute[230518]: 2025-10-02 12:31:15.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:31:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:31:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:16.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:31:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:16.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:16 np0005466030 podman[261194]: 2025-10-02 12:31:16.81794803 +0000 UTC m=+0.058433790 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 08:31:16 np0005466030 podman[261193]: 2025-10-02 12:31:16.845382063 +0000 UTC m=+0.093553194 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 08:31:17 np0005466030 nova_compute[230518]: 2025-10-02 12:31:17.460 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:31:17 np0005466030 nova_compute[230518]: 2025-10-02 12:31:17.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:17 np0005466030 nova_compute[230518]: 2025-10-02 12:31:17.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:31:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:18.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:31:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:18.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:20.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:31:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:20.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:31:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:31:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:22.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:31:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:22.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:22 np0005466030 nova_compute[230518]: 2025-10-02 12:31:22.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:22 np0005466030 nova_compute[230518]: 2025-10-02 12:31:22.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:31:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:24.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:31:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:24.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:25 np0005466030 nova_compute[230518]: 2025-10-02 12:31:25.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:25.928 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:25.928 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:25.929 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:26.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:26.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:27 np0005466030 nova_compute[230518]: 2025-10-02 12:31:27.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:27 np0005466030 podman[261239]: 2025-10-02 12:31:27.813044111 +0000 UTC m=+0.058693549 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:31:27 np0005466030 podman[261240]: 2025-10-02 12:31:27.841757894 +0000 UTC m=+0.075911740 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 08:31:27 np0005466030 nova_compute[230518]: 2025-10-02 12:31:27.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:28.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:28.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:31:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:30.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:31:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:30.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:32.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:32.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:32 np0005466030 nova_compute[230518]: 2025-10-02 12:31:32.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:32 np0005466030 nova_compute[230518]: 2025-10-02 12:31:32.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.487 2 DEBUG oslo_concurrency.lockutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquiring lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.487 2 DEBUG oslo_concurrency.lockutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.487 2 DEBUG oslo_concurrency.lockutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquiring lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.488 2 DEBUG oslo_concurrency.lockutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.488 2 DEBUG oslo_concurrency.lockutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.489 2 INFO nova.compute.manager [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Terminating instance#033[00m
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.490 2 DEBUG nova.compute.manager [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:31:33 np0005466030 kernel: tap4a337dd1-6b (unregistering): left promiscuous mode
Oct  2 08:31:33 np0005466030 NetworkManager[44960]: <info>  [1759408293.5566] device (tap4a337dd1-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:33 np0005466030 ovn_controller[129257]: 2025-10-02T12:31:33Z|00322|binding|INFO|Releasing lport 4a337dd1-6b9e-4a79-893f-de7400180a58 from this chassis (sb_readonly=0)
Oct  2 08:31:33 np0005466030 ovn_controller[129257]: 2025-10-02T12:31:33Z|00323|binding|INFO|Setting lport 4a337dd1-6b9e-4a79-893f-de7400180a58 down in Southbound
Oct  2 08:31:33 np0005466030 ovn_controller[129257]: 2025-10-02T12:31:33Z|00324|binding|INFO|Removing iface tap4a337dd1-6b ovn-installed in OVS
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.583 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:a3:65 10.100.0.10'], port_security=['fa:16:3e:e1:a3:65 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '86aa4a20-96fe-4862-a0c5-04ad92e40f1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41e6c621-c2f2-4fb3-a93d-8eda22ec0438', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a70008e0fc32481f8ed89060220b28d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b5e4d5c-c242-45f9-85c8-d58de980e569', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4850e48d-a493-4bb4-bb29-020fdb04c9bf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=4a337dd1-6b9e-4a79-893f-de7400180a58) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:31:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.584 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 4a337dd1-6b9e-4a79-893f-de7400180a58 in datapath 41e6c621-c2f2-4fb3-a93d-8eda22ec0438 unbound from our chassis#033[00m
Oct  2 08:31:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.585 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41e6c621-c2f2-4fb3-a93d-8eda22ec0438, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:31:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.586 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6eae9624-45d2-4bc5-afb9-f424f8984baa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.587 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438 namespace which is not needed anymore#033[00m
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:33 np0005466030 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Oct  2 08:31:33 np0005466030 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000004a.scope: Consumed 14.528s CPU time.
Oct  2 08:31:33 np0005466030 systemd-machined[188247]: Machine qemu-37-instance-0000004a terminated.
Oct  2 08:31:33 np0005466030 neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438[261135]: [NOTICE]   (261139) : haproxy version is 2.8.14-c23fe91
Oct  2 08:31:33 np0005466030 neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438[261135]: [NOTICE]   (261139) : path to executable is /usr/sbin/haproxy
Oct  2 08:31:33 np0005466030 neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438[261135]: [WARNING]  (261139) : Exiting Master process...
Oct  2 08:31:33 np0005466030 neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438[261135]: [ALERT]    (261139) : Current worker (261141) exited with code 143 (Terminated)
Oct  2 08:31:33 np0005466030 neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438[261135]: [WARNING]  (261139) : All workers exited. Exiting... (0)
Oct  2 08:31:33 np0005466030 systemd[1]: libpod-31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633.scope: Deactivated successfully.
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.722 2 INFO nova.virt.libvirt.driver [-] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Instance destroyed successfully.#033[00m
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.723 2 DEBUG nova.objects.instance [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lazy-loading 'resources' on Instance uuid 86aa4a20-96fe-4862-a0c5-04ad92e40f1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:31:33 np0005466030 podman[261303]: 2025-10-02 12:31:33.728227297 +0000 UTC m=+0.053521455 container died 31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:31:33 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633-userdata-shm.mount: Deactivated successfully.
Oct  2 08:31:33 np0005466030 systemd[1]: var-lib-containers-storage-overlay-d688486329266ad986868fb626471e07a89a9808e3f1f1d1725bea2902a120a5-merged.mount: Deactivated successfully.
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.760 2 DEBUG nova.virt.libvirt.vif [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-606434201',display_name='tempest-ServersTestJSON-server-606434201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-606434201',id=74,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEOSVA0UqkFnW7YuNQRsY27cJ9a2M1giyjrSsiXcPBlr5myx9iDFttDeAnJV4hFOON30Ktzf3cWWwY2KcF8NnunVbyoieS3+bfZniZlgOmGbNNyoaXEKseU0cJsJiwxrpQ==',key_name='tempest-keypair-2131917532',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a70008e0fc32481f8ed89060220b28d7',ramdisk_id='',reservation_id='r-0450lgko',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-128626597',owner_user_name='tempest-ServersTestJSON-128626597-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eed4fdf8b49f41bfb982bc858fa76bef',uuid=86aa4a20-96fe-4862-a0c5-04ad92e40f1b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.761 2 DEBUG nova.network.os_vif_util [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Converting VIF {"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.762 2 DEBUG nova.network.os_vif_util [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e1:a3:65,bridge_name='br-int',has_traffic_filtering=True,id=4a337dd1-6b9e-4a79-893f-de7400180a58,network=Network(41e6c621-c2f2-4fb3-a93d-8eda22ec0438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a337dd1-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.762 2 DEBUG os_vif [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:a3:65,bridge_name='br-int',has_traffic_filtering=True,id=4a337dd1-6b9e-4a79-893f-de7400180a58,network=Network(41e6c621-c2f2-4fb3-a93d-8eda22ec0438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a337dd1-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:31:33 np0005466030 podman[261303]: 2025-10-02 12:31:33.763801987 +0000 UTC m=+0.089096145 container cleanup 31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.764 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a337dd1-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.769 2 INFO os_vif [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:a3:65,bridge_name='br-int',has_traffic_filtering=True,id=4a337dd1-6b9e-4a79-893f-de7400180a58,network=Network(41e6c621-c2f2-4fb3-a93d-8eda22ec0438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a337dd1-6b')#033[00m
Oct  2 08:31:33 np0005466030 systemd[1]: libpod-conmon-31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633.scope: Deactivated successfully.
Oct  2 08:31:33 np0005466030 podman[261344]: 2025-10-02 12:31:33.824444715 +0000 UTC m=+0.039699770 container remove 31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:31:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.830 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[81bc33c0-5835-4b02-be19-ab3b2a10af05]: (4, ('Thu Oct  2 12:31:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438 (31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633)\n31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633\nThu Oct  2 12:31:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438 (31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633)\n31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.832 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[145cce40-0bad-4bd1-adc3-2c44a50770c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.833 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41e6c621-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:33 np0005466030 kernel: tap41e6c621-c0: left promiscuous mode
Oct  2 08:31:33 np0005466030 nova_compute[230518]: 2025-10-02 12:31:33.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.892 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3ede455e-b1bf-4f74-b51a-55aee4e8ce14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.924 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[70545cc7-6cf7-4322-8c09-50857d5bd903]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.926 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0bf435-61e4-4a87-be0f-948d7de14aa2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.944 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bea2b7e2-30e1-40c4-bd46-59f0a9554997]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611796, 'reachable_time': 19517, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261376, 'error': None, 'target': 'ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.946 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:31:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.946 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1c296b66-3eef-4d80-82d8-5ef0c4b60fb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:33 np0005466030 systemd[1]: run-netns-ovnmeta\x2d41e6c621\x2dc2f2\x2d4fb3\x2da93d\x2d8eda22ec0438.mount: Deactivated successfully.
Oct  2 08:31:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:34.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:34.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:35 np0005466030 nova_compute[230518]: 2025-10-02 12:31:35.071 2 DEBUG nova.compute.manager [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received event network-vif-unplugged-4a337dd1-6b9e-4a79-893f-de7400180a58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:35 np0005466030 nova_compute[230518]: 2025-10-02 12:31:35.071 2 DEBUG oslo_concurrency.lockutils [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:35 np0005466030 nova_compute[230518]: 2025-10-02 12:31:35.071 2 DEBUG oslo_concurrency.lockutils [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:35 np0005466030 nova_compute[230518]: 2025-10-02 12:31:35.071 2 DEBUG oslo_concurrency.lockutils [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:35 np0005466030 nova_compute[230518]: 2025-10-02 12:31:35.071 2 DEBUG nova.compute.manager [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] No waiting events found dispatching network-vif-unplugged-4a337dd1-6b9e-4a79-893f-de7400180a58 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:35 np0005466030 nova_compute[230518]: 2025-10-02 12:31:35.072 2 DEBUG nova.compute.manager [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received event network-vif-unplugged-4a337dd1-6b9e-4a79-893f-de7400180a58 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:31:35 np0005466030 nova_compute[230518]: 2025-10-02 12:31:35.072 2 DEBUG nova.compute.manager [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received event network-vif-plugged-4a337dd1-6b9e-4a79-893f-de7400180a58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:35 np0005466030 nova_compute[230518]: 2025-10-02 12:31:35.072 2 DEBUG oslo_concurrency.lockutils [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:35 np0005466030 nova_compute[230518]: 2025-10-02 12:31:35.072 2 DEBUG oslo_concurrency.lockutils [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:35 np0005466030 nova_compute[230518]: 2025-10-02 12:31:35.072 2 DEBUG oslo_concurrency.lockutils [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:35 np0005466030 nova_compute[230518]: 2025-10-02 12:31:35.072 2 DEBUG nova.compute.manager [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] No waiting events found dispatching network-vif-plugged-4a337dd1-6b9e-4a79-893f-de7400180a58 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:35 np0005466030 nova_compute[230518]: 2025-10-02 12:31:35.073 2 WARNING nova.compute.manager [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received unexpected event network-vif-plugged-4a337dd1-6b9e-4a79-893f-de7400180a58 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:31:35 np0005466030 nova_compute[230518]: 2025-10-02 12:31:35.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:36.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:31:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:36.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:31:36 np0005466030 nova_compute[230518]: 2025-10-02 12:31:36.718 2 INFO nova.virt.libvirt.driver [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Deleting instance files /var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b_del
Oct  2 08:31:36 np0005466030 nova_compute[230518]: 2025-10-02 12:31:36.718 2 INFO nova.virt.libvirt.driver [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Deletion of /var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b_del complete
Oct  2 08:31:36 np0005466030 nova_compute[230518]: 2025-10-02 12:31:36.814 2 INFO nova.compute.manager [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Took 3.32 seconds to destroy the instance on the hypervisor.
Oct  2 08:31:36 np0005466030 nova_compute[230518]: 2025-10-02 12:31:36.814 2 DEBUG oslo.service.loopingcall [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:31:36 np0005466030 nova_compute[230518]: 2025-10-02 12:31:36.814 2 DEBUG nova.compute.manager [-] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:31:36 np0005466030 nova_compute[230518]: 2025-10-02 12:31:36.814 2 DEBUG nova.network.neutron [-] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:31:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e240 e240: 3 total, 3 up, 3 in
Oct  2 08:31:37 np0005466030 nova_compute[230518]: 2025-10-02 12:31:37.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:31:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:38.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:38.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:38 np0005466030 nova_compute[230518]: 2025-10-02 12:31:38.630 2 DEBUG nova.network.neutron [-] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:31:38 np0005466030 nova_compute[230518]: 2025-10-02 12:31:38.739 2 INFO nova.compute.manager [-] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Took 1.92 seconds to deallocate network for instance.
Oct  2 08:31:38 np0005466030 nova_compute[230518]: 2025-10-02 12:31:38.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:31:38 np0005466030 nova_compute[230518]: 2025-10-02 12:31:38.804 2 DEBUG nova.compute.manager [req-e9748a35-a894-4f1d-a3ce-5cb66c4972a6 req-5f957771-4f25-463e-b5bb-5b9b42303e6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received event network-vif-deleted-4a337dd1-6b9e-4a79-893f-de7400180a58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:31:38 np0005466030 nova_compute[230518]: 2025-10-02 12:31:38.847 2 DEBUG oslo_concurrency.lockutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:31:38 np0005466030 nova_compute[230518]: 2025-10-02 12:31:38.847 2 DEBUG oslo_concurrency.lockutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:31:38 np0005466030 nova_compute[230518]: 2025-10-02 12:31:38.924 2 DEBUG oslo_concurrency.processutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:31:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:31:39 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2973912813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:31:39 np0005466030 nova_compute[230518]: 2025-10-02 12:31:39.366 2 DEBUG oslo_concurrency.processutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:31:39 np0005466030 nova_compute[230518]: 2025-10-02 12:31:39.372 2 DEBUG nova.compute.provider_tree [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:31:39 np0005466030 nova_compute[230518]: 2025-10-02 12:31:39.425 2 DEBUG nova.scheduler.client.report [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:31:39 np0005466030 nova_compute[230518]: 2025-10-02 12:31:39.508 2 DEBUG oslo_concurrency.lockutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:31:39 np0005466030 nova_compute[230518]: 2025-10-02 12:31:39.617 2 INFO nova.scheduler.client.report [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Deleted allocations for instance 86aa4a20-96fe-4862-a0c5-04ad92e40f1b
Oct  2 08:31:39 np0005466030 nova_compute[230518]: 2025-10-02 12:31:39.739 2 DEBUG oslo_concurrency.lockutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:31:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:40.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:40.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:42.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:42.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:42 np0005466030 nova_compute[230518]: 2025-10-02 12:31:42.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:31:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:43 np0005466030 nova_compute[230518]: 2025-10-02 12:31:43.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:31:44 np0005466030 nova_compute[230518]: 2025-10-02 12:31:44.002 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:31:44 np0005466030 nova_compute[230518]: 2025-10-02 12:31:44.002 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:31:44 np0005466030 nova_compute[230518]: 2025-10-02 12:31:44.019 2 DEBUG nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:31:44 np0005466030 nova_compute[230518]: 2025-10-02 12:31:44.141 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:31:44 np0005466030 nova_compute[230518]: 2025-10-02 12:31:44.141 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:31:44 np0005466030 nova_compute[230518]: 2025-10-02 12:31:44.148 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:31:44 np0005466030 nova_compute[230518]: 2025-10-02 12:31:44.148 2 INFO nova.compute.claims [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:31:44 np0005466030 nova_compute[230518]: 2025-10-02 12:31:44.252 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:31:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:31:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:44.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:31:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:44.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:31:44 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3767337922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:31:44 np0005466030 nova_compute[230518]: 2025-10-02 12:31:44.693 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:31:44 np0005466030 nova_compute[230518]: 2025-10-02 12:31:44.701 2 DEBUG nova.compute.provider_tree [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:31:44 np0005466030 nova_compute[230518]: 2025-10-02 12:31:44.735 2 DEBUG nova.scheduler.client.report [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:31:44 np0005466030 nova_compute[230518]: 2025-10-02 12:31:44.759 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:31:44 np0005466030 nova_compute[230518]: 2025-10-02 12:31:44.760 2 DEBUG nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:31:44 np0005466030 nova_compute[230518]: 2025-10-02 12:31:44.817 2 DEBUG nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:31:44 np0005466030 nova_compute[230518]: 2025-10-02 12:31:44.817 2 DEBUG nova.network.neutron [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:31:44 np0005466030 nova_compute[230518]: 2025-10-02 12:31:44.844 2 INFO nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:31:44 np0005466030 nova_compute[230518]: 2025-10-02 12:31:44.873 2 DEBUG nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:31:45 np0005466030 nova_compute[230518]: 2025-10-02 12:31:45.038 2 DEBUG nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:31:45 np0005466030 nova_compute[230518]: 2025-10-02 12:31:45.040 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:31:45 np0005466030 nova_compute[230518]: 2025-10-02 12:31:45.040 2 INFO nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Creating image(s)
Oct  2 08:31:45 np0005466030 nova_compute[230518]: 2025-10-02 12:31:45.072 2 DEBUG nova.storage.rbd_utils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 3e490470-5e33-4140-95c1-367805364c73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:31:45 np0005466030 ceph-mgr[81282]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct  2 08:31:45 np0005466030 nova_compute[230518]: 2025-10-02 12:31:45.112 2 DEBUG nova.storage.rbd_utils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 3e490470-5e33-4140-95c1-367805364c73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:31:45 np0005466030 nova_compute[230518]: 2025-10-02 12:31:45.150 2 DEBUG nova.storage.rbd_utils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 3e490470-5e33-4140-95c1-367805364c73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:31:45 np0005466030 nova_compute[230518]: 2025-10-02 12:31:45.156 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:31:45 np0005466030 nova_compute[230518]: 2025-10-02 12:31:45.202 2 DEBUG nova.policy [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '71d69bc37f274fad8a0b06c0b96f2a64', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:31:45 np0005466030 nova_compute[230518]: 2025-10-02 12:31:45.248 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:31:45 np0005466030 nova_compute[230518]: 2025-10-02 12:31:45.249 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:31:45 np0005466030 nova_compute[230518]: 2025-10-02 12:31:45.249 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:31:45 np0005466030 nova_compute[230518]: 2025-10-02 12:31:45.250 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:31:45 np0005466030 nova_compute[230518]: 2025-10-02 12:31:45.282 2 DEBUG nova.storage.rbd_utils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 3e490470-5e33-4140-95c1-367805364c73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:31:45 np0005466030 nova_compute[230518]: 2025-10-02 12:31:45.287 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3e490470-5e33-4140-95c1-367805364c73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:31:45 np0005466030 nova_compute[230518]: 2025-10-02 12:31:45.938 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3e490470-5e33-4140-95c1-367805364c73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:31:46 np0005466030 nova_compute[230518]: 2025-10-02 12:31:46.049 2 DEBUG nova.storage.rbd_utils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] resizing rbd image 3e490470-5e33-4140-95c1-367805364c73_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:31:46 np0005466030 nova_compute[230518]: 2025-10-02 12:31:46.109 2 DEBUG nova.network.neutron [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Successfully created port: a3bd0009-d256-4937-bdad-606abfd076e0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:31:46 np0005466030 nova_compute[230518]: 2025-10-02 12:31:46.277 2 DEBUG nova.objects.instance [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'migration_context' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:31:46 np0005466030 nova_compute[230518]: 2025-10-02 12:31:46.293 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:31:46 np0005466030 nova_compute[230518]: 2025-10-02 12:31:46.293 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Ensure instance console log exists: /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:31:46 np0005466030 nova_compute[230518]: 2025-10-02 12:31:46.294 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:31:46 np0005466030 nova_compute[230518]: 2025-10-02 12:31:46.295 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:31:46 np0005466030 nova_compute[230518]: 2025-10-02 12:31:46.295 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:31:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:46.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:46.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e241 e241: 3 total, 3 up, 3 in
Oct  2 08:31:47 np0005466030 nova_compute[230518]: 2025-10-02 12:31:47.000 2 DEBUG nova.network.neutron [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Successfully updated port: a3bd0009-d256-4937-bdad-606abfd076e0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 08:31:47 np0005466030 nova_compute[230518]: 2025-10-02 12:31:47.019 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:31:47 np0005466030 nova_compute[230518]: 2025-10-02 12:31:47.019 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:31:47 np0005466030 nova_compute[230518]: 2025-10-02 12:31:47.020 2 DEBUG nova.network.neutron [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:31:47 np0005466030 nova_compute[230518]: 2025-10-02 12:31:47.082 2 DEBUG nova.compute.manager [req-4ffd7696-a2b1-42ba-aee4-93aec29f8fad req-f9497efa-2c74-418a-ba23-0c9931615c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-changed-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:31:47 np0005466030 nova_compute[230518]: 2025-10-02 12:31:47.082 2 DEBUG nova.compute.manager [req-4ffd7696-a2b1-42ba-aee4-93aec29f8fad req-f9497efa-2c74-418a-ba23-0c9931615c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Refreshing instance network info cache due to event network-changed-a3bd0009-d256-4937-bdad-606abfd076e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:31:47 np0005466030 nova_compute[230518]: 2025-10-02 12:31:47.082 2 DEBUG oslo_concurrency.lockutils [req-4ffd7696-a2b1-42ba-aee4-93aec29f8fad req-f9497efa-2c74-418a-ba23-0c9931615c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:31:47 np0005466030 podman[261614]: 2025-10-02 12:31:47.106272093 +0000 UTC m=+0.074239557 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:31:47 np0005466030 nova_compute[230518]: 2025-10-02 12:31:47.159 2 DEBUG nova.network.neutron [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:31:47 np0005466030 podman[261613]: 2025-10-02 12:31:47.160741346 +0000 UTC m=+0.137479146 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:31:47 np0005466030 nova_compute[230518]: 2025-10-02 12:31:47.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:47 np0005466030 nova_compute[230518]: 2025-10-02 12:31:47.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:47 np0005466030 nova_compute[230518]: 2025-10-02 12:31:47.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:48 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.356 2 DEBUG nova.network.neutron [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.379 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.379 2 DEBUG nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance network_info: |[{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.380 2 DEBUG oslo_concurrency.lockutils [req-4ffd7696-a2b1-42ba-aee4-93aec29f8fad req-f9497efa-2c74-418a-ba23-0c9931615c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.380 2 DEBUG nova.network.neutron [req-4ffd7696-a2b1-42ba-aee4-93aec29f8fad req-f9497efa-2c74-418a-ba23-0c9931615c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Refreshing network info cache for port a3bd0009-d256-4937-bdad-606abfd076e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.383 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Start _get_guest_xml network_info=[{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.392 2 WARNING nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.407 2 DEBUG nova.virt.libvirt.host [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.409 2 DEBUG nova.virt.libvirt.host [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.419 2 DEBUG nova.virt.libvirt.host [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.420 2 DEBUG nova.virt.libvirt.host [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.422 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:31:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.423 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.424 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.424 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.425 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.425 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.425 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.425 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:31:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.002999974s ======
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.426 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:31:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:48.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002999974s
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.426 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.426 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.427 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.431 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.471 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.472 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.498 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.499 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.499 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.499 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.499 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:48.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.721 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408293.7195008, 86aa4a20-96fe-4862-a0c5-04ad92e40f1b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.722 2 INFO nova.compute.manager [-] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.753 2 DEBUG nova.compute.manager [None req-a49466f4-aa5d-4b9c-871e-2cf5c6fdf966 - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:31:48 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4138466693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.881 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.926 2 DEBUG nova.storage.rbd_utils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 3e490470-5e33-4140-95c1-367805364c73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:31:48 np0005466030 nova_compute[230518]: 2025-10-02 12:31:48.932 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:31:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:31:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:31:49 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2173979604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.113 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.317 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.319 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4555MB free_disk=20.910640716552734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.319 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.320 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.399 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3e490470-5e33-4140-95c1-367805364c73 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.400 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.400 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:31:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:31:49 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3674890317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.434 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.437 2 DEBUG nova.virt.libvirt.vif [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:31:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.437 2 DEBUG nova.network.os_vif_util [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.439 2 DEBUG nova.network.os_vif_util [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.441 2 DEBUG nova.objects.instance [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.443 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.501 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:31:49 np0005466030 nova_compute[230518]:  <uuid>3e490470-5e33-4140-95c1-367805364c73</uuid>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:  <name>instance-0000004d</name>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerActionsTestJSON-server-1270476772</nova:name>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:31:48</nova:creationTime>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:31:49 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:        <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:        <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:        <nova:port uuid="a3bd0009-d256-4937-bdad-606abfd076e0">
Oct  2 08:31:49 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <entry name="serial">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <entry name="uuid">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk">
Oct  2 08:31:49 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:31:49 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk.config">
Oct  2 08:31:49 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:31:49 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:7b:e8:97"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <target dev="tapa3bd0009-d2"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/console.log" append="off"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:31:49 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:31:49 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:31:49 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:31:49 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.503 2 DEBUG nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Preparing to wait for external event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.504 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.505 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.505 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.507 2 DEBUG nova.virt.libvirt.vif [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:31:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.507 2 DEBUG nova.network.os_vif_util [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.509 2 DEBUG nova.network.os_vif_util [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.509 2 DEBUG os_vif [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.512 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3bd0009-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.521 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3bd0009-d2, col_values=(('external_ids', {'iface-id': 'a3bd0009-d256-4937-bdad-606abfd076e0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:e8:97', 'vm-uuid': '3e490470-5e33-4140-95c1-367805364c73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:49 np0005466030 NetworkManager[44960]: <info>  [1759408309.5254] manager: (tapa3bd0009-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.535 2 INFO os_vif [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.604 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.606 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.606 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No VIF found with MAC fa:16:3e:7b:e8:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.608 2 INFO nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Using config drive#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.640 2 DEBUG nova.storage.rbd_utils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 3e490470-5e33-4140-95c1-367805364c73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:31:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:31:49 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/443590527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.887 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.894 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.908 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.930 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.931 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.981 2 DEBUG nova.network.neutron [req-4ffd7696-a2b1-42ba-aee4-93aec29f8fad req-f9497efa-2c74-418a-ba23-0c9931615c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updated VIF entry in instance network info cache for port a3bd0009-d256-4937-bdad-606abfd076e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:31:49 np0005466030 nova_compute[230518]: 2025-10-02 12:31:49.982 2 DEBUG nova.network.neutron [req-4ffd7696-a2b1-42ba-aee4-93aec29f8fad req-f9497efa-2c74-418a-ba23-0c9931615c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:31:50 np0005466030 nova_compute[230518]: 2025-10-02 12:31:50.003 2 DEBUG oslo_concurrency.lockutils [req-4ffd7696-a2b1-42ba-aee4-93aec29f8fad req-f9497efa-2c74-418a-ba23-0c9931615c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:31:50 np0005466030 nova_compute[230518]: 2025-10-02 12:31:50.104 2 INFO nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Creating config drive at /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/disk.config#033[00m
Oct  2 08:31:50 np0005466030 nova_compute[230518]: 2025-10-02 12:31:50.113 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmparxrg3gf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:50 np0005466030 nova_compute[230518]: 2025-10-02 12:31:50.274 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmparxrg3gf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:31:50 np0005466030 nova_compute[230518]: 2025-10-02 12:31:50.302 2 DEBUG nova.storage.rbd_utils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 3e490470-5e33-4140-95c1-367805364c73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:31:50 np0005466030 nova_compute[230518]: 2025-10-02 12:31:50.306 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/disk.config 3e490470-5e33-4140-95c1-367805364c73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:31:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:31:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:50.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:31:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:50.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:50 np0005466030 nova_compute[230518]: 2025-10-02 12:31:50.512 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:50 np0005466030 nova_compute[230518]: 2025-10-02 12:31:50.512 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:50 np0005466030 nova_compute[230518]: 2025-10-02 12:31:50.512 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:50 np0005466030 nova_compute[230518]: 2025-10-02 12:31:50.513 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:31:50 np0005466030 nova_compute[230518]: 2025-10-02 12:31:50.895 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/disk.config 3e490470-5e33-4140-95c1-367805364c73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:31:50 np0005466030 nova_compute[230518]: 2025-10-02 12:31:50.896 2 INFO nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Deleting local config drive /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/disk.config because it was imported into RBD.#033[00m
Oct  2 08:31:50 np0005466030 kernel: tapa3bd0009-d2: entered promiscuous mode
Oct  2 08:31:50 np0005466030 nova_compute[230518]: 2025-10-02 12:31:50.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:50 np0005466030 NetworkManager[44960]: <info>  [1759408310.9654] manager: (tapa3bd0009-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/153)
Oct  2 08:31:50 np0005466030 ovn_controller[129257]: 2025-10-02T12:31:50Z|00325|binding|INFO|Claiming lport a3bd0009-d256-4937-bdad-606abfd076e0 for this chassis.
Oct  2 08:31:50 np0005466030 ovn_controller[129257]: 2025-10-02T12:31:50Z|00326|binding|INFO|a3bd0009-d256-4937-bdad-606abfd076e0: Claiming fa:16:3e:7b:e8:97 10.100.0.6
Oct  2 08:31:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:50.977 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:31:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:50.980 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff bound to our chassis#033[00m
Oct  2 08:31:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:50.983 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff#033[00m
Oct  2 08:31:50 np0005466030 systemd-udevd[261948]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:31:51 np0005466030 systemd-machined[188247]: New machine qemu-38-instance-0000004d.
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.004 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[45b2a838-0724-480f-8e22-23ccd894c997]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.006 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf011efa4-01 in ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.008 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf011efa4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.008 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d07694cb-d7d1-4bf2-adf4-35cb3255041c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.010 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6392a7f3-3aa4-49fe-b1eb-587bfc9fda39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:51 np0005466030 NetworkManager[44960]: <info>  [1759408311.0158] device (tapa3bd0009-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:31:51 np0005466030 NetworkManager[44960]: <info>  [1759408311.0170] device (tapa3bd0009-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:31:51 np0005466030 systemd[1]: Started Virtual Machine qemu-38-instance-0000004d.
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.027 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1fe872-9d3a-42f6-be3f-fc69ab797b3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:51 np0005466030 ovn_controller[129257]: 2025-10-02T12:31:51Z|00327|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 ovn-installed in OVS
Oct  2 08:31:51 np0005466030 ovn_controller[129257]: 2025-10-02T12:31:51Z|00328|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 up in Southbound
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.094 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[65dac60a-e3ef-4d30-86af-341512de82cb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.127 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7c45da37-bcb8-4cbc-8d93-360e39476ad8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.133 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a326a04e-de2d-4121-8b03-05b1feb49599]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:51 np0005466030 systemd-udevd[261953]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:31:51 np0005466030 NetworkManager[44960]: <info>  [1759408311.1342] manager: (tapf011efa4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/154)
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.167 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf38de8-f634-4d95-a693-bec2d0d16170]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.170 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[626d386e-6d5c-4d3c-a231-fa3298cb95d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:51 np0005466030 NetworkManager[44960]: <info>  [1759408311.1905] device (tapf011efa4-00): carrier: link connected
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.195 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa4d261-90f9-45e7-a95a-d87b17435167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.211 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4048fc72-9ad2-4f0b-b93e-ee193b358869]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617474, 'reachable_time': 40178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261982, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.227 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[13d74ed5-a9b8-49bb-8194-d8933907ef86]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:1a7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 617474, 'tstamp': 617474}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261983, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.244 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e43e7b7d-5fb6-4031-abf2-a1e84124bb30]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617474, 'reachable_time': 40178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261984, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.289 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[05381acf-99e5-40b2-826b-bd6eb4d69128]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.324 2 DEBUG nova.compute.manager [req-79b8fbdb-228a-4adc-83a5-e25e5e95637b req-83fead4b-de44-49aa-8c8e-491a924ea3f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.324 2 DEBUG oslo_concurrency.lockutils [req-79b8fbdb-228a-4adc-83a5-e25e5e95637b req-83fead4b-de44-49aa-8c8e-491a924ea3f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.324 2 DEBUG oslo_concurrency.lockutils [req-79b8fbdb-228a-4adc-83a5-e25e5e95637b req-83fead4b-de44-49aa-8c8e-491a924ea3f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.324 2 DEBUG oslo_concurrency.lockutils [req-79b8fbdb-228a-4adc-83a5-e25e5e95637b req-83fead4b-de44-49aa-8c8e-491a924ea3f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.325 2 DEBUG nova.compute.manager [req-79b8fbdb-228a-4adc-83a5-e25e5e95637b req-83fead4b-de44-49aa-8c8e-491a924ea3f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Processing event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.350 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[993eca3a-38d6-4961-9a0e-e90fc4607548]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.351 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.352 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.352 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:51 np0005466030 NetworkManager[44960]: <info>  [1759408311.3545] manager: (tapf011efa4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Oct  2 08:31:51 np0005466030 kernel: tapf011efa4-00: entered promiscuous mode
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.357 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:51 np0005466030 ovn_controller[129257]: 2025-10-02T12:31:51Z|00329|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.361 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.361 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad29b4f-c853-402a-9683-a2ce7573a863]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.362 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-f011efa4-0132-405c-bb45-09d0a9352eff
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID f011efa4-0132-405c-bb45-09d0a9352eff
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:31:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.363 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'env', 'PROCESS_TAG=haproxy-f011efa4-0132-405c-bb45-09d0a9352eff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f011efa4-0132-405c-bb45-09d0a9352eff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:51 np0005466030 podman[262057]: 2025-10-02 12:31:51.71851113 +0000 UTC m=+0.068587599 container create 3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 08:31:51 np0005466030 systemd[1]: Started libpod-conmon-3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a.scope.
Oct  2 08:31:51 np0005466030 podman[262057]: 2025-10-02 12:31:51.679953207 +0000 UTC m=+0.030029666 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:31:51 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:31:51 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/849311bb380bc31d9f4d64ffecfbd0206e5d4571354752fcb4fe618ec50db739/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:31:51 np0005466030 podman[262057]: 2025-10-02 12:31:51.834232901 +0000 UTC m=+0.184309390 container init 3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:31:51 np0005466030 podman[262057]: 2025-10-02 12:31:51.840049835 +0000 UTC m=+0.190126274 container start 3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.864 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408311.8636868, 3e490470-5e33-4140-95c1-367805364c73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.864 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Started (Lifecycle Event)#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.866 2 DEBUG nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:31:51 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262072]: [NOTICE]   (262076) : New worker (262078) forked
Oct  2 08:31:51 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262072]: [NOTICE]   (262076) : Loading success.
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.871 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.875 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance spawned successfully.#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.875 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.897 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.903 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.905 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.905 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.906 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.906 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.906 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.907 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.950 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.951 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408311.8661175, 3e490470-5e33-4140-95c1-367805364c73 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.951 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.978 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.984 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408311.8701344, 3e490470-5e33-4140-95c1-367805364c73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.984 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.995 2 INFO nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Took 6.96 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:31:51 np0005466030 nova_compute[230518]: 2025-10-02 12:31:51.995 2 DEBUG nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:52 np0005466030 nova_compute[230518]: 2025-10-02 12:31:52.005 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:31:52 np0005466030 nova_compute[230518]: 2025-10-02 12:31:52.008 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:31:52 np0005466030 nova_compute[230518]: 2025-10-02 12:31:52.043 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:31:52 np0005466030 nova_compute[230518]: 2025-10-02 12:31:52.079 2 INFO nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Took 8.00 seconds to build instance.#033[00m
Oct  2 08:31:52 np0005466030 nova_compute[230518]: 2025-10-02 12:31:52.094 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:31:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:52.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:31:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:52.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:52 np0005466030 nova_compute[230518]: 2025-10-02 12:31:52.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:53 np0005466030 nova_compute[230518]: 2025-10-02 12:31:53.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:54 np0005466030 nova_compute[230518]: 2025-10-02 12:31:54.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:54 np0005466030 nova_compute[230518]: 2025-10-02 12:31:54.211 2 DEBUG nova.compute.manager [req-1e206570-3eee-427a-b78a-8a197436003e req-9c3d6f9e-dc1f-456d-abb1-69348ddc196e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:54 np0005466030 nova_compute[230518]: 2025-10-02 12:31:54.212 2 DEBUG oslo_concurrency.lockutils [req-1e206570-3eee-427a-b78a-8a197436003e req-9c3d6f9e-dc1f-456d-abb1-69348ddc196e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:31:54 np0005466030 nova_compute[230518]: 2025-10-02 12:31:54.212 2 DEBUG oslo_concurrency.lockutils [req-1e206570-3eee-427a-b78a-8a197436003e req-9c3d6f9e-dc1f-456d-abb1-69348ddc196e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:31:54 np0005466030 nova_compute[230518]: 2025-10-02 12:31:54.213 2 DEBUG oslo_concurrency.lockutils [req-1e206570-3eee-427a-b78a-8a197436003e req-9c3d6f9e-dc1f-456d-abb1-69348ddc196e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:31:54 np0005466030 nova_compute[230518]: 2025-10-02 12:31:54.213 2 DEBUG nova.compute.manager [req-1e206570-3eee-427a-b78a-8a197436003e req-9c3d6f9e-dc1f-456d-abb1-69348ddc196e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:31:54 np0005466030 nova_compute[230518]: 2025-10-02 12:31:54.214 2 WARNING nova.compute.manager [req-1e206570-3eee-427a-b78a-8a197436003e req-9c3d6f9e-dc1f-456d-abb1-69348ddc196e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:31:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:54.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:54.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:54 np0005466030 nova_compute[230518]: 2025-10-02 12:31:54.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:55 np0005466030 NetworkManager[44960]: <info>  [1759408315.1765] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Oct  2 08:31:55 np0005466030 NetworkManager[44960]: <info>  [1759408315.1776] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Oct  2 08:31:55 np0005466030 nova_compute[230518]: 2025-10-02 12:31:55.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:55 np0005466030 nova_compute[230518]: 2025-10-02 12:31:55.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:55 np0005466030 ovn_controller[129257]: 2025-10-02T12:31:55Z|00330|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct  2 08:31:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 e242: 3 total, 3 up, 3 in
Oct  2 08:31:55 np0005466030 nova_compute[230518]: 2025-10-02 12:31:55.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:56 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:31:56 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:31:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:31:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:56.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:31:56 np0005466030 nova_compute[230518]: 2025-10-02 12:31:56.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:56.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:57 np0005466030 nova_compute[230518]: 2025-10-02 12:31:57.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:57.076 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:31:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:31:57.077 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:31:57 np0005466030 nova_compute[230518]: 2025-10-02 12:31:57.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:57 np0005466030 nova_compute[230518]: 2025-10-02 12:31:57.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:31:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:58.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:31:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:31:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:58.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:31:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:31:58 np0005466030 nova_compute[230518]: 2025-10-02 12:31:58.695 2 DEBUG nova.compute.manager [req-bd9ecba8-c247-4f62-9af0-63e6e566b0a9 req-24143d35-4447-452c-8199-566d0aa45396 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-changed-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:31:58 np0005466030 nova_compute[230518]: 2025-10-02 12:31:58.695 2 DEBUG nova.compute.manager [req-bd9ecba8-c247-4f62-9af0-63e6e566b0a9 req-24143d35-4447-452c-8199-566d0aa45396 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Refreshing instance network info cache due to event network-changed-a3bd0009-d256-4937-bdad-606abfd076e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:31:58 np0005466030 nova_compute[230518]: 2025-10-02 12:31:58.696 2 DEBUG oslo_concurrency.lockutils [req-bd9ecba8-c247-4f62-9af0-63e6e566b0a9 req-24143d35-4447-452c-8199-566d0aa45396 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:31:58 np0005466030 nova_compute[230518]: 2025-10-02 12:31:58.696 2 DEBUG oslo_concurrency.lockutils [req-bd9ecba8-c247-4f62-9af0-63e6e566b0a9 req-24143d35-4447-452c-8199-566d0aa45396 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:31:58 np0005466030 nova_compute[230518]: 2025-10-02 12:31:58.696 2 DEBUG nova.network.neutron [req-bd9ecba8-c247-4f62-9af0-63e6e566b0a9 req-24143d35-4447-452c-8199-566d0aa45396 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Refreshing network info cache for port a3bd0009-d256-4937-bdad-606abfd076e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:31:58 np0005466030 podman[262139]: 2025-10-02 12:31:58.82638087 +0000 UTC m=+0.066930827 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:31:58 np0005466030 podman[262138]: 2025-10-02 12:31:58.839079209 +0000 UTC m=+0.090568661 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:31:59 np0005466030 nova_compute[230518]: 2025-10-02 12:31:59.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:31:59 np0005466030 nova_compute[230518]: 2025-10-02 12:31:59.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:31:59 np0005466030 nova_compute[230518]: 2025-10-02 12:31:59.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:31:59 np0005466030 nova_compute[230518]: 2025-10-02 12:31:59.408 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:31:59 np0005466030 nova_compute[230518]: 2025-10-02 12:31:59.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:00.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:00.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:01 np0005466030 nova_compute[230518]: 2025-10-02 12:32:01.243 2 DEBUG nova.network.neutron [req-bd9ecba8-c247-4f62-9af0-63e6e566b0a9 req-24143d35-4447-452c-8199-566d0aa45396 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updated VIF entry in instance network info cache for port a3bd0009-d256-4937-bdad-606abfd076e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:32:01 np0005466030 nova_compute[230518]: 2025-10-02 12:32:01.244 2 DEBUG nova.network.neutron [req-bd9ecba8-c247-4f62-9af0-63e6e566b0a9 req-24143d35-4447-452c-8199-566d0aa45396 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:32:01 np0005466030 nova_compute[230518]: 2025-10-02 12:32:01.272 2 DEBUG oslo_concurrency.lockutils [req-bd9ecba8-c247-4f62-9af0-63e6e566b0a9 req-24143d35-4447-452c-8199-566d0aa45396 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:32:01 np0005466030 nova_compute[230518]: 2025-10-02 12:32:01.273 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:32:01 np0005466030 nova_compute[230518]: 2025-10-02 12:32:01.273 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:32:01 np0005466030 nova_compute[230518]: 2025-10-02 12:32:01.273 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:02.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:02.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:02 np0005466030 nova_compute[230518]: 2025-10-02 12:32:02.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:04.079 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 08:32:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:04.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 08:32:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:04.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:04 np0005466030 nova_compute[230518]: 2025-10-02 12:32:04.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:04 np0005466030 ovn_controller[129257]: 2025-10-02T12:32:04Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7b:e8:97 10.100.0.6
Oct  2 08:32:04 np0005466030 ovn_controller[129257]: 2025-10-02T12:32:04Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:e8:97 10.100.0.6
Oct  2 08:32:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:32:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/892827367' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:32:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:32:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/892827367' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:32:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:06.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:32:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:06.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:32:06 np0005466030 nova_compute[230518]: 2025-10-02 12:32:06.722 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:32:06 np0005466030 nova_compute[230518]: 2025-10-02 12:32:06.768 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:32:06 np0005466030 nova_compute[230518]: 2025-10-02 12:32:06.769 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:32:07 np0005466030 nova_compute[230518]: 2025-10-02 12:32:07.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:07 np0005466030 nova_compute[230518]: 2025-10-02 12:32:07.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:08.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:32:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:08.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:32:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:09 np0005466030 nova_compute[230518]: 2025-10-02 12:32:09.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:10.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:10.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:12.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:12.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:12 np0005466030 nova_compute[230518]: 2025-10-02 12:32:12.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:13 np0005466030 nova_compute[230518]: 2025-10-02 12:32:13.040 2 DEBUG oslo_concurrency.lockutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:13 np0005466030 nova_compute[230518]: 2025-10-02 12:32:13.041 2 DEBUG oslo_concurrency.lockutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:13 np0005466030 nova_compute[230518]: 2025-10-02 12:32:13.042 2 INFO nova.compute.manager [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Rebooting instance#033[00m
Oct  2 08:32:13 np0005466030 nova_compute[230518]: 2025-10-02 12:32:13.058 2 DEBUG oslo_concurrency.lockutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:32:13 np0005466030 nova_compute[230518]: 2025-10-02 12:32:13.059 2 DEBUG oslo_concurrency.lockutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:32:13 np0005466030 nova_compute[230518]: 2025-10-02 12:32:13.060 2 DEBUG nova.network.neutron [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:32:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:14 np0005466030 nova_compute[230518]: 2025-10-02 12:32:14.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:14.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:14.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:14 np0005466030 nova_compute[230518]: 2025-10-02 12:32:14.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:15 np0005466030 nova_compute[230518]: 2025-10-02 12:32:15.766 2 DEBUG nova.network.neutron [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:32:15 np0005466030 nova_compute[230518]: 2025-10-02 12:32:15.788 2 DEBUG oslo_concurrency.lockutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:32:15 np0005466030 nova_compute[230518]: 2025-10-02 12:32:15.791 2 DEBUG nova.compute.manager [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:16 np0005466030 kernel: tapa3bd0009-d2 (unregistering): left promiscuous mode
Oct  2 08:32:16 np0005466030 NetworkManager[44960]: <info>  [1759408336.4272] device (tapa3bd0009-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:32:16 np0005466030 ovn_controller[129257]: 2025-10-02T12:32:16Z|00331|binding|INFO|Releasing lport a3bd0009-d256-4937-bdad-606abfd076e0 from this chassis (sb_readonly=0)
Oct  2 08:32:16 np0005466030 ovn_controller[129257]: 2025-10-02T12:32:16Z|00332|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 down in Southbound
Oct  2 08:32:16 np0005466030 ovn_controller[129257]: 2025-10-02T12:32:16Z|00333|binding|INFO|Removing iface tapa3bd0009-d2 ovn-installed in OVS
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.458 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:32:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.461 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis#033[00m
Oct  2 08:32:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.466 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f011efa4-0132-405c-bb45-09d0a9352eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:32:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.467 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[94655b03-a2ac-4596-b6c9-afd6e96f6c21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.469 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace which is not needed anymore#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:32:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:16.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:32:16 np0005466030 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Oct  2 08:32:16 np0005466030 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000004d.scope: Consumed 13.895s CPU time.
Oct  2 08:32:16 np0005466030 systemd-machined[188247]: Machine qemu-38-instance-0000004d terminated.
Oct  2 08:32:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:16.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.546 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance destroyed successfully.#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.547 2 DEBUG nova.objects.instance [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.565 2 DEBUG nova.virt.libvirt.vif [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.566 2 DEBUG nova.network.os_vif_util [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.567 2 DEBUG nova.network.os_vif_util [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.567 2 DEBUG os_vif [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.569 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3bd0009-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.581 2 INFO os_vif [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.589 2 DEBUG nova.virt.libvirt.driver [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Start _get_guest_xml network_info=[{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.592 2 WARNING nova.virt.libvirt.driver [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.597 2 DEBUG nova.virt.libvirt.host [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.598 2 DEBUG nova.virt.libvirt.host [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.601 2 DEBUG nova.virt.libvirt.host [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.601 2 DEBUG nova.virt.libvirt.host [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.603 2 DEBUG nova.virt.libvirt.driver [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.603 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.604 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.604 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.605 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.605 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.605 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.605 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.606 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.606 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.606 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.606 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.607 2 DEBUG nova.objects.instance [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.626 2 DEBUG oslo_concurrency.processutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:16 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262072]: [NOTICE]   (262076) : haproxy version is 2.8.14-c23fe91
Oct  2 08:32:16 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262072]: [NOTICE]   (262076) : path to executable is /usr/sbin/haproxy
Oct  2 08:32:16 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262072]: [WARNING]  (262076) : Exiting Master process...
Oct  2 08:32:16 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262072]: [ALERT]    (262076) : Current worker (262078) exited with code 143 (Terminated)
Oct  2 08:32:16 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262072]: [WARNING]  (262076) : All workers exited. Exiting... (0)
Oct  2 08:32:16 np0005466030 systemd[1]: libpod-3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a.scope: Deactivated successfully.
Oct  2 08:32:16 np0005466030 podman[262213]: 2025-10-02 12:32:16.667641431 +0000 UTC m=+0.065721130 container died 3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:32:16 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a-userdata-shm.mount: Deactivated successfully.
Oct  2 08:32:16 np0005466030 systemd[1]: var-lib-containers-storage-overlay-849311bb380bc31d9f4d64ffecfbd0206e5d4571354752fcb4fe618ec50db739-merged.mount: Deactivated successfully.
Oct  2 08:32:16 np0005466030 podman[262213]: 2025-10-02 12:32:16.723157417 +0000 UTC m=+0.121237086 container cleanup 3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:32:16 np0005466030 systemd[1]: libpod-conmon-3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a.scope: Deactivated successfully.
Oct  2 08:32:16 np0005466030 podman[262245]: 2025-10-02 12:32:16.810589188 +0000 UTC m=+0.055941281 container remove 3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:32:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.818 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f044c88a-6fc8-451b-914f-a94fd6d8cb42]: (4, ('Thu Oct  2 12:32:16 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a)\n3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a\nThu Oct  2 12:32:16 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a)\n3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.820 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[28ed2b1e-4ac6-4441-8ddd-901e60d197de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.822 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:16 np0005466030 kernel: tapf011efa4-00: left promiscuous mode
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.860 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6534ad7b-de91-40d4-9766-63df5cd64c0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:16 np0005466030 nova_compute[230518]: 2025-10-02 12:32:16.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.891 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab9725e-7367-4a46-9893-aed74b8a2b36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.893 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4775a2-fd77-48a0-9719-bdf292ad4a75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.920 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0b1349-dceb-46f2-a611-b4cfbe64da2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617468, 'reachable_time': 36269, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262279, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:16 np0005466030 systemd[1]: run-netns-ovnmeta\x2df011efa4\x2d0132\x2d405c\x2dbb45\x2d09d0a9352eff.mount: Deactivated successfully.
Oct  2 08:32:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.927 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:32:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.928 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8896ad-df40-4b49-9198-89a43487bba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:32:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2350958497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.099 2 DEBUG oslo_concurrency.processutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.132 2 DEBUG oslo_concurrency.processutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.366 2 DEBUG nova.compute.manager [req-679fe4d6-f105-43e1-9589-f45da3a5f3bd req-0ffca14f-58bd-4b4d-977f-7e379310b4b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.367 2 DEBUG oslo_concurrency.lockutils [req-679fe4d6-f105-43e1-9589-f45da3a5f3bd req-0ffca14f-58bd-4b4d-977f-7e379310b4b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.368 2 DEBUG oslo_concurrency.lockutils [req-679fe4d6-f105-43e1-9589-f45da3a5f3bd req-0ffca14f-58bd-4b4d-977f-7e379310b4b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.369 2 DEBUG oslo_concurrency.lockutils [req-679fe4d6-f105-43e1-9589-f45da3a5f3bd req-0ffca14f-58bd-4b4d-977f-7e379310b4b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.369 2 DEBUG nova.compute.manager [req-679fe4d6-f105-43e1-9589-f45da3a5f3bd req-0ffca14f-58bd-4b4d-977f-7e379310b4b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.369 2 WARNING nova.compute.manager [req-679fe4d6-f105-43e1-9589-f45da3a5f3bd req-0ffca14f-58bd-4b4d-977f-7e379310b4b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  2 08:32:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:32:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3187127440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.549 2 DEBUG oslo_concurrency.processutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.550 2 DEBUG nova.virt.libvirt.vif [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.551 2 DEBUG nova.network.os_vif_util [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.552 2 DEBUG nova.network.os_vif_util [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.554 2 DEBUG nova.objects.instance [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.573 2 DEBUG nova.virt.libvirt.driver [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:32:17 np0005466030 nova_compute[230518]:  <uuid>3e490470-5e33-4140-95c1-367805364c73</uuid>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:  <name>instance-0000004d</name>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerActionsTestJSON-server-1270476772</nova:name>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:32:16</nova:creationTime>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:32:17 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:        <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:        <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:        <nova:port uuid="a3bd0009-d256-4937-bdad-606abfd076e0">
Oct  2 08:32:17 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <entry name="serial">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <entry name="uuid">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk">
Oct  2 08:32:17 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:32:17 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk.config">
Oct  2 08:32:17 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:32:17 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:7b:e8:97"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <target dev="tapa3bd0009-d2"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/console.log" append="off"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <input type="keyboard" bus="usb"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:32:17 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:32:17 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:32:17 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:32:17 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.576 2 DEBUG nova.virt.libvirt.driver [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.577 2 DEBUG nova.virt.libvirt.driver [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.579 2 DEBUG nova.virt.libvirt.vif [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.579 2 DEBUG nova.network.os_vif_util [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.581 2 DEBUG nova.network.os_vif_util [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.582 2 DEBUG os_vif [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.584 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.584 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.589 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3bd0009-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.590 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3bd0009-d2, col_values=(('external_ids', {'iface-id': 'a3bd0009-d256-4937-bdad-606abfd076e0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:e8:97', 'vm-uuid': '3e490470-5e33-4140-95c1-367805364c73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:17 np0005466030 NetworkManager[44960]: <info>  [1759408337.5940] manager: (tapa3bd0009-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.600 2 INFO os_vif [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')#033[00m
Oct  2 08:32:17 np0005466030 kernel: tapa3bd0009-d2: entered promiscuous mode
Oct  2 08:32:17 np0005466030 NetworkManager[44960]: <info>  [1759408337.7191] manager: (tapa3bd0009-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/159)
Oct  2 08:32:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:32:17Z|00334|binding|INFO|Claiming lport a3bd0009-d256-4937-bdad-606abfd076e0 for this chassis.
Oct  2 08:32:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:32:17Z|00335|binding|INFO|a3bd0009-d256-4937-bdad-606abfd076e0: Claiming fa:16:3e:7b:e8:97 10.100.0.6
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:17 np0005466030 systemd-udevd[262184]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:32:17 np0005466030 podman[262325]: 2025-10-02 12:32:17.729692387 +0000 UTC m=+0.082590350 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:32:17 np0005466030 NetworkManager[44960]: <info>  [1759408337.7490] device (tapa3bd0009-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:32:17 np0005466030 NetworkManager[44960]: <info>  [1759408337.7505] device (tapa3bd0009-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:32:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:32:17Z|00336|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 ovn-installed in OVS
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:17 np0005466030 systemd-machined[188247]: New machine qemu-39-instance-0000004d.
Oct  2 08:32:17 np0005466030 systemd[1]: Started Virtual Machine qemu-39-instance-0000004d.
Oct  2 08:32:17 np0005466030 podman[262324]: 2025-10-02 12:32:17.793570677 +0000 UTC m=+0.155468093 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:32:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:32:17Z|00337|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 up in Southbound
Oct  2 08:32:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.862 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '5', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:32:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.864 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff bound to our chassis#033[00m
Oct  2 08:32:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.867 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff#033[00m
Oct  2 08:32:17 np0005466030 nova_compute[230518]: 2025-10-02 12:32:17.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.885 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[26885a82-8604-4682-bdae-e58733218198]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.887 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf011efa4-01 in ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:32:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.890 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf011efa4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:32:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.890 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3bfa53e5-c4dd-4963-b510-23fa41a14ca1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.892 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ba245f-d9b5-4285-83db-46212a27df49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.910 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1e3b6eb1-c11e-4d90-b3d2-6647ab7b0de0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.943 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5081aae8-c5f7-4795-9b3f-e5dbc5e4fa70]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.991 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[1566ad93-26ef-4c64-970f-ff28f121885d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.002 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4da4fd09-d29f-4f32-a1b8-ed22ad372805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:18 np0005466030 NetworkManager[44960]: <info>  [1759408338.0039] manager: (tapf011efa4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/160)
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.057 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c3849223-2dcf-43d7-b35f-fb9ad8d18849]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.062 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5590dca6-8587-441d-87be-bd964d187ed1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:18 np0005466030 NetworkManager[44960]: <info>  [1759408338.0961] device (tapf011efa4-00): carrier: link connected
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.103 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d0b94d-8536-4a0b-95cc-d51bc9280546]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.126 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2e8e6cc9-45a1-40c2-9e42-55f35412faf6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 620165, 'reachable_time': 36988, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262419, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.155 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b9bf0ef8-2796-4b71-ac13-108eb4c577cf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:1a7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 620165, 'tstamp': 620165}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262420, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.183 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f18423e3-1023-47bd-ab49-7a510e368627]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 620165, 'reachable_time': 36988, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262429, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.232 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f29fa9-bf94-4910-b5d4-a40d9ec975e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.323 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a5f101f1-c5c8-4bbe-8002-18b3dca09512]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.325 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.325 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.326 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:18 np0005466030 kernel: tapf011efa4-00: entered promiscuous mode
Oct  2 08:32:18 np0005466030 nova_compute[230518]: 2025-10-02 12:32:18.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:18 np0005466030 NetworkManager[44960]: <info>  [1759408338.3303] manager: (tapf011efa4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Oct  2 08:32:18 np0005466030 nova_compute[230518]: 2025-10-02 12:32:18.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.337 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:18 np0005466030 nova_compute[230518]: 2025-10-02 12:32:18.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:32:18Z|00338|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct  2 08:32:18 np0005466030 nova_compute[230518]: 2025-10-02 12:32:18.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.367 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.369 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[36c74f14-fb24-4b89-a9a6-d23c14e5ac6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.370 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-f011efa4-0132-405c-bb45-09d0a9352eff
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID f011efa4-0132-405c-bb45-09d0a9352eff
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  2 08:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.371 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'env', 'PROCESS_TAG=haproxy-f011efa4-0132-405c-bb45-09d0a9352eff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f011efa4-0132-405c-bb45-09d0a9352eff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  2 08:32:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.003999965s ======
Oct  2 08:32:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:18.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003999965s
Oct  2 08:32:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:32:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:18.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:32:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:18 np0005466030 podman[262464]: 2025-10-02 12:32:18.821175768 +0000 UTC m=+0.077054064 container create 97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:32:18 np0005466030 systemd[1]: Started libpod-conmon-97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77.scope.
Oct  2 08:32:18 np0005466030 podman[262464]: 2025-10-02 12:32:18.790426141 +0000 UTC m=+0.046304467 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:32:18 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:32:18 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9437f9d440afe338a712b53f4be0d5ad2304ed341c56636f129f0dcfa47d05/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:32:18 np0005466030 podman[262464]: 2025-10-02 12:32:18.917174479 +0000 UTC m=+0.173052815 container init 97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 08:32:18 np0005466030 podman[262464]: 2025-10-02 12:32:18.927529435 +0000 UTC m=+0.183407731 container start 97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 08:32:18 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262501]: [NOTICE]   (262507) : New worker (262509) forked
Oct  2 08:32:18 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262501]: [NOTICE]   (262507) : Loading success.
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.417 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 3e490470-5e33-4140-95c1-367805364c73 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.418 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408339.4169931, 3e490470-5e33-4140-95c1-367805364c73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.418 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Resumed (Lifecycle Event)
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.420 2 DEBUG nova.compute.manager [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.423 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance rebooted successfully.
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.423 2 DEBUG nova.compute.manager [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.446 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.449 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.487 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.487 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408339.4182177, 3e490470-5e33-4140-95c1-367805364c73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.488 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Started (Lifecycle Event)
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.493 2 DEBUG oslo_concurrency.lockutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.508 2 DEBUG nova.compute.manager [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.509 2 DEBUG oslo_concurrency.lockutils [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.509 2 DEBUG oslo_concurrency.lockutils [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.509 2 DEBUG oslo_concurrency.lockutils [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.509 2 DEBUG nova.compute.manager [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.510 2 WARNING nova.compute.manager [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state None.
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.510 2 DEBUG nova.compute.manager [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.510 2 DEBUG oslo_concurrency.lockutils [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.510 2 DEBUG oslo_concurrency.lockutils [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.510 2 DEBUG oslo_concurrency.lockutils [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.511 2 DEBUG nova.compute.manager [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.511 2 WARNING nova.compute.manager [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state None.
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.511 2 DEBUG nova.compute.manager [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.511 2 DEBUG oslo_concurrency.lockutils [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.511 2 DEBUG oslo_concurrency.lockutils [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.512 2 DEBUG oslo_concurrency.lockutils [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.512 2 DEBUG nova.compute.manager [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.512 2 WARNING nova.compute.manager [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state None.
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.517 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.520 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:32:19 np0005466030 nova_compute[230518]: 2025-10-02 12:32:19.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:32:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:32:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:20.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:32:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:20.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:20 np0005466030 nova_compute[230518]: 2025-10-02 12:32:20.657 2 INFO nova.compute.manager [None req-0db206ec-f07e-4c6b-bab7-4f0e6de41099 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Get console output
Oct  2 08:32:20 np0005466030 nova_compute[230518]: 2025-10-02 12:32:20.662 2 INFO oslo.privsep.daemon [None req-0db206ec-f07e-4c6b-bab7-4f0e6de41099 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpq1o9xy_t/privsep.sock']
Oct  2 08:32:21 np0005466030 nova_compute[230518]: 2025-10-02 12:32:21.420 2 INFO oslo.privsep.daemon [None req-0db206ec-f07e-4c6b-bab7-4f0e6de41099 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Spawned new privsep daemon via rootwrap
Oct  2 08:32:21 np0005466030 nova_compute[230518]: 2025-10-02 12:32:21.297 13161 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct  2 08:32:21 np0005466030 nova_compute[230518]: 2025-10-02 12:32:21.300 13161 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct  2 08:32:21 np0005466030 nova_compute[230518]: 2025-10-02 12:32:21.302 13161 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct  2 08:32:21 np0005466030 nova_compute[230518]: 2025-10-02 12:32:21.302 13161 INFO oslo.privsep.daemon [-] privsep daemon running as pid 13161
Oct  2 08:32:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:32:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:22.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:32:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:32:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:22.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:32:22 np0005466030 nova_compute[230518]: 2025-10-02 12:32:22.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:32:22 np0005466030 nova_compute[230518]: 2025-10-02 12:32:22.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:32:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:24.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:32:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:24.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:32:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:25.929 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:32:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:25.930 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:32:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:25.931 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:32:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:26.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:26.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:27 np0005466030 nova_compute[230518]: 2025-10-02 12:32:27.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:32:27 np0005466030 nova_compute[230518]: 2025-10-02 12:32:27.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:32:27 np0005466030 nova_compute[230518]: 2025-10-02 12:32:27.917 2 DEBUG oslo_concurrency.lockutils [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:32:27 np0005466030 nova_compute[230518]: 2025-10-02 12:32:27.917 2 DEBUG oslo_concurrency.lockutils [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:32:27 np0005466030 nova_compute[230518]: 2025-10-02 12:32:27.917 2 DEBUG nova.compute.manager [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:32:27 np0005466030 nova_compute[230518]: 2025-10-02 12:32:27.920 2 DEBUG nova.compute.manager [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct  2 08:32:27 np0005466030 nova_compute[230518]: 2025-10-02 12:32:27.921 2 DEBUG nova.objects.instance [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'flavor' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:32:27 np0005466030 nova_compute[230518]: 2025-10-02 12:32:27.947 2 DEBUG nova.virt.libvirt.driver [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct  2 08:32:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:28.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:28.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:29 np0005466030 podman[262524]: 2025-10-02 12:32:29.81615856 +0000 UTC m=+0.061268578 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:32:29 np0005466030 podman[262525]: 2025-10-02 12:32:29.820210058 +0000 UTC m=+0.064635195 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:32:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:30.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:30.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:31 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct  2 08:32:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:32:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:32.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:32:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:32.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:32 np0005466030 nova_compute[230518]: 2025-10-02 12:32:32.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:32 np0005466030 nova_compute[230518]: 2025-10-02 12:32:32.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:33 np0005466030 ovn_controller[129257]: 2025-10-02T12:32:33Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:e8:97 10.100.0.6
Oct  2 08:32:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:34.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:34.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:36.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:36.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:37 np0005466030 nova_compute[230518]: 2025-10-02 12:32:37.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:37 np0005466030 nova_compute[230518]: 2025-10-02 12:32:37.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:37 np0005466030 nova_compute[230518]: 2025-10-02 12:32:37.996 2 DEBUG nova.virt.libvirt.driver [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:32:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:38.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:38.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:32:39 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3695580193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:32:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:40.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:40.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:41 np0005466030 nova_compute[230518]: 2025-10-02 12:32:41.013 2 INFO nova.virt.libvirt.driver [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 08:32:41 np0005466030 kernel: tapa3bd0009-d2 (unregistering): left promiscuous mode
Oct  2 08:32:41 np0005466030 NetworkManager[44960]: <info>  [1759408361.7659] device (tapa3bd0009-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:32:41 np0005466030 ovn_controller[129257]: 2025-10-02T12:32:41Z|00339|binding|INFO|Releasing lport a3bd0009-d256-4937-bdad-606abfd076e0 from this chassis (sb_readonly=0)
Oct  2 08:32:41 np0005466030 ovn_controller[129257]: 2025-10-02T12:32:41Z|00340|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 down in Southbound
Oct  2 08:32:41 np0005466030 ovn_controller[129257]: 2025-10-02T12:32:41Z|00341|binding|INFO|Removing iface tapa3bd0009-d2 ovn-installed in OVS
Oct  2 08:32:41 np0005466030 nova_compute[230518]: 2025-10-02 12:32:41.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:41 np0005466030 nova_compute[230518]: 2025-10-02 12:32:41.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:41.780 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:32:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:41.782 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis#033[00m
Oct  2 08:32:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:41.784 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f011efa4-0132-405c-bb45-09d0a9352eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:32:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:41.785 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ea120be1-cb79-4e50-9e58-dc63ae951785]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:41.785 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace which is not needed anymore#033[00m
Oct  2 08:32:41 np0005466030 nova_compute[230518]: 2025-10-02 12:32:41.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:41 np0005466030 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Oct  2 08:32:41 np0005466030 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000004d.scope: Consumed 14.162s CPU time.
Oct  2 08:32:41 np0005466030 systemd-machined[188247]: Machine qemu-39-instance-0000004d terminated.
Oct  2 08:32:41 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262501]: [NOTICE]   (262507) : haproxy version is 2.8.14-c23fe91
Oct  2 08:32:41 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262501]: [NOTICE]   (262507) : path to executable is /usr/sbin/haproxy
Oct  2 08:32:41 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262501]: [WARNING]  (262507) : Exiting Master process...
Oct  2 08:32:41 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262501]: [WARNING]  (262507) : Exiting Master process...
Oct  2 08:32:41 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262501]: [ALERT]    (262507) : Current worker (262509) exited with code 143 (Terminated)
Oct  2 08:32:41 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262501]: [WARNING]  (262507) : All workers exited. Exiting... (0)
Oct  2 08:32:41 np0005466030 systemd[1]: libpod-97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77.scope: Deactivated successfully.
Oct  2 08:32:41 np0005466030 podman[262587]: 2025-10-02 12:32:41.959686939 +0000 UTC m=+0.060761272 container died 97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:32:42 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77-userdata-shm.mount: Deactivated successfully.
Oct  2 08:32:42 np0005466030 systemd[1]: var-lib-containers-storage-overlay-ba9437f9d440afe338a712b53f4be0d5ad2304ed341c56636f129f0dcfa47d05-merged.mount: Deactivated successfully.
Oct  2 08:32:42 np0005466030 nova_compute[230518]: 2025-10-02 12:32:42.065 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance destroyed successfully.#033[00m
Oct  2 08:32:42 np0005466030 nova_compute[230518]: 2025-10-02 12:32:42.065 2 DEBUG nova.objects.instance [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:42 np0005466030 nova_compute[230518]: 2025-10-02 12:32:42.079 2 DEBUG nova.compute.manager [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:42 np0005466030 podman[262587]: 2025-10-02 12:32:42.11195036 +0000 UTC m=+0.213024713 container cleanup 97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:32:42 np0005466030 systemd[1]: libpod-conmon-97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77.scope: Deactivated successfully.
Oct  2 08:32:42 np0005466030 nova_compute[230518]: 2025-10-02 12:32:42.129 2 DEBUG oslo_concurrency.lockutils [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 14.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:42 np0005466030 podman[262630]: 2025-10-02 12:32:42.279781641 +0000 UTC m=+0.131224760 container remove 97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 08:32:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:42.291 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a1e4e2-62fa-4e79-b327-f6e4d5f83065]: (4, ('Thu Oct  2 12:32:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77)\n97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77\nThu Oct  2 12:32:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77)\n97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:42.293 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[92b29d48-6b53-4c31-98a1-41c435e89fd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:42.294 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:42 np0005466030 kernel: tapf011efa4-00: left promiscuous mode
Oct  2 08:32:42 np0005466030 nova_compute[230518]: 2025-10-02 12:32:42.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:42 np0005466030 nova_compute[230518]: 2025-10-02 12:32:42.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:42.330 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3074b181-d6bc-4b87-93de-9538f847702e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:42.357 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef48f6f-0eb7-459c-a564-47ef3c968368]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:42.359 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[83c6e85f-b3cf-48a2-96dd-68a1d29040f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:42.386 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e09623b6-f8a6-499a-a5d9-8d10238e86fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 620154, 'reachable_time': 36565, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262648, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:42 np0005466030 systemd[1]: run-netns-ovnmeta\x2df011efa4\x2d0132\x2d405c\x2dbb45\x2d09d0a9352eff.mount: Deactivated successfully.
Oct  2 08:32:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:42.392 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:32:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:42.392 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[4be9ba8d-b67c-4874-bc2a-485d772c46fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:42.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:32:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:42.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:32:42 np0005466030 nova_compute[230518]: 2025-10-02 12:32:42.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e243 e243: 3 total, 3 up, 3 in
Oct  2 08:32:42 np0005466030 nova_compute[230518]: 2025-10-02 12:32:42.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:42 np0005466030 nova_compute[230518]: 2025-10-02 12:32:42.908 2 DEBUG nova.compute.manager [req-0937c8b6-3c25-4245-ad85-5832ee9360b5 req-335c84e7-f2f9-4d91-a8ab-e4cde20b80ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:42 np0005466030 nova_compute[230518]: 2025-10-02 12:32:42.909 2 DEBUG oslo_concurrency.lockutils [req-0937c8b6-3c25-4245-ad85-5832ee9360b5 req-335c84e7-f2f9-4d91-a8ab-e4cde20b80ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:42 np0005466030 nova_compute[230518]: 2025-10-02 12:32:42.909 2 DEBUG oslo_concurrency.lockutils [req-0937c8b6-3c25-4245-ad85-5832ee9360b5 req-335c84e7-f2f9-4d91-a8ab-e4cde20b80ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:42 np0005466030 nova_compute[230518]: 2025-10-02 12:32:42.910 2 DEBUG oslo_concurrency.lockutils [req-0937c8b6-3c25-4245-ad85-5832ee9360b5 req-335c84e7-f2f9-4d91-a8ab-e4cde20b80ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:42 np0005466030 nova_compute[230518]: 2025-10-02 12:32:42.910 2 DEBUG nova.compute.manager [req-0937c8b6-3c25-4245-ad85-5832ee9360b5 req-335c84e7-f2f9-4d91-a8ab-e4cde20b80ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:42 np0005466030 nova_compute[230518]: 2025-10-02 12:32:42.911 2 WARNING nova.compute.manager [req-0937c8b6-3c25-4245-ad85-5832ee9360b5 req-335c84e7-f2f9-4d91-a8ab-e4cde20b80ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state stopped and task_state None.#033[00m
Oct  2 08:32:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:44 np0005466030 nova_compute[230518]: 2025-10-02 12:32:44.346 2 DEBUG nova.objects.instance [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'flavor' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:44 np0005466030 nova_compute[230518]: 2025-10-02 12:32:44.376 2 DEBUG oslo_concurrency.lockutils [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:32:44 np0005466030 nova_compute[230518]: 2025-10-02 12:32:44.377 2 DEBUG oslo_concurrency.lockutils [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:32:44 np0005466030 nova_compute[230518]: 2025-10-02 12:32:44.377 2 DEBUG nova.network.neutron [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:32:44 np0005466030 nova_compute[230518]: 2025-10-02 12:32:44.377 2 DEBUG nova.objects.instance [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'info_cache' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:44.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:32:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:44.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:32:44 np0005466030 nova_compute[230518]: 2025-10-02 12:32:44.997 2 DEBUG nova.compute.manager [req-e4e32990-2835-4ca5-9a81-e79ca2b607d4 req-de8ec6c9-ad38-4177-b532-3c3822acc7e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:44 np0005466030 nova_compute[230518]: 2025-10-02 12:32:44.997 2 DEBUG oslo_concurrency.lockutils [req-e4e32990-2835-4ca5-9a81-e79ca2b607d4 req-de8ec6c9-ad38-4177-b532-3c3822acc7e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:44 np0005466030 nova_compute[230518]: 2025-10-02 12:32:44.998 2 DEBUG oslo_concurrency.lockutils [req-e4e32990-2835-4ca5-9a81-e79ca2b607d4 req-de8ec6c9-ad38-4177-b532-3c3822acc7e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:44 np0005466030 nova_compute[230518]: 2025-10-02 12:32:44.998 2 DEBUG oslo_concurrency.lockutils [req-e4e32990-2835-4ca5-9a81-e79ca2b607d4 req-de8ec6c9-ad38-4177-b532-3c3822acc7e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:44 np0005466030 nova_compute[230518]: 2025-10-02 12:32:44.999 2 DEBUG nova.compute.manager [req-e4e32990-2835-4ca5-9a81-e79ca2b607d4 req-de8ec6c9-ad38-4177-b532-3c3822acc7e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:45 np0005466030 nova_compute[230518]: 2025-10-02 12:32:44.999 2 WARNING nova.compute.manager [req-e4e32990-2835-4ca5-9a81-e79ca2b607d4 req-de8ec6c9-ad38-4177-b532-3c3822acc7e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state stopped and task_state powering-on.#033[00m
Oct  2 08:32:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:46.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:46.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:46 np0005466030 nova_compute[230518]: 2025-10-02 12:32:46.910 2 DEBUG nova.network.neutron [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:32:46 np0005466030 nova_compute[230518]: 2025-10-02 12:32:46.942 2 DEBUG oslo_concurrency.lockutils [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:32:46 np0005466030 nova_compute[230518]: 2025-10-02 12:32:46.983 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance destroyed successfully.#033[00m
Oct  2 08:32:46 np0005466030 nova_compute[230518]: 2025-10-02 12:32:46.984 2 DEBUG nova.objects.instance [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.000 2 DEBUG nova.objects.instance [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.013 2 DEBUG nova.virt.libvirt.vif [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.014 2 DEBUG nova.network.os_vif_util [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.016 2 DEBUG nova.network.os_vif_util [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.016 2 DEBUG os_vif [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.019 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3bd0009-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.028 2 INFO os_vif [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.036 2 DEBUG nova.virt.libvirt.driver [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Start _get_guest_xml network_info=[{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.040 2 WARNING nova.virt.libvirt.driver [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.045 2 DEBUG nova.virt.libvirt.host [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.046 2 DEBUG nova.virt.libvirt.host [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.049 2 DEBUG nova.virt.libvirt.host [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.049 2 DEBUG nova.virt.libvirt.host [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.050 2 DEBUG nova.virt.libvirt.driver [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.051 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.051 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.051 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.052 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.052 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.052 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.052 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.052 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.053 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.053 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.053 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.053 2 DEBUG nova.objects.instance [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.055 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.074 2 DEBUG oslo_concurrency.processutils [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.105 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.105 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.106 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.106 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.106 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:32:47 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4025293043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:32:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:32:47 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/899398213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.515 2 DEBUG oslo_concurrency.processutils [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.556 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.560 2 DEBUG oslo_concurrency.processutils [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.749 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.750 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4532MB free_disk=20.845951080322266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.750 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.751 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.885 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3e490470-5e33-4140-95c1-367805364c73 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.885 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.885 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:32:47 np0005466030 nova_compute[230518]: 2025-10-02 12:32:47.925 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:32:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:32:47 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/586986182' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.098 2 DEBUG oslo_concurrency.processutils [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.102 2 DEBUG nova.virt.libvirt.vif [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.102 2 DEBUG nova.network.os_vif_util [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.104 2 DEBUG nova.network.os_vif_util [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.108 2 DEBUG nova.objects.instance [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.131 2 DEBUG nova.virt.libvirt.driver [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:32:48 np0005466030 nova_compute[230518]:  <uuid>3e490470-5e33-4140-95c1-367805364c73</uuid>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:  <name>instance-0000004d</name>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerActionsTestJSON-server-1270476772</nova:name>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:32:47</nova:creationTime>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:32:48 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:        <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:        <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:        <nova:port uuid="a3bd0009-d256-4937-bdad-606abfd076e0">
Oct  2 08:32:48 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <entry name="serial">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <entry name="uuid">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk">
Oct  2 08:32:48 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:32:48 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk.config">
Oct  2 08:32:48 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:32:48 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:7b:e8:97"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <target dev="tapa3bd0009-d2"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/console.log" append="off"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <input type="keyboard" bus="usb"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:32:48 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:32:48 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:32:48 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:32:48 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.132 2 DEBUG nova.virt.libvirt.driver [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.132 2 DEBUG nova.virt.libvirt.driver [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.133 2 DEBUG nova.virt.libvirt.vif [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.134 2 DEBUG nova.network.os_vif_util [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.134 2 DEBUG nova.network.os_vif_util [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.135 2 DEBUG os_vif [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.136 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.137 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.141 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3bd0009-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.142 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3bd0009-d2, col_values=(('external_ids', {'iface-id': 'a3bd0009-d256-4937-bdad-606abfd076e0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:e8:97', 'vm-uuid': '3e490470-5e33-4140-95c1-367805364c73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:48 np0005466030 NetworkManager[44960]: <info>  [1759408368.2016] manager: (tapa3bd0009-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.206 2 INFO os_vif [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')#033[00m
Oct  2 08:32:48 np0005466030 kernel: tapa3bd0009-d2: entered promiscuous mode
Oct  2 08:32:48 np0005466030 NetworkManager[44960]: <info>  [1759408368.2829] manager: (tapa3bd0009-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:32:48Z|00342|binding|INFO|Claiming lport a3bd0009-d256-4937-bdad-606abfd076e0 for this chassis.
Oct  2 08:32:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:32:48Z|00343|binding|INFO|a3bd0009-d256-4937-bdad-606abfd076e0: Claiming fa:16:3e:7b:e8:97 10.100.0.6
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.294 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '7', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.295 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff bound to our chassis#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.297 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.308 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e730b237-c36f-49c3-906a-89092c4cd5a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.309 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf011efa4-01 in ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.317 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf011efa4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.317 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f61f78b2-322c-4e4a-a721-5fc627405337]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:32:48Z|00344|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 ovn-installed in OVS
Oct  2 08:32:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:32:48Z|00345|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 up in Southbound
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.319 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b632fe3d-1b7e-4a38-97e5-ce47d70c3540]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.336 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[ad134e1b-4bfc-4c0b-87b8-3a9c1dd77a68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:48 np0005466030 systemd-machined[188247]: New machine qemu-40-instance-0000004d.
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.355 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[93600119-0e37-41ab-ba03-fed435e58dd2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:48 np0005466030 systemd[1]: Started Virtual Machine qemu-40-instance-0000004d.
Oct  2 08:32:48 np0005466030 systemd-udevd[262798]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.392 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ce96bf55-0518-42cc-9928-d59f49522ad5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:32:48 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1878121547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:32:48 np0005466030 NetworkManager[44960]: <info>  [1759408368.3985] device (tapa3bd0009-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:32:48 np0005466030 NetworkManager[44960]: <info>  [1759408368.3996] device (tapa3bd0009-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.401 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb2f08e-4a3e-4d75-bbaa-d47eb77ef1e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:48 np0005466030 systemd-udevd[262813]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:32:48 np0005466030 NetworkManager[44960]: <info>  [1759408368.4051] manager: (tapf011efa4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/164)
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.429 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.431 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[811a20cc-24a0-474e-92e4-e4419d9fdce9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.434 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0a2aa7fd-c127-43db-88ef-1c8abee9bed3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:48 np0005466030 podman[262764]: 2025-10-02 12:32:48.449999058 +0000 UTC m=+0.123319251 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:32:48 np0005466030 podman[262767]: 2025-10-02 12:32:48.451980481 +0000 UTC m=+0.116556719 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.452 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:32:48 np0005466030 NetworkManager[44960]: <info>  [1759408368.4615] device (tapf011efa4-00): carrier: link connected
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.470 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.475 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[45bbd6dd-e6b8-47c7-b593-648848960029]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.494 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.494 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.498 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3990e7ca-8360-49d2-9391-9842d58dde60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 623201, 'reachable_time': 33175, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262846, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.514 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4d54b3a8-6e51-4a58-adde-934a8a64fd73]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:1a7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 623201, 'tstamp': 623201}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262847, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:32:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:48.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.532 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd7de60-5813-4dc5-91a8-5f1c6539cac7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 623201, 'reachable_time': 33175, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262848, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.563 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[33f6e923-2dd2-4f72-ae53-f87c489587f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:32:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:48.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:32:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.626 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1efb22-ecf0-4ae8-8297-dee03cccb00b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.628 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.628 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.629 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:48 np0005466030 kernel: tapf011efa4-00: entered promiscuous mode
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:48 np0005466030 NetworkManager[44960]: <info>  [1759408368.6318] manager: (tapf011efa4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.638 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:32:48Z|00346|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.642 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.644 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[df09561f-43af-45b2-b0aa-6e1bd92c517c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.645 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-f011efa4-0132-405c-bb45-09d0a9352eff
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID f011efa4-0132-405c-bb45-09d0a9352eff
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:32:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.646 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'env', 'PROCESS_TAG=haproxy-f011efa4-0132-405c-bb45-09d0a9352eff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f011efa4-0132-405c-bb45-09d0a9352eff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:32:48 np0005466030 nova_compute[230518]: 2025-10-02 12:32:48.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:49 np0005466030 podman[262922]: 2025-10-02 12:32:49.052462884 +0000 UTC m=+0.031410829 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:32:49 np0005466030 podman[262922]: 2025-10-02 12:32:49.22745818 +0000 UTC m=+0.206406105 container create 1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:32:49 np0005466030 systemd[1]: Started libpod-conmon-1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a.scope.
Oct  2 08:32:49 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:32:49 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bba23b9540567d83dd9a5ca70200357f97da436b9841f2d13c96266b6db2d85/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:32:49 np0005466030 podman[262922]: 2025-10-02 12:32:49.404330825 +0000 UTC m=+0.383278710 container init 1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:32:49 np0005466030 podman[262922]: 2025-10-02 12:32:49.416459557 +0000 UTC m=+0.395407482 container start 1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:32:49 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262937]: [NOTICE]   (262941) : New worker (262943) forked
Oct  2 08:32:49 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262937]: [NOTICE]   (262941) : Loading success.
Oct  2 08:32:49 np0005466030 nova_compute[230518]: 2025-10-02 12:32:49.487 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:49 np0005466030 nova_compute[230518]: 2025-10-02 12:32:49.531 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 3e490470-5e33-4140-95c1-367805364c73 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:32:49 np0005466030 nova_compute[230518]: 2025-10-02 12:32:49.532 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408369.5309772, 3e490470-5e33-4140-95c1-367805364c73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:32:49 np0005466030 nova_compute[230518]: 2025-10-02 12:32:49.532 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:32:49 np0005466030 nova_compute[230518]: 2025-10-02 12:32:49.534 2 DEBUG nova.compute.manager [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:32:49 np0005466030 nova_compute[230518]: 2025-10-02 12:32:49.538 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance rebooted successfully.#033[00m
Oct  2 08:32:49 np0005466030 nova_compute[230518]: 2025-10-02 12:32:49.538 2 DEBUG nova.compute.manager [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:49 np0005466030 nova_compute[230518]: 2025-10-02 12:32:49.560 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:49 np0005466030 nova_compute[230518]: 2025-10-02 12:32:49.565 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:32:49 np0005466030 nova_compute[230518]: 2025-10-02 12:32:49.598 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Oct  2 08:32:49 np0005466030 nova_compute[230518]: 2025-10-02 12:32:49.599 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408369.5325024, 3e490470-5e33-4140-95c1-367805364c73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:32:49 np0005466030 nova_compute[230518]: 2025-10-02 12:32:49.599 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Started (Lifecycle Event)#033[00m
Oct  2 08:32:49 np0005466030 nova_compute[230518]: 2025-10-02 12:32:49.623 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:49 np0005466030 nova_compute[230518]: 2025-10-02 12:32:49.627 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:32:50 np0005466030 nova_compute[230518]: 2025-10-02 12:32:50.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:50 np0005466030 nova_compute[230518]: 2025-10-02 12:32:50.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:50.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:50 np0005466030 nova_compute[230518]: 2025-10-02 12:32:50.545 2 DEBUG nova.compute.manager [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:50 np0005466030 nova_compute[230518]: 2025-10-02 12:32:50.545 2 DEBUG oslo_concurrency.lockutils [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:50 np0005466030 nova_compute[230518]: 2025-10-02 12:32:50.545 2 DEBUG oslo_concurrency.lockutils [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:50 np0005466030 nova_compute[230518]: 2025-10-02 12:32:50.546 2 DEBUG oslo_concurrency.lockutils [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:50 np0005466030 nova_compute[230518]: 2025-10-02 12:32:50.546 2 DEBUG nova.compute.manager [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:50 np0005466030 nova_compute[230518]: 2025-10-02 12:32:50.546 2 WARNING nova.compute.manager [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:32:50 np0005466030 nova_compute[230518]: 2025-10-02 12:32:50.546 2 DEBUG nova.compute.manager [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:32:50 np0005466030 nova_compute[230518]: 2025-10-02 12:32:50.546 2 DEBUG oslo_concurrency.lockutils [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:32:50 np0005466030 nova_compute[230518]: 2025-10-02 12:32:50.546 2 DEBUG oslo_concurrency.lockutils [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:32:50 np0005466030 nova_compute[230518]: 2025-10-02 12:32:50.547 2 DEBUG oslo_concurrency.lockutils [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:32:50 np0005466030 nova_compute[230518]: 2025-10-02 12:32:50.547 2 DEBUG nova.compute.manager [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:32:50 np0005466030 nova_compute[230518]: 2025-10-02 12:32:50.547 2 WARNING nova.compute.manager [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:32:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:50.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:51 np0005466030 nova_compute[230518]: 2025-10-02 12:32:51.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:51 np0005466030 nova_compute[230518]: 2025-10-02 12:32:51.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:32:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e244 e244: 3 total, 3 up, 3 in
Oct  2 08:32:52 np0005466030 nova_compute[230518]: 2025-10-02 12:32:52.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:32:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:52.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:32:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:32:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:52.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:32:52 np0005466030 nova_compute[230518]: 2025-10-02 12:32:52.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:53 np0005466030 nova_compute[230518]: 2025-10-02 12:32:53.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:53 np0005466030 nova_compute[230518]: 2025-10-02 12:32:53.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:32:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:32:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:54.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:32:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:54.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:54 np0005466030 nova_compute[230518]: 2025-10-02 12:32:54.760 2 INFO nova.compute.manager [None req-66845643-3409-45b4-8845-b56973f06b48 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Pausing#033[00m
Oct  2 08:32:54 np0005466030 nova_compute[230518]: 2025-10-02 12:32:54.762 2 DEBUG nova.objects.instance [None req-66845643-3409-45b4-8845-b56973f06b48 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'flavor' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:54 np0005466030 nova_compute[230518]: 2025-10-02 12:32:54.797 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408374.79743, 3e490470-5e33-4140-95c1-367805364c73 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:32:54 np0005466030 nova_compute[230518]: 2025-10-02 12:32:54.799 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:32:54 np0005466030 nova_compute[230518]: 2025-10-02 12:32:54.801 2 DEBUG nova.compute.manager [None req-66845643-3409-45b4-8845-b56973f06b48 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:54 np0005466030 nova_compute[230518]: 2025-10-02 12:32:54.839 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:54 np0005466030 nova_compute[230518]: 2025-10-02 12:32:54.845 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:32:54 np0005466030 nova_compute[230518]: 2025-10-02 12:32:54.872 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Oct  2 08:32:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:32:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:56.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:32:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:32:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:56.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:32:56 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:32:56 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:32:56 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:32:57 np0005466030 nova_compute[230518]: 2025-10-02 12:32:57.003 2 INFO nova.compute.manager [None req-44648571-a3ed-468e-8274-8b9fa0bd4b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Unpausing#033[00m
Oct  2 08:32:57 np0005466030 nova_compute[230518]: 2025-10-02 12:32:57.005 2 DEBUG nova.objects.instance [None req-44648571-a3ed-468e-8274-8b9fa0bd4b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'flavor' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:32:57 np0005466030 nova_compute[230518]: 2025-10-02 12:32:57.048 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408377.048426, 3e490470-5e33-4140-95c1-367805364c73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:32:57 np0005466030 virtqemud[230067]: argument unsupported: QEMU guest agent is not configured
Oct  2 08:32:57 np0005466030 nova_compute[230518]: 2025-10-02 12:32:57.049 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:32:57 np0005466030 nova_compute[230518]: 2025-10-02 12:32:57.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:32:57 np0005466030 nova_compute[230518]: 2025-10-02 12:32:57.052 2 DEBUG nova.virt.libvirt.guest [None req-44648571-a3ed-468e-8274-8b9fa0bd4b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 08:32:57 np0005466030 nova_compute[230518]: 2025-10-02 12:32:57.053 2 DEBUG nova.compute.manager [None req-44648571-a3ed-468e-8274-8b9fa0bd4b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:57 np0005466030 nova_compute[230518]: 2025-10-02 12:32:57.082 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:32:57 np0005466030 nova_compute[230518]: 2025-10-02 12:32:57.085 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:32:57 np0005466030 nova_compute[230518]: 2025-10-02 12:32:57.117 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Oct  2 08:32:57 np0005466030 nova_compute[230518]: 2025-10-02 12:32:57.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:58 np0005466030 nova_compute[230518]: 2025-10-02 12:32:58.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:32:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:32:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:58.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:32:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:32:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:32:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:58.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:32:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:33:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:00.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:33:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:33:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:00.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:33:00 np0005466030 podman[263084]: 2025-10-02 12:33:00.862742108 +0000 UTC m=+0.099104570 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 08:33:00 np0005466030 podman[263085]: 2025-10-02 12:33:00.866981891 +0000 UTC m=+0.097968744 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct  2 08:33:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 e245: 3 total, 3 up, 3 in
Oct  2 08:33:01 np0005466030 nova_compute[230518]: 2025-10-02 12:33:01.055 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:01 np0005466030 nova_compute[230518]: 2025-10-02 12:33:01.056 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:33:01 np0005466030 nova_compute[230518]: 2025-10-02 12:33:01.056 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:33:01 np0005466030 nova_compute[230518]: 2025-10-02 12:33:01.254 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:33:01 np0005466030 nova_compute[230518]: 2025-10-02 12:33:01.255 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:33:01 np0005466030 nova_compute[230518]: 2025-10-02 12:33:01.256 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:33:01 np0005466030 nova_compute[230518]: 2025-10-02 12:33:01.256 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:33:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:02.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:33:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:33:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:02.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:33:02 np0005466030 nova_compute[230518]: 2025-10-02 12:33:02.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:03 np0005466030 nova_compute[230518]: 2025-10-02 12:33:03.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:03 np0005466030 nova_compute[230518]: 2025-10-02 12:33:03.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:03.391 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:33:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:03.393 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:33:03 np0005466030 nova_compute[230518]: 2025-10-02 12:33:03.615 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:03 np0005466030 nova_compute[230518]: 2025-10-02 12:33:03.636 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:33:03 np0005466030 nova_compute[230518]: 2025-10-02 12:33:03.637 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:33:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:04.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:04.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:33:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:33:05 np0005466030 ovn_controller[129257]: 2025-10-02T12:33:05Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:e8:97 10.100.0.6
Oct  2 08:33:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:06.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:33:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:06.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:33:07 np0005466030 nova_compute[230518]: 2025-10-02 12:33:07.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:08 np0005466030 nova_compute[230518]: 2025-10-02 12:33:08.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:33:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:08.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:33:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:08.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:10.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:10.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:12 np0005466030 nova_compute[230518]: 2025-10-02 12:33:12.367 2 DEBUG oslo_concurrency.lockutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:12 np0005466030 nova_compute[230518]: 2025-10-02 12:33:12.368 2 DEBUG oslo_concurrency.lockutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:12 np0005466030 nova_compute[230518]: 2025-10-02 12:33:12.368 2 INFO nova.compute.manager [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Rebooting instance#033[00m
Oct  2 08:33:12 np0005466030 nova_compute[230518]: 2025-10-02 12:33:12.381 2 DEBUG oslo_concurrency.lockutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:33:12 np0005466030 nova_compute[230518]: 2025-10-02 12:33:12.382 2 DEBUG oslo_concurrency.lockutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:33:12 np0005466030 nova_compute[230518]: 2025-10-02 12:33:12.382 2 DEBUG nova.network.neutron [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:33:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:12.396 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:12.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:12.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:12 np0005466030 nova_compute[230518]: 2025-10-02 12:33:12.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:13 np0005466030 nova_compute[230518]: 2025-10-02 12:33:13.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:33:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:14.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:33:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:14.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:15 np0005466030 nova_compute[230518]: 2025-10-02 12:33:15.537 2 DEBUG nova.network.neutron [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:33:15 np0005466030 nova_compute[230518]: 2025-10-02 12:33:15.575 2 DEBUG oslo_concurrency.lockutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:33:15 np0005466030 nova_compute[230518]: 2025-10-02 12:33:15.578 2 DEBUG nova.compute.manager [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:16.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:16 np0005466030 kernel: tapa3bd0009-d2 (unregistering): left promiscuous mode
Oct  2 08:33:16 np0005466030 NetworkManager[44960]: <info>  [1759408396.5816] device (tapa3bd0009-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:33:16 np0005466030 ovn_controller[129257]: 2025-10-02T12:33:16Z|00347|binding|INFO|Releasing lport a3bd0009-d256-4937-bdad-606abfd076e0 from this chassis (sb_readonly=0)
Oct  2 08:33:16 np0005466030 ovn_controller[129257]: 2025-10-02T12:33:16Z|00348|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 down in Southbound
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:16 np0005466030 ovn_controller[129257]: 2025-10-02T12:33:16Z|00349|binding|INFO|Removing iface tapa3bd0009-d2 ovn-installed in OVS
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:16.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:16 np0005466030 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Oct  2 08:33:16 np0005466030 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d0000004d.scope: Consumed 14.442s CPU time.
Oct  2 08:33:16 np0005466030 systemd-machined[188247]: Machine qemu-40-instance-0000004d terminated.
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.746 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance destroyed successfully.#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.747 2 DEBUG nova.objects.instance [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.948 2 DEBUG nova.virt.libvirt.vif [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.949 2 DEBUG nova.network.os_vif_util [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.950 2 DEBUG nova.network.os_vif_util [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.951 2 DEBUG os_vif [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.954 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3bd0009-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.962 2 INFO os_vif [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.973 2 DEBUG nova.virt.libvirt.driver [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Start _get_guest_xml network_info=[{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.978 2 WARNING nova.virt.libvirt.driver [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.986 2 DEBUG nova.virt.libvirt.host [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.988 2 DEBUG nova.virt.libvirt.host [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.991 2 DEBUG nova.virt.libvirt.host [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.992 2 DEBUG nova.virt.libvirt.host [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.994 2 DEBUG nova.virt.libvirt.driver [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.994 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.995 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.996 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.996 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.996 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.997 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.997 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.998 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.998 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:33:16 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.999 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:33:17 np0005466030 nova_compute[230518]: 2025-10-02 12:33:16.999 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:33:17 np0005466030 nova_compute[230518]: 2025-10-02 12:33:17.000 2 DEBUG nova.objects.instance [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:17.000 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '8', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:33:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:17.002 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis#033[00m
Oct  2 08:33:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:17.005 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f011efa4-0132-405c-bb45-09d0a9352eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:33:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:17.007 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1fffd78c-f85d-4a47-9dd9-c1f5b5580093]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:17.007 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace which is not needed anymore#033[00m
Oct  2 08:33:17 np0005466030 nova_compute[230518]: 2025-10-02 12:33:17.063 2 DEBUG oslo_concurrency.processutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:17 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262937]: [NOTICE]   (262941) : haproxy version is 2.8.14-c23fe91
Oct  2 08:33:17 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262937]: [NOTICE]   (262941) : path to executable is /usr/sbin/haproxy
Oct  2 08:33:17 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262937]: [WARNING]  (262941) : Exiting Master process...
Oct  2 08:33:17 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262937]: [ALERT]    (262941) : Current worker (262943) exited with code 143 (Terminated)
Oct  2 08:33:17 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262937]: [WARNING]  (262941) : All workers exited. Exiting... (0)
Oct  2 08:33:17 np0005466030 systemd[1]: libpod-1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a.scope: Deactivated successfully.
Oct  2 08:33:17 np0005466030 podman[263210]: 2025-10-02 12:33:17.478002172 +0000 UTC m=+0.330076136 container died 1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:33:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:33:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4171719287' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:33:17 np0005466030 nova_compute[230518]: 2025-10-02 12:33:17.555 2 DEBUG oslo_concurrency.processutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:17 np0005466030 nova_compute[230518]: 2025-10-02 12:33:17.598 2 DEBUG oslo_concurrency.processutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:17 np0005466030 systemd[1]: var-lib-containers-storage-overlay-7bba23b9540567d83dd9a5ca70200357f97da436b9841f2d13c96266b6db2d85-merged.mount: Deactivated successfully.
Oct  2 08:33:17 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a-userdata-shm.mount: Deactivated successfully.
Oct  2 08:33:17 np0005466030 nova_compute[230518]: 2025-10-02 12:33:17.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:17 np0005466030 nova_compute[230518]: 2025-10-02 12:33:17.952 2 DEBUG nova.compute.manager [req-84b1376a-1a7d-4bf3-a19c-565b9fd759ba req-3b368269-9845-4e0e-b5bc-6c9c99c2a3c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:33:17 np0005466030 nova_compute[230518]: 2025-10-02 12:33:17.953 2 DEBUG oslo_concurrency.lockutils [req-84b1376a-1a7d-4bf3-a19c-565b9fd759ba req-3b368269-9845-4e0e-b5bc-6c9c99c2a3c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:17 np0005466030 nova_compute[230518]: 2025-10-02 12:33:17.953 2 DEBUG oslo_concurrency.lockutils [req-84b1376a-1a7d-4bf3-a19c-565b9fd759ba req-3b368269-9845-4e0e-b5bc-6c9c99c2a3c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:17 np0005466030 nova_compute[230518]: 2025-10-02 12:33:17.954 2 DEBUG oslo_concurrency.lockutils [req-84b1376a-1a7d-4bf3-a19c-565b9fd759ba req-3b368269-9845-4e0e-b5bc-6c9c99c2a3c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:17 np0005466030 nova_compute[230518]: 2025-10-02 12:33:17.954 2 DEBUG nova.compute.manager [req-84b1376a-1a7d-4bf3-a19c-565b9fd759ba req-3b368269-9845-4e0e-b5bc-6c9c99c2a3c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:33:17 np0005466030 nova_compute[230518]: 2025-10-02 12:33:17.954 2 WARNING nova.compute.manager [req-84b1376a-1a7d-4bf3-a19c-565b9fd759ba req-3b368269-9845-4e0e-b5bc-6c9c99c2a3c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  2 08:33:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:33:18 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3641656552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.136 2 DEBUG oslo_concurrency.processutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.137 2 DEBUG nova.virt.libvirt.vif [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.138 2 DEBUG nova.network.os_vif_util [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.138 2 DEBUG nova.network.os_vif_util [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.140 2 DEBUG nova.objects.instance [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.157 2 DEBUG nova.virt.libvirt.driver [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:33:18 np0005466030 nova_compute[230518]:  <uuid>3e490470-5e33-4140-95c1-367805364c73</uuid>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:  <name>instance-0000004d</name>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerActionsTestJSON-server-1270476772</nova:name>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:33:16</nova:creationTime>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:33:18 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:        <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:        <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:        <nova:port uuid="a3bd0009-d256-4937-bdad-606abfd076e0">
Oct  2 08:33:18 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <entry name="serial">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <entry name="uuid">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk">
Oct  2 08:33:18 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:33:18 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk.config">
Oct  2 08:33:18 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:33:18 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:7b:e8:97"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <target dev="tapa3bd0009-d2"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/console.log" append="off"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <input type="keyboard" bus="usb"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:33:18 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:33:18 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:33:18 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:33:18 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.159 2 DEBUG nova.virt.libvirt.driver [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.160 2 DEBUG nova.virt.libvirt.driver [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.160 2 DEBUG nova.virt.libvirt.vif [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.161 2 DEBUG nova.network.os_vif_util [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.161 2 DEBUG nova.network.os_vif_util [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.161 2 DEBUG os_vif [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.162 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.163 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.165 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3bd0009-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.165 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3bd0009-d2, col_values=(('external_ids', {'iface-id': 'a3bd0009-d256-4937-bdad-606abfd076e0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:e8:97', 'vm-uuid': '3e490470-5e33-4140-95c1-367805364c73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:18 np0005466030 NetworkManager[44960]: <info>  [1759408398.1684] manager: (tapa3bd0009-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.176 2 INFO os_vif [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')#033[00m
Oct  2 08:33:18 np0005466030 NetworkManager[44960]: <info>  [1759408398.3119] manager: (tapa3bd0009-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/167)
Oct  2 08:33:18 np0005466030 kernel: tapa3bd0009-d2: entered promiscuous mode
Oct  2 08:33:18 np0005466030 systemd-udevd[263177]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:33:18Z|00350|binding|INFO|Claiming lport a3bd0009-d256-4937-bdad-606abfd076e0 for this chassis.
Oct  2 08:33:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:33:18Z|00351|binding|INFO|a3bd0009-d256-4937-bdad-606abfd076e0: Claiming fa:16:3e:7b:e8:97 10.100.0.6
Oct  2 08:33:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:18.334 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '9', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:33:18 np0005466030 NetworkManager[44960]: <info>  [1759408398.3371] device (tapa3bd0009-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:33:18 np0005466030 NetworkManager[44960]: <info>  [1759408398.3384] device (tapa3bd0009-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:33:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:33:18Z|00352|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 ovn-installed in OVS
Oct  2 08:33:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:33:18Z|00353|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 up in Southbound
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:18 np0005466030 nova_compute[230518]: 2025-10-02 12:33:18.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:18 np0005466030 systemd-machined[188247]: New machine qemu-41-instance-0000004d.
Oct  2 08:33:18 np0005466030 systemd[1]: Started Virtual Machine qemu-41-instance-0000004d.
Oct  2 08:33:18 np0005466030 podman[263210]: 2025-10-02 12:33:18.399608679 +0000 UTC m=+1.251682693 container cleanup 1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:33:18 np0005466030 systemd[1]: libpod-conmon-1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a.scope: Deactivated successfully.
Oct  2 08:33:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:18.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:18.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:19 np0005466030 podman[263318]: 2025-10-02 12:33:19.40903645 +0000 UTC m=+0.978605872 container remove 1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.417 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cc1831cb-73cb-4501-828d-c6cdf647fbc2]: (4, ('Thu Oct  2 12:33:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a)\n1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a\nThu Oct  2 12:33:18 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a)\n1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.419 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[81190a67-22bc-43bb-a2e4-58c4cbd8847a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.421 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:19 np0005466030 nova_compute[230518]: 2025-10-02 12:33:19.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:19 np0005466030 kernel: tapf011efa4-00: left promiscuous mode
Oct  2 08:33:19 np0005466030 nova_compute[230518]: 2025-10-02 12:33:19.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.457 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[39ea3800-de0f-4115-9a8a-e98781117429]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.477 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f5dfa3-6dfd-49ad-8f00-e4926cdcafd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.478 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5f595ac1-30e5-4c5d-a168-040de695a8c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.495 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a587ec51-c488-476a-b4e6-73a1c8223e8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 623194, 'reachable_time': 34738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263413, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.498 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.498 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[924ea14a-1013-4aeb-9503-1735b16d38b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.499 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.501 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff#033[00m
Oct  2 08:33:19 np0005466030 systemd[1]: run-netns-ovnmeta\x2df011efa4\x2d0132\x2d405c\x2dbb45\x2d09d0a9352eff.mount: Deactivated successfully.
Oct  2 08:33:19 np0005466030 podman[263338]: 2025-10-02 12:33:19.502889293 +0000 UTC m=+0.738364083 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.524 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[430f9acd-3fcf-4304-955c-431ca7fd3900]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.526 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf011efa4-01 in ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.528 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf011efa4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.528 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4aec8d15-11e9-49ad-aef9-26fe3213f811]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.530 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f02c9cf3-066d-4498-ab50-2598a185d6bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.545 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[4515c52b-1bf0-44c3-a2e9-25575b836664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.559 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7c4124e9-95a8-4b43-93da-5ae178506d8b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 podman[263337]: 2025-10-02 12:33:19.561375073 +0000 UTC m=+0.808569582 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.591 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[22e02eb1-bf9b-4146-9279-08352b85481e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.602 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d7113380-dc7e-4cf9-a39a-5dfbb3ad967d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 NetworkManager[44960]: <info>  [1759408399.6071] manager: (tapf011efa4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/168)
Oct  2 08:33:19 np0005466030 systemd-udevd[263432]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.635 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[456c4eb2-911c-4fa2-8e14-09da8886b26a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.638 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d79476ae-9ce2-4219-8aa6-4668a8aebce6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 NetworkManager[44960]: <info>  [1759408399.6637] device (tapf011efa4-00): carrier: link connected
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.667 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ddf92330-926e-4793-a283-604f7016d47a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.681 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[47362246-4cab-431c-84a1-f6e6e4838eb0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626322, 'reachable_time': 34055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263451, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.696 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e18f6c9f-57c3-4cff-ae78-4a7b31cc0949]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:1a7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626322, 'tstamp': 626322}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263452, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.714 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d2beaed5-e74d-4c8a-aff6-3ebb3477ceb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626322, 'reachable_time': 34055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263453, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.746 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[60c39a09-53ce-4624-9237-63b5c4a8a385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.811 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a44eaec1-dffd-452c-a72e-d260f6b95dfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.813 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.813 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.813 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:19 np0005466030 NetworkManager[44960]: <info>  [1759408399.8161] manager: (tapf011efa4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Oct  2 08:33:19 np0005466030 kernel: tapf011efa4-00: entered promiscuous mode
Oct  2 08:33:19 np0005466030 nova_compute[230518]: 2025-10-02 12:33:19.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.823 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:33:19 np0005466030 nova_compute[230518]: 2025-10-02 12:33:19.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:19 np0005466030 ovn_controller[129257]: 2025-10-02T12:33:19Z|00354|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct  2 08:33:19 np0005466030 nova_compute[230518]: 2025-10-02 12:33:19.827 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 3e490470-5e33-4140-95c1-367805364c73 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:33:19 np0005466030 nova_compute[230518]: 2025-10-02 12:33:19.828 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408399.8276675, 3e490470-5e33-4140-95c1-367805364c73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:33:19 np0005466030 nova_compute[230518]: 2025-10-02 12:33:19.828 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:33:19 np0005466030 nova_compute[230518]: 2025-10-02 12:33:19.830 2 DEBUG nova.compute.manager [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.832 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:33:19 np0005466030 nova_compute[230518]: 2025-10-02 12:33:19.834 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance rebooted successfully.#033[00m
Oct  2 08:33:19 np0005466030 nova_compute[230518]: 2025-10-02 12:33:19.834 2 DEBUG nova.compute.manager [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.834 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2cbff876-2342-4ebe-b273-c306be9233c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.835 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-f011efa4-0132-405c-bb45-09d0a9352eff
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID f011efa4-0132-405c-bb45-09d0a9352eff
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:33:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.835 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'env', 'PROCESS_TAG=haproxy-f011efa4-0132-405c-bb45-09d0a9352eff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f011efa4-0132-405c-bb45-09d0a9352eff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:33:19 np0005466030 nova_compute[230518]: 2025-10-02 12:33:19.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:19 np0005466030 nova_compute[230518]: 2025-10-02 12:33:19.877 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:33:19 np0005466030 nova_compute[230518]: 2025-10-02 12:33:19.883 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:33:19 np0005466030 nova_compute[230518]: 2025-10-02 12:33:19.934 2 DEBUG oslo_concurrency.lockutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:19 np0005466030 nova_compute[230518]: 2025-10-02 12:33:19.937 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408399.8282464, 3e490470-5e33-4140-95c1-367805364c73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:33:19 np0005466030 nova_compute[230518]: 2025-10-02 12:33:19.938 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Started (Lifecycle Event)
Oct  2 08:33:19 np0005466030 nova_compute[230518]: 2025-10-02 12:33:19.961 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:33:19 np0005466030 nova_compute[230518]: 2025-10-02 12:33:19.965 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:33:20 np0005466030 nova_compute[230518]: 2025-10-02 12:33:20.107 2 DEBUG nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:33:20 np0005466030 nova_compute[230518]: 2025-10-02 12:33:20.109 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:33:20 np0005466030 nova_compute[230518]: 2025-10-02 12:33:20.109 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:33:20 np0005466030 nova_compute[230518]: 2025-10-02 12:33:20.110 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:33:20 np0005466030 nova_compute[230518]: 2025-10-02 12:33:20.111 2 DEBUG nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:33:20 np0005466030 nova_compute[230518]: 2025-10-02 12:33:20.111 2 WARNING nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state None.
Oct  2 08:33:20 np0005466030 nova_compute[230518]: 2025-10-02 12:33:20.112 2 DEBUG nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:33:20 np0005466030 nova_compute[230518]: 2025-10-02 12:33:20.112 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:33:20 np0005466030 nova_compute[230518]: 2025-10-02 12:33:20.113 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:33:20 np0005466030 nova_compute[230518]: 2025-10-02 12:33:20.113 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:33:20 np0005466030 nova_compute[230518]: 2025-10-02 12:33:20.114 2 DEBUG nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:33:20 np0005466030 nova_compute[230518]: 2025-10-02 12:33:20.115 2 WARNING nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state None.
Oct  2 08:33:20 np0005466030 nova_compute[230518]: 2025-10-02 12:33:20.115 2 DEBUG nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:33:20 np0005466030 nova_compute[230518]: 2025-10-02 12:33:20.116 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:33:20 np0005466030 nova_compute[230518]: 2025-10-02 12:33:20.116 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:33:20 np0005466030 nova_compute[230518]: 2025-10-02 12:33:20.117 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:33:20 np0005466030 nova_compute[230518]: 2025-10-02 12:33:20.118 2 DEBUG nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:33:20 np0005466030 nova_compute[230518]: 2025-10-02 12:33:20.118 2 WARNING nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state None.
Oct  2 08:33:20 np0005466030 podman[263484]: 2025-10-02 12:33:20.161450113 +0000 UTC m=+0.021605681 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:33:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:20.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:20.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:20 np0005466030 podman[263484]: 2025-10-02 12:33:20.890472501 +0000 UTC m=+0.750628049 container create 9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 08:33:21 np0005466030 systemd[1]: Started libpod-conmon-9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515.scope.
Oct  2 08:33:21 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:33:21 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed56eda4117edd4a73c69ba3e2c0e2d327a36dd0522d2b795d433383050ab310/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:33:21 np0005466030 podman[263484]: 2025-10-02 12:33:21.480182936 +0000 UTC m=+1.340338564 container init 9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:33:21 np0005466030 podman[263484]: 2025-10-02 12:33:21.490416358 +0000 UTC m=+1.350571946 container start 9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:33:21 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[263499]: [NOTICE]   (263503) : New worker (263505) forked
Oct  2 08:33:21 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[263499]: [NOTICE]   (263503) : Loading success.
Oct  2 08:33:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:33:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:22.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:33:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:22.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:22 np0005466030 nova_compute[230518]: 2025-10-02 12:33:22.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:33:23 np0005466030 nova_compute[230518]: 2025-10-02 12:33:23.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:33:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:24.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:24.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:25.930 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:33:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:25.931 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:33:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:33:25.932 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:33:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:26.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:26.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:27 np0005466030 nova_compute[230518]: 2025-10-02 12:33:27.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:33:28 np0005466030 nova_compute[230518]: 2025-10-02 12:33:28.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:33:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:28.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:28.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:33:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:30.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:33:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:33:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:30.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:33:31 np0005466030 ovn_controller[129257]: 2025-10-02T12:33:31Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:e8:97 10.100.0.6
Oct  2 08:33:31 np0005466030 podman[263514]: 2025-10-02 12:33:31.848102807 +0000 UTC m=+0.086600726 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:33:31 np0005466030 podman[263515]: 2025-10-02 12:33:31.848109237 +0000 UTC m=+0.082588220 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:33:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:33:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:32.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:33:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:32.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:32 np0005466030 nova_compute[230518]: 2025-10-02 12:33:32.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:33:33 np0005466030 nova_compute[230518]: 2025-10-02 12:33:33.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:33:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:34.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:34.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:36.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:33:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:36.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:33:37 np0005466030 ovn_controller[129257]: 2025-10-02T12:33:37Z|00355|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct  2 08:33:37 np0005466030 nova_compute[230518]: 2025-10-02 12:33:37.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:33:37 np0005466030 nova_compute[230518]: 2025-10-02 12:33:37.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:33:38 np0005466030 nova_compute[230518]: 2025-10-02 12:33:38.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:33:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:38.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:38.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:40.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:33:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:40.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:33:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:42.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:42.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:42 np0005466030 nova_compute[230518]: 2025-10-02 12:33:42.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:33:43 np0005466030 nova_compute[230518]: 2025-10-02 12:33:43.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:33:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:44.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:44.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:46.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:46.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:47 np0005466030 nova_compute[230518]: 2025-10-02 12:33:47.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:33:47 np0005466030 nova_compute[230518]: 2025-10-02 12:33:47.095 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:33:47 np0005466030 nova_compute[230518]: 2025-10-02 12:33:47.096 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:33:47 np0005466030 nova_compute[230518]: 2025-10-02 12:33:47.096 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:33:47 np0005466030 nova_compute[230518]: 2025-10-02 12:33:47.097 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  2 08:33:47 np0005466030 nova_compute[230518]: 2025-10-02 12:33:47.097 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:33:47 np0005466030 nova_compute[230518]: 2025-10-02 12:33:47.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:33:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:33:47 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/84496681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:33:47 np0005466030 nova_compute[230518]: 2025-10-02 12:33:47.535 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:47 np0005466030 nova_compute[230518]: 2025-10-02 12:33:47.653 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:33:47 np0005466030 nova_compute[230518]: 2025-10-02 12:33:47.653 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:33:47 np0005466030 nova_compute[230518]: 2025-10-02 12:33:47.853 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:33:47 np0005466030 nova_compute[230518]: 2025-10-02 12:33:47.854 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4352MB free_disk=20.818740844726562GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:33:47 np0005466030 nova_compute[230518]: 2025-10-02 12:33:47.855 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:47 np0005466030 nova_compute[230518]: 2025-10-02 12:33:47.855 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:47 np0005466030 nova_compute[230518]: 2025-10-02 12:33:47.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:47 np0005466030 nova_compute[230518]: 2025-10-02 12:33:47.929 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3e490470-5e33-4140-95c1-367805364c73 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:33:47 np0005466030 nova_compute[230518]: 2025-10-02 12:33:47.930 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:33:47 np0005466030 nova_compute[230518]: 2025-10-02 12:33:47.930 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:33:47 np0005466030 nova_compute[230518]: 2025-10-02 12:33:47.983 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:33:48 np0005466030 nova_compute[230518]: 2025-10-02 12:33:48.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:33:48 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2927055532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:33:48 np0005466030 nova_compute[230518]: 2025-10-02 12:33:48.466 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:33:48 np0005466030 nova_compute[230518]: 2025-10-02 12:33:48.475 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:33:48 np0005466030 nova_compute[230518]: 2025-10-02 12:33:48.501 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:33:48 np0005466030 nova_compute[230518]: 2025-10-02 12:33:48.552 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:33:48 np0005466030 nova_compute[230518]: 2025-10-02 12:33:48.553 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:33:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:48.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:33:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:48.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:33:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:49 np0005466030 podman[263600]: 2025-10-02 12:33:49.850596796 +0000 UTC m=+0.091781689 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 08:33:49 np0005466030 podman[263599]: 2025-10-02 12:33:49.872234106 +0000 UTC m=+0.117366624 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:33:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:50.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:50.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:51 np0005466030 nova_compute[230518]: 2025-10-02 12:33:51.550 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:51 np0005466030 nova_compute[230518]: 2025-10-02 12:33:51.550 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:51 np0005466030 nova_compute[230518]: 2025-10-02 12:33:51.551 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:51 np0005466030 nova_compute[230518]: 2025-10-02 12:33:51.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:52 np0005466030 nova_compute[230518]: 2025-10-02 12:33:52.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:52 np0005466030 nova_compute[230518]: 2025-10-02 12:33:52.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:33:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:52.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:52.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:52 np0005466030 nova_compute[230518]: 2025-10-02 12:33:52.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:53 np0005466030 nova_compute[230518]: 2025-10-02 12:33:53.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:54 np0005466030 nova_compute[230518]: 2025-10-02 12:33:54.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:54 np0005466030 nova_compute[230518]: 2025-10-02 12:33:54.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:33:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:54.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:33:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:33:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:54.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:33:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:56.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:33:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:56.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:33:57 np0005466030 nova_compute[230518]: 2025-10-02 12:33:57.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:57 np0005466030 nova_compute[230518]: 2025-10-02 12:33:57.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:58 np0005466030 nova_compute[230518]: 2025-10-02 12:33:58.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:33:58 np0005466030 nova_compute[230518]: 2025-10-02 12:33:58.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:33:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:58.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:33:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:33:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:58.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:33:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:33:59 np0005466030 nova_compute[230518]: 2025-10-02 12:33:59.657 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:59 np0005466030 nova_compute[230518]: 2025-10-02 12:33:59.657 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:59 np0005466030 nova_compute[230518]: 2025-10-02 12:33:59.694 2 DEBUG nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:33:59 np0005466030 nova_compute[230518]: 2025-10-02 12:33:59.805 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:33:59 np0005466030 nova_compute[230518]: 2025-10-02 12:33:59.806 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:33:59 np0005466030 nova_compute[230518]: 2025-10-02 12:33:59.818 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:33:59 np0005466030 nova_compute[230518]: 2025-10-02 12:33:59.818 2 INFO nova.compute.claims [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:33:59 np0005466030 nova_compute[230518]: 2025-10-02 12:33:59.981 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:34:00 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1077859049' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:34:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:34:00 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1077859049' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:34:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:34:00 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1544477566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.417 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.424 2 DEBUG nova.compute.provider_tree [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.444 2 DEBUG nova.scheduler.client.report [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.472 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.473 2 DEBUG nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.530 2 DEBUG nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.531 2 DEBUG nova.network.neutron [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.560 2 INFO nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.603 2 DEBUG nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:34:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:00.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:00.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.720 2 DEBUG nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.721 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.722 2 INFO nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Creating image(s)
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.747 2 DEBUG nova.storage.rbd_utils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.777 2 DEBUG nova.storage.rbd_utils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.807 2 DEBUG nova.storage.rbd_utils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.812 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.893 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.894 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.895 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.895 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.924 2 DEBUG nova.storage.rbd_utils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:34:00 np0005466030 nova_compute[230518]: 2025-10-02 12:34:00.929 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:34:01 np0005466030 nova_compute[230518]: 2025-10-02 12:34:01.563 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:34:01 np0005466030 nova_compute[230518]: 2025-10-02 12:34:01.591 2 DEBUG nova.policy [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '71d69bc37f274fad8a0b06c0b96f2a64', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:34:01 np0005466030 nova_compute[230518]: 2025-10-02 12:34:01.625 2 DEBUG nova.storage.rbd_utils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] resizing rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:34:01 np0005466030 nova_compute[230518]: 2025-10-02 12:34:01.836 2 DEBUG nova.objects.instance [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'migration_context' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:34:01 np0005466030 nova_compute[230518]: 2025-10-02 12:34:01.857 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:34:01 np0005466030 nova_compute[230518]: 2025-10-02 12:34:01.858 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Ensure instance console log exists: /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:34:01 np0005466030 nova_compute[230518]: 2025-10-02 12:34:01.858 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:34:01 np0005466030 nova_compute[230518]: 2025-10-02 12:34:01.859 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:34:01 np0005466030 nova_compute[230518]: 2025-10-02 12:34:01.859 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:34:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:02.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:02.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:02 np0005466030 podman[263832]: 2025-10-02 12:34:02.833399101 +0000 UTC m=+0.074727362 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:34:02 np0005466030 podman[263831]: 2025-10-02 12:34:02.833768672 +0000 UTC m=+0.075190296 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3)
Oct  2 08:34:02 np0005466030 nova_compute[230518]: 2025-10-02 12:34:02.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:03 np0005466030 nova_compute[230518]: 2025-10-02 12:34:03.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:34:03 np0005466030 nova_compute[230518]: 2025-10-02 12:34:03.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:34:03 np0005466030 nova_compute[230518]: 2025-10-02 12:34:03.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 08:34:03 np0005466030 nova_compute[230518]: 2025-10-02 12:34:03.071 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct  2 08:34:03 np0005466030 nova_compute[230518]: 2025-10-02 12:34:03.076 2 DEBUG nova.network.neutron [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Successfully created port: 21510dd4-b155-46ed-bdb2-dc17a9149353 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:34:03 np0005466030 nova_compute[230518]: 2025-10-02 12:34:03.225 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:34:03 np0005466030 nova_compute[230518]: 2025-10-02 12:34:03.225 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:34:03 np0005466030 nova_compute[230518]: 2025-10-02 12:34:03.225 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  2 08:34:03 np0005466030 nova_compute[230518]: 2025-10-02 12:34:03.226 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:34:03 np0005466030 nova_compute[230518]: 2025-10-02 12:34:03.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:04.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:04.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:05 np0005466030 nova_compute[230518]: 2025-10-02 12:34:05.019 2 DEBUG nova.network.neutron [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Successfully updated port: 21510dd4-b155-46ed-bdb2-dc17a9149353 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 08:34:05 np0005466030 nova_compute[230518]: 2025-10-02 12:34:05.035 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-418e3157-f0a7-42ec-812b-2a4a2ad00991" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:34:05 np0005466030 nova_compute[230518]: 2025-10-02 12:34:05.036 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-418e3157-f0a7-42ec-812b-2a4a2ad00991" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:34:05 np0005466030 nova_compute[230518]: 2025-10-02 12:34:05.036 2 DEBUG nova.network.neutron [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:34:05 np0005466030 nova_compute[230518]: 2025-10-02 12:34:05.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:05.064 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:34:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:05.065 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:34:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:34:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3517646832' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:34:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:34:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3517646832' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:34:05 np0005466030 nova_compute[230518]: 2025-10-02 12:34:05.149 2 DEBUG nova.compute.manager [req-dcd8a741-6b0a-4db8-bd27-6156a15b064d req-ddd7836a-2471-4a2b-a3f6-6bba3b0949e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-changed-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:34:05 np0005466030 nova_compute[230518]: 2025-10-02 12:34:05.149 2 DEBUG nova.compute.manager [req-dcd8a741-6b0a-4db8-bd27-6156a15b064d req-ddd7836a-2471-4a2b-a3f6-6bba3b0949e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Refreshing instance network info cache due to event network-changed-21510dd4-b155-46ed-bdb2-dc17a9149353. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:34:05 np0005466030 nova_compute[230518]: 2025-10-02 12:34:05.150 2 DEBUG oslo_concurrency.lockutils [req-dcd8a741-6b0a-4db8-bd27-6156a15b064d req-ddd7836a-2471-4a2b-a3f6-6bba3b0949e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-418e3157-f0a7-42ec-812b-2a4a2ad00991" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:34:05 np0005466030 nova_compute[230518]: 2025-10-02 12:34:05.253 2 DEBUG nova.network.neutron [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:34:05 np0005466030 nova_compute[230518]: 2025-10-02 12:34:05.342 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:34:05 np0005466030 nova_compute[230518]: 2025-10-02 12:34:05.380 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:34:05 np0005466030 nova_compute[230518]: 2025-10-02 12:34:05.381 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.589 2 DEBUG nova.network.neutron [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Updating instance_info_cache with network_info: [{"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.615 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-418e3157-f0a7-42ec-812b-2a4a2ad00991" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.616 2 DEBUG nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance network_info: |[{"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.616 2 DEBUG oslo_concurrency.lockutils [req-dcd8a741-6b0a-4db8-bd27-6156a15b064d req-ddd7836a-2471-4a2b-a3f6-6bba3b0949e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-418e3157-f0a7-42ec-812b-2a4a2ad00991" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.617 2 DEBUG nova.network.neutron [req-dcd8a741-6b0a-4db8-bd27-6156a15b064d req-ddd7836a-2471-4a2b-a3f6-6bba3b0949e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Refreshing network info cache for port 21510dd4-b155-46ed-bdb2-dc17a9149353 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.621 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Start _get_guest_xml network_info=[{"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.626 2 WARNING nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.631 2 DEBUG nova.virt.libvirt.host [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.632 2 DEBUG nova.virt.libvirt.host [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.638 2 DEBUG nova.virt.libvirt.host [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.639 2 DEBUG nova.virt.libvirt.host [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.640 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.641 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.641 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.642 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.642 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.642 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.643 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.643 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.643 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.644 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.644 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.644 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:34:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:06.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:06 np0005466030 nova_compute[230518]: 2025-10-02 12:34:06.648 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:34:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:06.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:34:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:34:07 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/364482634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.059 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.096 2 DEBUG nova.storage.rbd_utils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.100 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:07 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:07Z|00356|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:07 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:34:07 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:34:07 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:34:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:34:07 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2879726199' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.778 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.678s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.780 2 DEBUG nova.virt.libvirt.vif [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:33:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-456092929',display_name='tempest-tempest.common.compute-instance-456092929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-456092929',id=84,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-y84teqni',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:00Z,user_data=None,user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=418e3157-f0a7-42ec-812b-2a4a2ad00991,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.782 2 DEBUG nova.network.os_vif_util [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.784 2 DEBUG nova.network.os_vif_util [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.786 2 DEBUG nova.objects.instance [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.807 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:34:07 np0005466030 nova_compute[230518]:  <uuid>418e3157-f0a7-42ec-812b-2a4a2ad00991</uuid>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:  <name>instance-00000054</name>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <nova:name>tempest-tempest.common.compute-instance-456092929</nova:name>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:34:06</nova:creationTime>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:34:07 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:        <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:        <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:        <nova:port uuid="21510dd4-b155-46ed-bdb2-dc17a9149353">
Oct  2 08:34:07 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <entry name="serial">418e3157-f0a7-42ec-812b-2a4a2ad00991</entry>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <entry name="uuid">418e3157-f0a7-42ec-812b-2a4a2ad00991</entry>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/418e3157-f0a7-42ec-812b-2a4a2ad00991_disk">
Oct  2 08:34:07 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:34:07 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config">
Oct  2 08:34:07 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:34:07 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:44:7a:ec"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <target dev="tap21510dd4-b1"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/console.log" append="off"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:34:07 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:34:07 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:34:07 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:34:07 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.808 2 DEBUG nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Preparing to wait for external event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.808 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.809 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.809 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.809 2 DEBUG nova.virt.libvirt.vif [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:33:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-456092929',display_name='tempest-tempest.common.compute-instance-456092929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-456092929',id=84,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-y84teqni',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerAct
ionsTestJSON-226762235-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:00Z,user_data=None,user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=418e3157-f0a7-42ec-812b-2a4a2ad00991,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.810 2 DEBUG nova.network.os_vif_util [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.810 2 DEBUG nova.network.os_vif_util [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.810 2 DEBUG os_vif [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.811 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.811 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.817 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21510dd4-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.818 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21510dd4-b1, col_values=(('external_ids', {'iface-id': '21510dd4-b155-46ed-bdb2-dc17a9149353', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:7a:ec', 'vm-uuid': '418e3157-f0a7-42ec-812b-2a4a2ad00991'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:07 np0005466030 NetworkManager[44960]: <info>  [1759408447.8210] manager: (tap21510dd4-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.833 2 INFO os_vif [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1')#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.899 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.900 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.900 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No VIF found with MAC fa:16:3e:44:7a:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.900 2 INFO nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Using config drive#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.937 2 DEBUG nova.storage.rbd_utils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.947 2 DEBUG nova.network.neutron [req-dcd8a741-6b0a-4db8-bd27-6156a15b064d req-ddd7836a-2471-4a2b-a3f6-6bba3b0949e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Updated VIF entry in instance network info cache for port 21510dd4-b155-46ed-bdb2-dc17a9149353. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.948 2 DEBUG nova.network.neutron [req-dcd8a741-6b0a-4db8-bd27-6156a15b064d req-ddd7836a-2471-4a2b-a3f6-6bba3b0949e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Updating instance_info_cache with network_info: [{"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:34:07 np0005466030 nova_compute[230518]: 2025-10-02 12:34:07.965 2 DEBUG oslo_concurrency.lockutils [req-dcd8a741-6b0a-4db8-bd27-6156a15b064d req-ddd7836a-2471-4a2b-a3f6-6bba3b0949e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-418e3157-f0a7-42ec-812b-2a4a2ad00991" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:34:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:08.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:08.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:08 np0005466030 nova_compute[230518]: 2025-10-02 12:34:08.727 2 INFO nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Creating config drive at /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config#033[00m
Oct  2 08:34:08 np0005466030 nova_compute[230518]: 2025-10-02 12:34:08.737 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8_fqho6z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:34:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:34:08 np0005466030 nova_compute[230518]: 2025-10-02 12:34:08.885 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8_fqho6z" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:08 np0005466030 nova_compute[230518]: 2025-10-02 12:34:08.914 2 DEBUG nova.storage.rbd_utils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:08 np0005466030 nova_compute[230518]: 2025-10-02 12:34:08.918 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:09.067 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:10 np0005466030 nova_compute[230518]: 2025-10-02 12:34:10.000 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:10 np0005466030 nova_compute[230518]: 2025-10-02 12:34:10.001 2 INFO nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Deleting local config drive /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config because it was imported into RBD.#033[00m
Oct  2 08:34:10 np0005466030 kernel: tap21510dd4-b1: entered promiscuous mode
Oct  2 08:34:10 np0005466030 NetworkManager[44960]: <info>  [1759408450.0440] manager: (tap21510dd4-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/171)
Oct  2 08:34:10 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:10Z|00357|binding|INFO|Claiming lport 21510dd4-b155-46ed-bdb2-dc17a9149353 for this chassis.
Oct  2 08:34:10 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:10Z|00358|binding|INFO|21510dd4-b155-46ed-bdb2-dc17a9149353: Claiming fa:16:3e:44:7a:ec 10.100.0.11
Oct  2 08:34:10 np0005466030 nova_compute[230518]: 2025-10-02 12:34:10.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.052 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:7a:ec 10.100.0.11'], port_security=['fa:16:3e:44:7a:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '418e3157-f0a7-42ec-812b-2a4a2ad00991', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '2', 'neutron:security_group_ids': '08293210-e9a4-4bb5-8fa1-174de1dd6444', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=21510dd4-b155-46ed-bdb2-dc17a9149353) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:34:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.053 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 21510dd4-b155-46ed-bdb2-dc17a9149353 in datapath f011efa4-0132-405c-bb45-09d0a9352eff bound to our chassis#033[00m
Oct  2 08:34:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.054 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff#033[00m
Oct  2 08:34:10 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:10Z|00359|binding|INFO|Setting lport 21510dd4-b155-46ed-bdb2-dc17a9149353 ovn-installed in OVS
Oct  2 08:34:10 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:10Z|00360|binding|INFO|Setting lport 21510dd4-b155-46ed-bdb2-dc17a9149353 up in Southbound
Oct  2 08:34:10 np0005466030 nova_compute[230518]: 2025-10-02 12:34:10.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:10 np0005466030 nova_compute[230518]: 2025-10-02 12:34:10.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.070 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[12108672-a353-4604-83e1-a2aea4290b6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:10 np0005466030 systemd-machined[188247]: New machine qemu-42-instance-00000054.
Oct  2 08:34:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.101 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea04dd3-e87e-4146-ba21-2b4bc85eff6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:10 np0005466030 systemd[1]: Started Virtual Machine qemu-42-instance-00000054.
Oct  2 08:34:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.104 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b6438a0a-0228-4095-b54a-344f70299e5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:10 np0005466030 systemd-udevd[264141]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:34:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.129 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[23a29938-6912-4807-8310-cf40766c3bae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:10 np0005466030 NetworkManager[44960]: <info>  [1759408450.1337] device (tap21510dd4-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:34:10 np0005466030 NetworkManager[44960]: <info>  [1759408450.1350] device (tap21510dd4-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:34:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.152 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a03a64e6-39d4-4bc8-bfbf-47dc20769ac5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626322, 'reachable_time': 30206, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264144, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.168 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e156c51a-fc82-4627-9ceb-202698c6a49f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf011efa4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626333, 'tstamp': 626333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264150, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf011efa4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626336, 'tstamp': 626336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264150, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.169 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:10 np0005466030 nova_compute[230518]: 2025-10-02 12:34:10.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.175 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:10 np0005466030 nova_compute[230518]: 2025-10-02 12:34:10.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.175 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.175 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.176 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:10.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:10.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.097 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408451.09674, 418e3157-f0a7-42ec-812b-2a4a2ad00991 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.098 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] VM Started (Lifecycle Event)#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.122 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.126 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408451.096857, 418e3157-f0a7-42ec-812b-2a4a2ad00991 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.127 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.141 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.144 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.163 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.830 2 DEBUG nova.compute.manager [req-86493620-dff8-4e51-8e6f-b27fe85f1060 req-626d6099-1454-46c5-8e22-ff933430c677 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.831 2 DEBUG oslo_concurrency.lockutils [req-86493620-dff8-4e51-8e6f-b27fe85f1060 req-626d6099-1454-46c5-8e22-ff933430c677 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.831 2 DEBUG oslo_concurrency.lockutils [req-86493620-dff8-4e51-8e6f-b27fe85f1060 req-626d6099-1454-46c5-8e22-ff933430c677 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.831 2 DEBUG oslo_concurrency.lockutils [req-86493620-dff8-4e51-8e6f-b27fe85f1060 req-626d6099-1454-46c5-8e22-ff933430c677 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.831 2 DEBUG nova.compute.manager [req-86493620-dff8-4e51-8e6f-b27fe85f1060 req-626d6099-1454-46c5-8e22-ff933430c677 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Processing event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.832 2 DEBUG nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.835 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408451.8355336, 418e3157-f0a7-42ec-812b-2a4a2ad00991 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.836 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.838 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.840 2 INFO nova.virt.libvirt.driver [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance spawned successfully.#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.841 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.861 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.865 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.873 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.873 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.874 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.874 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.875 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.875 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.898 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.936 2 INFO nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Took 11.22 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:34:11 np0005466030 nova_compute[230518]: 2025-10-02 12:34:11.937 2 DEBUG nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:12 np0005466030 nova_compute[230518]: 2025-10-02 12:34:12.035 2 INFO nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Took 12.27 seconds to build instance.#033[00m
Oct  2 08:34:12 np0005466030 nova_compute[230518]: 2025-10-02 12:34:12.057 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:12.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:12.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:12 np0005466030 nova_compute[230518]: 2025-10-02 12:34:12.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:12 np0005466030 nova_compute[230518]: 2025-10-02 12:34:12.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:13 np0005466030 nova_compute[230518]: 2025-10-02 12:34:13.936 2 DEBUG nova.compute.manager [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:13 np0005466030 nova_compute[230518]: 2025-10-02 12:34:13.937 2 DEBUG oslo_concurrency.lockutils [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:13 np0005466030 nova_compute[230518]: 2025-10-02 12:34:13.937 2 DEBUG oslo_concurrency.lockutils [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:13 np0005466030 nova_compute[230518]: 2025-10-02 12:34:13.938 2 DEBUG oslo_concurrency.lockutils [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:13 np0005466030 nova_compute[230518]: 2025-10-02 12:34:13.938 2 DEBUG nova.compute.manager [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] No waiting events found dispatching network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:13 np0005466030 nova_compute[230518]: 2025-10-02 12:34:13.939 2 WARNING nova.compute.manager [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received unexpected event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:34:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:34:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:14.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:34:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:14.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:15 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:15Z|00361|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct  2 08:34:15 np0005466030 nova_compute[230518]: 2025-10-02 12:34:15.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:15 np0005466030 nova_compute[230518]: 2025-10-02 12:34:15.735 2 INFO nova.compute.manager [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Rebuilding instance#033[00m
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.805634) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408455805667, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2400, "num_deletes": 253, "total_data_size": 5666634, "memory_usage": 5747856, "flush_reason": "Manual Compaction"}
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408455823238, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 3718074, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39082, "largest_seqno": 41477, "table_properties": {"data_size": 3708324, "index_size": 6116, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20779, "raw_average_key_size": 20, "raw_value_size": 3688661, "raw_average_value_size": 3670, "num_data_blocks": 266, "num_entries": 1005, "num_filter_entries": 1005, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408249, "oldest_key_time": 1759408249, "file_creation_time": 1759408455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 17742 microseconds, and 6678 cpu microseconds.
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.823375) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 3718074 bytes OK
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.823417) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.828014) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.828039) EVENT_LOG_v1 {"time_micros": 1759408455828033, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.828057) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 5655917, prev total WAL file size 5655917, number of live WAL files 2.
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.829477) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(3630KB)], [75(9949KB)]
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408455829502, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 13906052, "oldest_snapshot_seqno": -1}
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 6664 keys, 11949846 bytes, temperature: kUnknown
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408455888435, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 11949846, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11902638, "index_size": 29432, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 170777, "raw_average_key_size": 25, "raw_value_size": 11780671, "raw_average_value_size": 1767, "num_data_blocks": 1177, "num_entries": 6664, "num_filter_entries": 6664, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759408455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.888666) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 11949846 bytes
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.892410) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.6 rd, 202.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.7 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 7188, records dropped: 524 output_compression: NoCompression
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.892426) EVENT_LOG_v1 {"time_micros": 1759408455892418, "job": 46, "event": "compaction_finished", "compaction_time_micros": 59020, "compaction_time_cpu_micros": 25072, "output_level": 6, "num_output_files": 1, "total_output_size": 11949846, "num_input_records": 7188, "num_output_records": 6664, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408455893118, "job": 46, "event": "table_file_deletion", "file_number": 77}
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408455894860, "job": 46, "event": "table_file_deletion", "file_number": 75}
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.829414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.894970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.894975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.894977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.894979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:34:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.894980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:34:15 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:15Z|00362|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct  2 08:34:15 np0005466030 nova_compute[230518]: 2025-10-02 12:34:15.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:16 np0005466030 nova_compute[230518]: 2025-10-02 12:34:16.058 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:16 np0005466030 nova_compute[230518]: 2025-10-02 12:34:16.072 2 DEBUG nova.compute.manager [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:16 np0005466030 nova_compute[230518]: 2025-10-02 12:34:16.117 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_requests' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:16 np0005466030 nova_compute[230518]: 2025-10-02 12:34:16.133 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:16 np0005466030 nova_compute[230518]: 2025-10-02 12:34:16.143 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:16 np0005466030 nova_compute[230518]: 2025-10-02 12:34:16.154 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'migration_context' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:16 np0005466030 nova_compute[230518]: 2025-10-02 12:34:16.164 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:34:16 np0005466030 nova_compute[230518]: 2025-10-02 12:34:16.167 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:34:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:16.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:16.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:34:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:34:17 np0005466030 nova_compute[230518]: 2025-10-02 12:34:17.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:17 np0005466030 nova_compute[230518]: 2025-10-02 12:34:17.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:34:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:18.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:34:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:18.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:20.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:34:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:20.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:34:20 np0005466030 podman[264245]: 2025-10-02 12:34:20.820603455 +0000 UTC m=+0.064198661 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct  2 08:34:20 np0005466030 podman[264244]: 2025-10-02 12:34:20.850858607 +0000 UTC m=+0.097588191 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 08:34:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:22.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:22.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:22 np0005466030 nova_compute[230518]: 2025-10-02 12:34:22.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:22 np0005466030 nova_compute[230518]: 2025-10-02 12:34:22.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:23 np0005466030 nova_compute[230518]: 2025-10-02 12:34:23.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:24 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:24Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:44:7a:ec 10.100.0.11
Oct  2 08:34:24 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:24Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:44:7a:ec 10.100.0.11
Oct  2 08:34:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:24.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:24.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:25.931 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:25.931 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:25.931 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:26 np0005466030 nova_compute[230518]: 2025-10-02 12:34:26.209 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:34:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:26.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:26.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:27 np0005466030 nova_compute[230518]: 2025-10-02 12:34:27.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:27 np0005466030 nova_compute[230518]: 2025-10-02 12:34:27.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:27 np0005466030 nova_compute[230518]: 2025-10-02 12:34:27.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:28 np0005466030 kernel: tap21510dd4-b1 (unregistering): left promiscuous mode
Oct  2 08:34:28 np0005466030 NetworkManager[44960]: <info>  [1759408468.5789] device (tap21510dd4-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:34:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:28Z|00363|binding|INFO|Releasing lport 21510dd4-b155-46ed-bdb2-dc17a9149353 from this chassis (sb_readonly=0)
Oct  2 08:34:28 np0005466030 nova_compute[230518]: 2025-10-02 12:34:28.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:28Z|00364|binding|INFO|Setting lport 21510dd4-b155-46ed-bdb2-dc17a9149353 down in Southbound
Oct  2 08:34:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:28Z|00365|binding|INFO|Removing iface tap21510dd4-b1 ovn-installed in OVS
Oct  2 08:34:28 np0005466030 nova_compute[230518]: 2025-10-02 12:34:28.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.615 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:7a:ec 10.100.0.11'], port_security=['fa:16:3e:44:7a:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '418e3157-f0a7-42ec-812b-2a4a2ad00991', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '4', 'neutron:security_group_ids': '08293210-e9a4-4bb5-8fa1-174de1dd6444', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=21510dd4-b155-46ed-bdb2-dc17a9149353) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:34:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.616 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 21510dd4-b155-46ed-bdb2-dc17a9149353 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis#033[00m
Oct  2 08:34:28 np0005466030 nova_compute[230518]: 2025-10-02 12:34:28.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.618 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff#033[00m
Oct  2 08:34:28 np0005466030 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000054.scope: Deactivated successfully.
Oct  2 08:34:28 np0005466030 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000054.scope: Consumed 13.477s CPU time.
Oct  2 08:34:28 np0005466030 systemd-machined[188247]: Machine qemu-42-instance-00000054 terminated.
Oct  2 08:34:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.647 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3aefaa27-ff6c-4f03-a3db-ae1b34164888]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:28.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.687 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[03810d03-c3a3-4ed1-8fc4-07935f2fdd0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.690 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0fb049a7-599f-4c7b-880a-1951d50cf2e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:28.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.729 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb28950-5779-4243-bd41-487c653ccf8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.756 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a25bdc7a-982f-4cd5-abc8-33e8f8647604]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626322, 'reachable_time': 30206, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264298, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.780 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2d8cccc2-0880-4163-9429-cc65974c4a64]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf011efa4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626333, 'tstamp': 626333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264299, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf011efa4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626336, 'tstamp': 626336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264299, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.783 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:28 np0005466030 nova_compute[230518]: 2025-10-02 12:34:28.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:28 np0005466030 nova_compute[230518]: 2025-10-02 12:34:28.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.793 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.793 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.794 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.794 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:29 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:29Z|00366|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct  2 08:34:29 np0005466030 nova_compute[230518]: 2025-10-02 12:34:29.224 2 INFO nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance shutdown successfully after 13 seconds.#033[00m
Oct  2 08:34:29 np0005466030 nova_compute[230518]: 2025-10-02 12:34:29.231 2 INFO nova.virt.libvirt.driver [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance destroyed successfully.#033[00m
Oct  2 08:34:29 np0005466030 nova_compute[230518]: 2025-10-02 12:34:29.243 2 INFO nova.virt.libvirt.driver [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance destroyed successfully.#033[00m
Oct  2 08:34:29 np0005466030 nova_compute[230518]: 2025-10-02 12:34:29.244 2 DEBUG nova.virt.libvirt.vif [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:33:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-456092929',display_name='tempest-ServerActionsTestJSON-server-1986633563',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-456092929',id=84,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:34:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-y84teqni',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:15Z,user_data=None,user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=418e3157-f0a7-42ec-812b-2a4a2ad00991,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:34:29 np0005466030 nova_compute[230518]: 2025-10-02 12:34:29.244 2 DEBUG nova.network.os_vif_util [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:29 np0005466030 nova_compute[230518]: 2025-10-02 12:34:29.245 2 DEBUG nova.network.os_vif_util [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:34:29 np0005466030 nova_compute[230518]: 2025-10-02 12:34:29.245 2 DEBUG os_vif [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:34:29 np0005466030 nova_compute[230518]: 2025-10-02 12:34:29.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:29 np0005466030 nova_compute[230518]: 2025-10-02 12:34:29.247 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21510dd4-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:29 np0005466030 nova_compute[230518]: 2025-10-02 12:34:29.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:34:29 np0005466030 nova_compute[230518]: 2025-10-02 12:34:29.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:29 np0005466030 nova_compute[230518]: 2025-10-02 12:34:29.280 2 INFO os_vif [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1')#033[00m
Oct  2 08:34:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:29 np0005466030 nova_compute[230518]: 2025-10-02 12:34:29.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:34:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:30.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:34:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:30.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:31 np0005466030 nova_compute[230518]: 2025-10-02 12:34:31.381 2 INFO nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Deleting instance files /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991_del#033[00m
Oct  2 08:34:31 np0005466030 nova_compute[230518]: 2025-10-02 12:34:31.382 2 INFO nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Deletion of /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991_del complete#033[00m
Oct  2 08:34:31 np0005466030 nova_compute[230518]: 2025-10-02 12:34:31.565 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:34:31 np0005466030 nova_compute[230518]: 2025-10-02 12:34:31.566 2 INFO nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Creating image(s)#033[00m
Oct  2 08:34:31 np0005466030 nova_compute[230518]: 2025-10-02 12:34:31.597 2 DEBUG nova.storage.rbd_utils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:31 np0005466030 nova_compute[230518]: 2025-10-02 12:34:31.629 2 DEBUG nova.storage.rbd_utils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:31 np0005466030 nova_compute[230518]: 2025-10-02 12:34:31.658 2 DEBUG nova.storage.rbd_utils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:31 np0005466030 nova_compute[230518]: 2025-10-02 12:34:31.663 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:31 np0005466030 nova_compute[230518]: 2025-10-02 12:34:31.734 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:31 np0005466030 nova_compute[230518]: 2025-10-02 12:34:31.735 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:31 np0005466030 nova_compute[230518]: 2025-10-02 12:34:31.737 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:31 np0005466030 nova_compute[230518]: 2025-10-02 12:34:31.737 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:31 np0005466030 nova_compute[230518]: 2025-10-02 12:34:31.773 2 DEBUG nova.storage.rbd_utils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:31 np0005466030 nova_compute[230518]: 2025-10-02 12:34:31.777 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:31 np0005466030 nova_compute[230518]: 2025-10-02 12:34:31.924 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:31 np0005466030 nova_compute[230518]: 2025-10-02 12:34:31.925 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.035 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.102 2 DEBUG nova.compute.manager [req-846d2b66-c4be-45b9-ac3d-f6d91c2b3048 req-84527f8f-69b9-416c-af2b-5594546486e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-unplugged-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.103 2 DEBUG oslo_concurrency.lockutils [req-846d2b66-c4be-45b9-ac3d-f6d91c2b3048 req-84527f8f-69b9-416c-af2b-5594546486e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.103 2 DEBUG oslo_concurrency.lockutils [req-846d2b66-c4be-45b9-ac3d-f6d91c2b3048 req-84527f8f-69b9-416c-af2b-5594546486e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.104 2 DEBUG oslo_concurrency.lockutils [req-846d2b66-c4be-45b9-ac3d-f6d91c2b3048 req-84527f8f-69b9-416c-af2b-5594546486e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.104 2 DEBUG nova.compute.manager [req-846d2b66-c4be-45b9-ac3d-f6d91c2b3048 req-84527f8f-69b9-416c-af2b-5594546486e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] No waiting events found dispatching network-vif-unplugged-21510dd4-b155-46ed-bdb2-dc17a9149353 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.104 2 WARNING nova.compute.manager [req-846d2b66-c4be-45b9-ac3d-f6d91c2b3048 req-84527f8f-69b9-416c-af2b-5594546486e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received unexpected event network-vif-unplugged-21510dd4-b155-46ed-bdb2-dc17a9149353 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.141 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.141 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.148 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.149 2 INFO nova.compute.claims [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.255 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.318 2 DEBUG nova.storage.rbd_utils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] resizing rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.344 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.432 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.432 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Ensure instance console log exists: /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.433 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.433 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.433 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.435 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Start _get_guest_xml network_info=[{"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.443 2 WARNING nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.450 2 DEBUG nova.virt.libvirt.host [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.450 2 DEBUG nova.virt.libvirt.host [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.454 2 DEBUG nova.virt.libvirt.host [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.454 2 DEBUG nova.virt.libvirt.host [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.455 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.455 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.456 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.456 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.456 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.457 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.457 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.457 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.457 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.458 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.458 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.458 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.458 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.476 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.005999946s ======
Oct  2 08:34:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:32.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005999946s
Oct  2 08:34:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:32.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.796 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.804 2 DEBUG nova.compute.provider_tree [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.843 2 DEBUG nova.scheduler.client.report [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.910 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.910 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.938 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.966 2 DEBUG nova.storage.rbd_utils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:32 np0005466030 nova_compute[230518]: 2025-10-02 12:34:32.971 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.009 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.009 2 DEBUG nova.network.neutron [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.040 2 INFO nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.059 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.155 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.158 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.158 2 INFO nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Creating image(s)#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.203 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.253 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.288 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.294 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.344 2 DEBUG nova.policy [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34a9da53e0cc446593d0cea2f498c53e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ed58e2bfccb04353b29ae652cfed3546', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:34:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:34:33 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4020740269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.392 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.393 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.394 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.395 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.431 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.437 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.475 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.479 2 DEBUG nova.virt.libvirt.vif [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:33:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-456092929',display_name='tempest-ServerActionsTestJSON-server-1986633563',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-456092929',id=84,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:34:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-y84teqni',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:31Z,user_data=None,user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=418e3157-f0a7-42ec-812b-2a4a2ad00991,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.480 2 DEBUG nova.network.os_vif_util [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.481 2 DEBUG nova.network.os_vif_util [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.484 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:34:33 np0005466030 nova_compute[230518]:  <uuid>418e3157-f0a7-42ec-812b-2a4a2ad00991</uuid>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:  <name>instance-00000054</name>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerActionsTestJSON-server-1986633563</nova:name>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:34:32</nova:creationTime>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:34:33 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:        <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:        <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="52ef509e-0e22-464e-93c9-3ddcf574cd64"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:        <nova:port uuid="21510dd4-b155-46ed-bdb2-dc17a9149353">
Oct  2 08:34:33 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <entry name="serial">418e3157-f0a7-42ec-812b-2a4a2ad00991</entry>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <entry name="uuid">418e3157-f0a7-42ec-812b-2a4a2ad00991</entry>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/418e3157-f0a7-42ec-812b-2a4a2ad00991_disk">
Oct  2 08:34:33 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:34:33 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config">
Oct  2 08:34:33 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:34:33 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:44:7a:ec"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <target dev="tap21510dd4-b1"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/console.log" append="off"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:34:33 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:34:33 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:34:33 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:34:33 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.486 2 DEBUG nova.compute.manager [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Preparing to wait for external event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.486 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.487 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.487 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.488 2 DEBUG nova.virt.libvirt.vif [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:33:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-456092929',display_name='tempest-ServerActionsTestJSON-server-1986633563',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-456092929',id=84,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:34:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-y84teqni',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='te
mpest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:31Z,user_data=None,user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=418e3157-f0a7-42ec-812b-2a4a2ad00991,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.488 2 DEBUG nova.network.os_vif_util [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.489 2 DEBUG nova.network.os_vif_util [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.489 2 DEBUG os_vif [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.491 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.492 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.495 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21510dd4-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.496 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21510dd4-b1, col_values=(('external_ids', {'iface-id': '21510dd4-b155-46ed-bdb2-dc17a9149353', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:7a:ec', 'vm-uuid': '418e3157-f0a7-42ec-812b-2a4a2ad00991'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:33 np0005466030 NetworkManager[44960]: <info>  [1759408473.4998] manager: (tap21510dd4-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/172)
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.508 2 INFO os_vif [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1')#033[00m
Oct  2 08:34:33 np0005466030 podman[264676]: 2025-10-02 12:34:33.630359247 +0000 UTC m=+0.066918836 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:34:33 np0005466030 podman[264675]: 2025-10-02 12:34:33.630334696 +0000 UTC m=+0.075405853 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.798 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.798 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.799 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No VIF found with MAC fa:16:3e:44:7a:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.799 2 INFO nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Using config drive#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.834 2 DEBUG nova.storage.rbd_utils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.852 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:33 np0005466030 nova_compute[230518]: 2025-10-02 12:34:33.883 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'keypairs' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:34 np0005466030 nova_compute[230518]: 2025-10-02 12:34:34.636 2 DEBUG nova.compute.manager [req-0b705de1-1c60-473e-a7b1-c0a9c594673d req-8ad5110b-9afc-4ed3-954c-59c0f48c56a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:34 np0005466030 nova_compute[230518]: 2025-10-02 12:34:34.637 2 DEBUG oslo_concurrency.lockutils [req-0b705de1-1c60-473e-a7b1-c0a9c594673d req-8ad5110b-9afc-4ed3-954c-59c0f48c56a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:34 np0005466030 nova_compute[230518]: 2025-10-02 12:34:34.638 2 DEBUG oslo_concurrency.lockutils [req-0b705de1-1c60-473e-a7b1-c0a9c594673d req-8ad5110b-9afc-4ed3-954c-59c0f48c56a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:34 np0005466030 nova_compute[230518]: 2025-10-02 12:34:34.638 2 DEBUG oslo_concurrency.lockutils [req-0b705de1-1c60-473e-a7b1-c0a9c594673d req-8ad5110b-9afc-4ed3-954c-59c0f48c56a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:34 np0005466030 nova_compute[230518]: 2025-10-02 12:34:34.639 2 DEBUG nova.compute.manager [req-0b705de1-1c60-473e-a7b1-c0a9c594673d req-8ad5110b-9afc-4ed3-954c-59c0f48c56a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Processing event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:34:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:34.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:34.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:34 np0005466030 nova_compute[230518]: 2025-10-02 12:34:34.770 2 INFO nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Creating config drive at /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config#033[00m
Oct  2 08:34:34 np0005466030 nova_compute[230518]: 2025-10-02 12:34:34.780 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi742ghzt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:34 np0005466030 nova_compute[230518]: 2025-10-02 12:34:34.941 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi742ghzt" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:34 np0005466030 nova_compute[230518]: 2025-10-02 12:34:34.979 2 DEBUG nova.storage.rbd_utils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:34 np0005466030 nova_compute[230518]: 2025-10-02 12:34:34.985 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:35 np0005466030 nova_compute[230518]: 2025-10-02 12:34:35.028 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:35 np0005466030 nova_compute[230518]: 2025-10-02 12:34:35.112 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] resizing rbd image 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:34:35 np0005466030 nova_compute[230518]: 2025-10-02 12:34:35.597 2 DEBUG nova.network.neutron [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Successfully created port: bace8310-2635-4b16-b54b-7b961bf7c42a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:34:35 np0005466030 nova_compute[230518]: 2025-10-02 12:34:35.756 2 DEBUG nova.objects.instance [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lazy-loading 'migration_context' on Instance uuid 51a18f7c-ed1b-4500-9d74-fb924f62b6d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:35 np0005466030 nova_compute[230518]: 2025-10-02 12:34:35.773 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:34:35 np0005466030 nova_compute[230518]: 2025-10-02 12:34:35.774 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Ensure instance console log exists: /var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:34:35 np0005466030 nova_compute[230518]: 2025-10-02 12:34:35.775 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:35 np0005466030 nova_compute[230518]: 2025-10-02 12:34:35.776 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:35 np0005466030 nova_compute[230518]: 2025-10-02 12:34:35.776 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:36 np0005466030 nova_compute[230518]: 2025-10-02 12:34:36.068 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:36 np0005466030 nova_compute[230518]: 2025-10-02 12:34:36.069 2 INFO nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Deleting local config drive /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config because it was imported into RBD.#033[00m
Oct  2 08:34:36 np0005466030 kernel: tap21510dd4-b1: entered promiscuous mode
Oct  2 08:34:36 np0005466030 NetworkManager[44960]: <info>  [1759408476.1324] manager: (tap21510dd4-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/173)
Oct  2 08:34:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:36Z|00367|binding|INFO|Claiming lport 21510dd4-b155-46ed-bdb2-dc17a9149353 for this chassis.
Oct  2 08:34:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:36Z|00368|binding|INFO|21510dd4-b155-46ed-bdb2-dc17a9149353: Claiming fa:16:3e:44:7a:ec 10.100.0.11
Oct  2 08:34:36 np0005466030 nova_compute[230518]: 2025-10-02 12:34:36.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:36Z|00369|binding|INFO|Setting lport 21510dd4-b155-46ed-bdb2-dc17a9149353 ovn-installed in OVS
Oct  2 08:34:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:36Z|00370|binding|INFO|Setting lport 21510dd4-b155-46ed-bdb2-dc17a9149353 up in Southbound
Oct  2 08:34:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.152 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:7a:ec 10.100.0.11'], port_security=['fa:16:3e:44:7a:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '418e3157-f0a7-42ec-812b-2a4a2ad00991', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '5', 'neutron:security_group_ids': '08293210-e9a4-4bb5-8fa1-174de1dd6444', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=21510dd4-b155-46ed-bdb2-dc17a9149353) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:34:36 np0005466030 nova_compute[230518]: 2025-10-02 12:34:36.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.155 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 21510dd4-b155-46ed-bdb2-dc17a9149353 in datapath f011efa4-0132-405c-bb45-09d0a9352eff bound to our chassis#033[00m
Oct  2 08:34:36 np0005466030 nova_compute[230518]: 2025-10-02 12:34:36.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.158 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff#033[00m
Oct  2 08:34:36 np0005466030 systemd-udevd[264859]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:34:36 np0005466030 systemd-machined[188247]: New machine qemu-43-instance-00000054.
Oct  2 08:34:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.178 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bff68d40-a01a-4d8a-9aa5-a20fe141f327]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:36 np0005466030 NetworkManager[44960]: <info>  [1759408476.1829] device (tap21510dd4-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:34:36 np0005466030 NetworkManager[44960]: <info>  [1759408476.1835] device (tap21510dd4-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:34:36 np0005466030 systemd[1]: Started Virtual Machine qemu-43-instance-00000054.
Oct  2 08:34:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.215 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[fe335a14-a8ca-4318-8ec4-4d8f5b5ee4ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.221 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[87949f7f-8f16-4fb5-b9b2-580389e5d909]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.248 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[3567e327-f5c3-4600-9959-169ff82ad91c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.273 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3924ee01-e979-4614-a642-7d576bc8989d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626322, 'reachable_time': 30206, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264870, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.292 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d7dbcc-8d59-478d-b30e-fe15c4361bfc]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf011efa4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626333, 'tstamp': 626333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264873, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf011efa4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626336, 'tstamp': 626336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264873, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.293 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:36 np0005466030 nova_compute[230518]: 2025-10-02 12:34:36.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.297 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.297 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.298 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.298 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:36.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:36.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:36 np0005466030 nova_compute[230518]: 2025-10-02 12:34:36.924 2 DEBUG nova.compute.manager [req-4ffbc810-eeda-41df-8195-39bf8adb552c req-499ffcba-60c9-4812-a294-e7804fcdb2c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:36 np0005466030 nova_compute[230518]: 2025-10-02 12:34:36.924 2 DEBUG oslo_concurrency.lockutils [req-4ffbc810-eeda-41df-8195-39bf8adb552c req-499ffcba-60c9-4812-a294-e7804fcdb2c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:36 np0005466030 nova_compute[230518]: 2025-10-02 12:34:36.924 2 DEBUG oslo_concurrency.lockutils [req-4ffbc810-eeda-41df-8195-39bf8adb552c req-499ffcba-60c9-4812-a294-e7804fcdb2c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:36 np0005466030 nova_compute[230518]: 2025-10-02 12:34:36.925 2 DEBUG oslo_concurrency.lockutils [req-4ffbc810-eeda-41df-8195-39bf8adb552c req-499ffcba-60c9-4812-a294-e7804fcdb2c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:36 np0005466030 nova_compute[230518]: 2025-10-02 12:34:36.925 2 DEBUG nova.compute.manager [req-4ffbc810-eeda-41df-8195-39bf8adb552c req-499ffcba-60c9-4812-a294-e7804fcdb2c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] No waiting events found dispatching network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:36 np0005466030 nova_compute[230518]: 2025-10-02 12:34:36.925 2 WARNING nova.compute.manager [req-4ffbc810-eeda-41df-8195-39bf8adb552c req-499ffcba-60c9-4812-a294-e7804fcdb2c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received unexpected event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.150 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 418e3157-f0a7-42ec-812b-2a4a2ad00991 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.150 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408477.1500473, 418e3157-f0a7-42ec-812b-2a4a2ad00991 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.151 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] VM Started (Lifecycle Event)#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.152 2 DEBUG nova.compute.manager [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.154 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.157 2 INFO nova.virt.libvirt.driver [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance spawned successfully.#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.158 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.179 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.184 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.203 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.204 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.204 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.204 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.205 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.205 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.211 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.211 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408477.1508596, 418e3157-f0a7-42ec-812b-2a4a2ad00991 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.211 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.272 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.275 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408477.1542885, 418e3157-f0a7-42ec-812b-2a4a2ad00991 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.275 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.337 2 DEBUG nova.compute.manager [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.342 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.350 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.396 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.458 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.458 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.458 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.575 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:37 np0005466030 nova_compute[230518]: 2025-10-02 12:34:37.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:38 np0005466030 nova_compute[230518]: 2025-10-02 12:34:38.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:38.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:38.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:39 np0005466030 nova_compute[230518]: 2025-10-02 12:34:39.160 2 DEBUG nova.network.neutron [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Successfully updated port: bace8310-2635-4b16-b54b-7b961bf7c42a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:34:39 np0005466030 nova_compute[230518]: 2025-10-02 12:34:39.206 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "refresh_cache-51a18f7c-ed1b-4500-9d74-fb924f62b6d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:34:39 np0005466030 nova_compute[230518]: 2025-10-02 12:34:39.206 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquired lock "refresh_cache-51a18f7c-ed1b-4500-9d74-fb924f62b6d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:34:39 np0005466030 nova_compute[230518]: 2025-10-02 12:34:39.206 2 DEBUG nova.network.neutron [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:34:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:39 np0005466030 nova_compute[230518]: 2025-10-02 12:34:39.302 2 DEBUG nova.compute.manager [req-4002de06-8371-4a14-a143-84310beb6ec1 req-a4501520-dfad-426d-8d25-e41ce8727d38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:39 np0005466030 nova_compute[230518]: 2025-10-02 12:34:39.303 2 DEBUG oslo_concurrency.lockutils [req-4002de06-8371-4a14-a143-84310beb6ec1 req-a4501520-dfad-426d-8d25-e41ce8727d38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:39 np0005466030 nova_compute[230518]: 2025-10-02 12:34:39.303 2 DEBUG oslo_concurrency.lockutils [req-4002de06-8371-4a14-a143-84310beb6ec1 req-a4501520-dfad-426d-8d25-e41ce8727d38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:39 np0005466030 nova_compute[230518]: 2025-10-02 12:34:39.303 2 DEBUG oslo_concurrency.lockutils [req-4002de06-8371-4a14-a143-84310beb6ec1 req-a4501520-dfad-426d-8d25-e41ce8727d38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:39 np0005466030 nova_compute[230518]: 2025-10-02 12:34:39.304 2 DEBUG nova.compute.manager [req-4002de06-8371-4a14-a143-84310beb6ec1 req-a4501520-dfad-426d-8d25-e41ce8727d38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] No waiting events found dispatching network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:39 np0005466030 nova_compute[230518]: 2025-10-02 12:34:39.304 2 WARNING nova.compute.manager [req-4002de06-8371-4a14-a143-84310beb6ec1 req-a4501520-dfad-426d-8d25-e41ce8727d38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received unexpected event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:34:39 np0005466030 nova_compute[230518]: 2025-10-02 12:34:39.423 2 DEBUG nova.network.neutron [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:34:39 np0005466030 nova_compute[230518]: 2025-10-02 12:34:39.718 2 DEBUG nova.compute.manager [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Received event network-changed-bace8310-2635-4b16-b54b-7b961bf7c42a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:39 np0005466030 nova_compute[230518]: 2025-10-02 12:34:39.719 2 DEBUG nova.compute.manager [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Refreshing instance network info cache due to event network-changed-bace8310-2635-4b16-b54b-7b961bf7c42a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:34:39 np0005466030 nova_compute[230518]: 2025-10-02 12:34:39.719 2 DEBUG oslo_concurrency.lockutils [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-51a18f7c-ed1b-4500-9d74-fb924f62b6d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:34:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:34:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:40.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:34:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:34:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:40.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.123 2 DEBUG nova.network.neutron [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Updating instance_info_cache with network_info: [{"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.176 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Releasing lock "refresh_cache-51a18f7c-ed1b-4500-9d74-fb924f62b6d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.177 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Instance network_info: |[{"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.177 2 DEBUG oslo_concurrency.lockutils [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-51a18f7c-ed1b-4500-9d74-fb924f62b6d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.178 2 DEBUG nova.network.neutron [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Refreshing network info cache for port bace8310-2635-4b16-b54b-7b961bf7c42a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.181 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Start _get_guest_xml network_info=[{"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.184 2 WARNING nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.189 2 DEBUG nova.virt.libvirt.host [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.190 2 DEBUG nova.virt.libvirt.host [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.193 2 DEBUG nova.virt.libvirt.host [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.193 2 DEBUG nova.virt.libvirt.host [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.195 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.195 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.196 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.196 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.196 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.197 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.197 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.197 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.198 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.198 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.198 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.198 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.201 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.515 2 DEBUG oslo_concurrency.lockutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.517 2 DEBUG oslo_concurrency.lockutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.517 2 DEBUG oslo_concurrency.lockutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.518 2 DEBUG oslo_concurrency.lockutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.518 2 DEBUG oslo_concurrency.lockutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.520 2 INFO nova.compute.manager [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Terminating instance#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.521 2 DEBUG nova.compute.manager [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:34:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:34:42 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1368115747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.697 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:42.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.723 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.727 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:42.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:42 np0005466030 kernel: tap21510dd4-b1 (unregistering): left promiscuous mode
Oct  2 08:34:42 np0005466030 NetworkManager[44960]: <info>  [1759408482.7486] device (tap21510dd4-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:34:42 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:42Z|00371|binding|INFO|Releasing lport 21510dd4-b155-46ed-bdb2-dc17a9149353 from this chassis (sb_readonly=0)
Oct  2 08:34:42 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:42Z|00372|binding|INFO|Setting lport 21510dd4-b155-46ed-bdb2-dc17a9149353 down in Southbound
Oct  2 08:34:42 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:42Z|00373|binding|INFO|Removing iface tap21510dd4-b1 ovn-installed in OVS
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.776 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:7a:ec 10.100.0.11'], port_security=['fa:16:3e:44:7a:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '418e3157-f0a7-42ec-812b-2a4a2ad00991', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '6', 'neutron:security_group_ids': '08293210-e9a4-4bb5-8fa1-174de1dd6444', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=21510dd4-b155-46ed-bdb2-dc17a9149353) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:34:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.777 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 21510dd4-b155-46ed-bdb2-dc17a9149353 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.779 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff#033[00m
Oct  2 08:34:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.800 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f077f6ea-15fc-4bbc-b3d7-33019f5ec09f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:42 np0005466030 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000054.scope: Deactivated successfully.
Oct  2 08:34:42 np0005466030 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000054.scope: Consumed 6.314s CPU time.
Oct  2 08:34:42 np0005466030 systemd-machined[188247]: Machine qemu-43-instance-00000054 terminated.
Oct  2 08:34:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.826 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d3a08a-62be-4e31-b30a-fdf1fa1eafb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.831 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4627f6c4-6546-4bd9-9f80-8782e48cbf44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.860 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7f931e8c-77e1-4fa3-9037-5155f7dd4d02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.876 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[78508f67-b00b-4266-aafb-fc2079b96f5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626322, 'reachable_time': 30206, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264988, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.897 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6b8ac19f-cdc6-4548-9edc-147e9e02edd5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf011efa4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626333, 'tstamp': 626333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264989, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf011efa4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626336, 'tstamp': 626336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264989, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.899 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.906 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.906 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.906 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.906 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.952 2 INFO nova.virt.libvirt.driver [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance destroyed successfully.#033[00m
Oct  2 08:34:42 np0005466030 nova_compute[230518]: 2025-10-02 12:34:42.952 2 DEBUG nova.objects.instance [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.003 2 DEBUG nova.virt.libvirt.vif [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:33:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-456092929',display_name='tempest-ServerActionsTestJSON-server-1986633563',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-456092929',id=84,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:34:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-y84teqni',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',i
mage_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:34:37Z,user_data=None,user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=418e3157-f0a7-42ec-812b-2a4a2ad00991,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.003 2 DEBUG nova.network.os_vif_util [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.004 2 DEBUG nova.network.os_vif_util [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.004 2 DEBUG os_vif [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.005 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21510dd4-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.010 2 INFO os_vif [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1')#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.196 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.197 2 DEBUG nova.virt.libvirt.vif [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1446553806',display_name='tempest-tempest.common.compute-instance-1446553806-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1446553806-2',id=89,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed58e2bfccb04353b29ae652cfed3546',ramdisk_id='',reservation_id='r-onpz3ikv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1074010337',owner_user_name='tempest-MultipleCr
eateTestJSON-1074010337-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:33Z,user_data=None,user_id='34a9da53e0cc446593d0cea2f498c53e',uuid=51a18f7c-ed1b-4500-9d74-fb924f62b6d9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.198 2 DEBUG nova.network.os_vif_util [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converting VIF {"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.198 2 DEBUG nova.network.os_vif_util [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e5:bf,bridge_name='br-int',has_traffic_filtering=True,id=bace8310-2635-4b16-b54b-7b961bf7c42a,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbace8310-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.200 2 DEBUG nova.objects.instance [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lazy-loading 'pci_devices' on Instance uuid 51a18f7c-ed1b-4500-9d74-fb924f62b6d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.236 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:34:43 np0005466030 nova_compute[230518]:  <uuid>51a18f7c-ed1b-4500-9d74-fb924f62b6d9</uuid>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:  <name>instance-00000059</name>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <nova:name>tempest-tempest.common.compute-instance-1446553806-2</nova:name>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:34:42</nova:creationTime>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:34:43 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:        <nova:user uuid="34a9da53e0cc446593d0cea2f498c53e">tempest-MultipleCreateTestJSON-1074010337-project-member</nova:user>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:        <nova:project uuid="ed58e2bfccb04353b29ae652cfed3546">tempest-MultipleCreateTestJSON-1074010337</nova:project>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:        <nova:port uuid="bace8310-2635-4b16-b54b-7b961bf7c42a">
Oct  2 08:34:43 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <entry name="serial">51a18f7c-ed1b-4500-9d74-fb924f62b6d9</entry>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <entry name="uuid">51a18f7c-ed1b-4500-9d74-fb924f62b6d9</entry>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk">
Oct  2 08:34:43 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:34:43 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk.config">
Oct  2 08:34:43 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:34:43 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:9d:e5:bf"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <target dev="tapbace8310-26"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9/console.log" append="off"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:34:43 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:34:43 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:34:43 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:34:43 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.237 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Preparing to wait for external event network-vif-plugged-bace8310-2635-4b16-b54b-7b961bf7c42a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.237 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.237 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.237 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.238 2 DEBUG nova.virt.libvirt.vif [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1446553806',display_name='tempest-tempest.common.compute-instance-1446553806-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1446553806-2',id=89,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed58e2bfccb04353b29ae652cfed3546',ramdisk_id='',reservation_id='r-onpz3ikv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1074010337',owner_user_name='tempest-
MultipleCreateTestJSON-1074010337-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:33Z,user_data=None,user_id='34a9da53e0cc446593d0cea2f498c53e',uuid=51a18f7c-ed1b-4500-9d74-fb924f62b6d9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.238 2 DEBUG nova.network.os_vif_util [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converting VIF {"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.239 2 DEBUG nova.network.os_vif_util [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e5:bf,bridge_name='br-int',has_traffic_filtering=True,id=bace8310-2635-4b16-b54b-7b961bf7c42a,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbace8310-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.239 2 DEBUG os_vif [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e5:bf,bridge_name='br-int',has_traffic_filtering=True,id=bace8310-2635-4b16-b54b-7b961bf7c42a,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbace8310-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.240 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.244 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbace8310-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.245 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbace8310-26, col_values=(('external_ids', {'iface-id': 'bace8310-2635-4b16-b54b-7b961bf7c42a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:e5:bf', 'vm-uuid': '51a18f7c-ed1b-4500-9d74-fb924f62b6d9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:43 np0005466030 NetworkManager[44960]: <info>  [1759408483.2476] manager: (tapbace8310-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/174)
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.252 2 INFO os_vif [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e5:bf,bridge_name='br-int',has_traffic_filtering=True,id=bace8310-2635-4b16-b54b-7b961bf7c42a,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbace8310-26')#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.526 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.526 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.527 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] No VIF found with MAC fa:16:3e:9d:e5:bf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.527 2 INFO nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Using config drive#033[00m
Oct  2 08:34:43 np0005466030 nova_compute[230518]: 2025-10-02 12:34:43.624 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:44 np0005466030 nova_compute[230518]: 2025-10-02 12:34:44.150 2 INFO nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Creating config drive at /var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9/disk.config#033[00m
Oct  2 08:34:44 np0005466030 nova_compute[230518]: 2025-10-02 12:34:44.162 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfwgj0p1f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:44 np0005466030 nova_compute[230518]: 2025-10-02 12:34:44.325 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfwgj0p1f" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:44 np0005466030 nova_compute[230518]: 2025-10-02 12:34:44.367 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:34:44 np0005466030 nova_compute[230518]: 2025-10-02 12:34:44.373 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9/disk.config 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:44 np0005466030 nova_compute[230518]: 2025-10-02 12:34:44.417 2 DEBUG nova.network.neutron [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Updated VIF entry in instance network info cache for port bace8310-2635-4b16-b54b-7b961bf7c42a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:34:44 np0005466030 nova_compute[230518]: 2025-10-02 12:34:44.418 2 DEBUG nova.network.neutron [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Updating instance_info_cache with network_info: [{"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:34:44 np0005466030 nova_compute[230518]: 2025-10-02 12:34:44.500 2 DEBUG oslo_concurrency.lockutils [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-51a18f7c-ed1b-4500-9d74-fb924f62b6d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:34:44 np0005466030 nova_compute[230518]: 2025-10-02 12:34:44.620 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9/disk.config 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:44 np0005466030 nova_compute[230518]: 2025-10-02 12:34:44.620 2 INFO nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Deleting local config drive /var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9/disk.config because it was imported into RBD.#033[00m
Oct  2 08:34:44 np0005466030 NetworkManager[44960]: <info>  [1759408484.6659] manager: (tapbace8310-26): new Tun device (/org/freedesktop/NetworkManager/Devices/175)
Oct  2 08:34:44 np0005466030 kernel: tapbace8310-26: entered promiscuous mode
Oct  2 08:34:44 np0005466030 systemd-udevd[264961]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:34:44 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:44Z|00374|binding|INFO|Claiming lport bace8310-2635-4b16-b54b-7b961bf7c42a for this chassis.
Oct  2 08:34:44 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:44Z|00375|binding|INFO|bace8310-2635-4b16-b54b-7b961bf7c42a: Claiming fa:16:3e:9d:e5:bf 10.100.0.6
Oct  2 08:34:44 np0005466030 nova_compute[230518]: 2025-10-02 12:34:44.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:44 np0005466030 NetworkManager[44960]: <info>  [1759408484.6792] device (tapbace8310-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:34:44 np0005466030 NetworkManager[44960]: <info>  [1759408484.6826] device (tapbace8310-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:34:44 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:44Z|00376|binding|INFO|Setting lport bace8310-2635-4b16-b54b-7b961bf7c42a ovn-installed in OVS
Oct  2 08:34:44 np0005466030 nova_compute[230518]: 2025-10-02 12:34:44.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:44 np0005466030 nova_compute[230518]: 2025-10-02 12:34:44.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:44 np0005466030 systemd-machined[188247]: New machine qemu-44-instance-00000059.
Oct  2 08:34:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:44.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.711 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:e5:bf 10.100.0.6'], port_security=['fa:16:3e:9d:e5:bf 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '51a18f7c-ed1b-4500-9d74-fb924f62b6d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed58e2bfccb04353b29ae652cfed3546', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8afb1137-daba-41cb-976b-5cc3e880408c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1c0c736-25fb-4965-98a7-04a85ae45126, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bace8310-2635-4b16-b54b-7b961bf7c42a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:34:44 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:44Z|00377|binding|INFO|Setting lport bace8310-2635-4b16-b54b-7b961bf7c42a up in Southbound
Oct  2 08:34:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.712 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bace8310-2635-4b16-b54b-7b961bf7c42a in datapath 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 bound to our chassis#033[00m
Oct  2 08:34:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.713 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7#033[00m
Oct  2 08:34:44 np0005466030 systemd[1]: Started Virtual Machine qemu-44-instance-00000059.
Oct  2 08:34:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.726 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[69e41897-cfea-4f27-bf79-ab5d990d3530]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.727 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap885ece2c-b1 in ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:34:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:34:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:44.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:34:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.729 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap885ece2c-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:34:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.729 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8542feff-5cfe-42b5-af95-14b634f87067]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.730 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cf2daf04-20c4-4699-a98f-089e8d3c5254]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.741 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[76c8b135-08e1-4fcd-8ba1-183be84a96db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.763 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e54524c7-ae34-44e2-b93e-a33e00093281]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.786 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c34762-00d6-461f-9000-7143e54d9364]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:44 np0005466030 NetworkManager[44960]: <info>  [1759408484.7934] manager: (tap885ece2c-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/176)
Oct  2 08:34:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.793 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7df47bec-a8c7-4977-8b0b-7f679a05ed81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.827 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc6736e-517c-4905-af1b-db40d5aed0ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.829 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[de99e60d-6584-4954-afa2-6109166aefb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:44 np0005466030 NetworkManager[44960]: <info>  [1759408484.8560] device (tap885ece2c-b0): carrier: link connected
Oct  2 08:34:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.861 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5516d72a-7247-45df-b580-bc9370e95806]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.882 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[da2e829a-86dd-4417-8d88-c91690f6181f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap885ece2c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:58:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634841, 'reachable_time': 32773, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265125, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.899 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[84d31e4b-6a66-437e-8498-cd156fe0f6d7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:5893'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634841, 'tstamp': 634841}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265126, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.925 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9b9bbe8b-b256-44b1-9628-1c0ff7b857fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap885ece2c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:58:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634841, 'reachable_time': 32773, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265127, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.968 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3db6baaf-e7e2-4913-beb0-075eb430f4a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:45.042 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8d7c9420-7809-47e0-b7d5-21c3fce496e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:45.044 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap885ece2c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:45.044 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:45.044 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap885ece2c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:45 np0005466030 nova_compute[230518]: 2025-10-02 12:34:45.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:45 np0005466030 NetworkManager[44960]: <info>  [1759408485.0468] manager: (tap885ece2c-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/177)
Oct  2 08:34:45 np0005466030 kernel: tap885ece2c-b0: entered promiscuous mode
Oct  2 08:34:45 np0005466030 nova_compute[230518]: 2025-10-02 12:34:45.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:45.053 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap885ece2c-b0, col_values=(('external_ids', {'iface-id': '24355553-27f6-4ebd-99c0-4f861ce0339d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:45 np0005466030 nova_compute[230518]: 2025-10-02 12:34:45.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:45 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:45Z|00378|binding|INFO|Releasing lport 24355553-27f6-4ebd-99c0-4f861ce0339d from this chassis (sb_readonly=0)
Oct  2 08:34:45 np0005466030 nova_compute[230518]: 2025-10-02 12:34:45.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:45 np0005466030 nova_compute[230518]: 2025-10-02 12:34:45.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:45.070 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:45.071 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[92494d14-e85e-44eb-b42e-33e21e0919c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:45.071 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.pid.haproxy
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:34:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:45.072 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'env', 'PROCESS_TAG=haproxy-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:34:45 np0005466030 podman[265159]: 2025-10-02 12:34:45.482561002 +0000 UTC m=+0.059661159 container create 534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 08:34:45 np0005466030 systemd[1]: Started libpod-conmon-534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6.scope.
Oct  2 08:34:45 np0005466030 podman[265159]: 2025-10-02 12:34:45.453852739 +0000 UTC m=+0.030952926 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:34:45 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:34:45 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883c67a361fe0247ad395efa4ec52619f249067861dcbb2d49f5ac611aaf3b3d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:34:45 np0005466030 podman[265159]: 2025-10-02 12:34:45.594493133 +0000 UTC m=+0.171593370 container init 534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:34:45 np0005466030 podman[265159]: 2025-10-02 12:34:45.600340547 +0000 UTC m=+0.177440704 container start 534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  2 08:34:45 np0005466030 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[265195]: [NOTICE]   (265216) : New worker (265221) forked
Oct  2 08:34:45 np0005466030 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[265195]: [NOTICE]   (265216) : Loading success.
Oct  2 08:34:45 np0005466030 nova_compute[230518]: 2025-10-02 12:34:45.649 2 INFO nova.virt.libvirt.driver [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Deleting instance files /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991_del#033[00m
Oct  2 08:34:45 np0005466030 nova_compute[230518]: 2025-10-02 12:34:45.650 2 INFO nova.virt.libvirt.driver [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Deletion of /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991_del complete#033[00m
Oct  2 08:34:45 np0005466030 nova_compute[230518]: 2025-10-02 12:34:45.701 2 INFO nova.compute.manager [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Took 3.18 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:34:45 np0005466030 nova_compute[230518]: 2025-10-02 12:34:45.702 2 DEBUG oslo.service.loopingcall [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:34:45 np0005466030 nova_compute[230518]: 2025-10-02 12:34:45.703 2 DEBUG nova.compute.manager [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:34:45 np0005466030 nova_compute[230518]: 2025-10-02 12:34:45.703 2 DEBUG nova.network.neutron [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.090 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408486.0905228, 51a18f7c-ed1b-4500-9d74-fb924f62b6d9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.091 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] VM Started (Lifecycle Event)#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.101 2 DEBUG nova.compute.manager [req-3bc6a35e-9458-441b-af0f-7cff06a28659 req-d0471a5d-f8e2-4a30-8bd8-c048124c1383 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Received event network-vif-plugged-bace8310-2635-4b16-b54b-7b961bf7c42a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.101 2 DEBUG oslo_concurrency.lockutils [req-3bc6a35e-9458-441b-af0f-7cff06a28659 req-d0471a5d-f8e2-4a30-8bd8-c048124c1383 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.102 2 DEBUG oslo_concurrency.lockutils [req-3bc6a35e-9458-441b-af0f-7cff06a28659 req-d0471a5d-f8e2-4a30-8bd8-c048124c1383 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.102 2 DEBUG oslo_concurrency.lockutils [req-3bc6a35e-9458-441b-af0f-7cff06a28659 req-d0471a5d-f8e2-4a30-8bd8-c048124c1383 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.103 2 DEBUG nova.compute.manager [req-3bc6a35e-9458-441b-af0f-7cff06a28659 req-d0471a5d-f8e2-4a30-8bd8-c048124c1383 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Processing event network-vif-plugged-bace8310-2635-4b16-b54b-7b961bf7c42a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.103 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.107 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.113 2 INFO nova.virt.libvirt.driver [-] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Instance spawned successfully.#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.113 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.126 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.130 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.236 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.236 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408486.0906394, 51a18f7c-ed1b-4500-9d74-fb924f62b6d9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.237 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.242 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.242 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.243 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.244 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.244 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.245 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.315 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.319 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408486.1075497, 51a18f7c-ed1b-4500-9d74-fb924f62b6d9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.319 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.376 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.381 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.444 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.457 2 INFO nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Took 13.30 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.458 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.590 2 INFO nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Took 14.48 seconds to build instance.#033[00m
Oct  2 08:34:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:34:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:46.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.707 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:46.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.940 2 DEBUG nova.compute.manager [req-b432a614-3577-440b-bfb7-36a4e21a8dc7 req-8b6bafdf-97f0-408d-9ecc-57833e6c9d90 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-unplugged-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.941 2 DEBUG oslo_concurrency.lockutils [req-b432a614-3577-440b-bfb7-36a4e21a8dc7 req-8b6bafdf-97f0-408d-9ecc-57833e6c9d90 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.942 2 DEBUG oslo_concurrency.lockutils [req-b432a614-3577-440b-bfb7-36a4e21a8dc7 req-8b6bafdf-97f0-408d-9ecc-57833e6c9d90 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.942 2 DEBUG oslo_concurrency.lockutils [req-b432a614-3577-440b-bfb7-36a4e21a8dc7 req-8b6bafdf-97f0-408d-9ecc-57833e6c9d90 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.943 2 DEBUG nova.compute.manager [req-b432a614-3577-440b-bfb7-36a4e21a8dc7 req-8b6bafdf-97f0-408d-9ecc-57833e6c9d90 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] No waiting events found dispatching network-vif-unplugged-21510dd4-b155-46ed-bdb2-dc17a9149353 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:46 np0005466030 nova_compute[230518]: 2025-10-02 12:34:46.944 2 DEBUG nova.compute.manager [req-b432a614-3577-440b-bfb7-36a4e21a8dc7 req-8b6bafdf-97f0-408d-9ecc-57833e6c9d90 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-unplugged-21510dd4-b155-46ed-bdb2-dc17a9149353 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.073 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.073 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.074 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.074 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.074 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.137 2 DEBUG nova.network.neutron [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.170 2 INFO nova.compute.manager [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Took 1.47 seconds to deallocate network for instance.#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.222 2 DEBUG oslo_concurrency.lockutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.223 2 DEBUG oslo_concurrency.lockutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.229 2 DEBUG nova.compute.manager [req-cac75106-6c89-4be0-ad61-54b55e7ac879 req-bdd8c23c-4270-4714-b345-e481ae837f25 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-deleted-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.297 2 DEBUG oslo_concurrency.processutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:34:47 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2709834565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.696 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:34:47 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2943096415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.744 2 DEBUG oslo_concurrency.processutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.751 2 DEBUG nova.compute.provider_tree [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.769 2 DEBUG nova.scheduler.client.report [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.780 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.780 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.783 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.784 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.796 2 DEBUG oslo_concurrency.lockutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.872 2 INFO nova.scheduler.client.report [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Deleted allocations for instance 418e3157-f0a7-42ec-812b-2a4a2ad00991#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.954 2 DEBUG oslo_concurrency.lockutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.974 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.975 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4192MB free_disk=20.828514099121094GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.976 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:47 np0005466030 nova_compute[230518]: 2025-10-02 12:34:47.976 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.054 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3e490470-5e33-4140-95c1-367805364c73 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.054 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 51a18f7c-ed1b-4500-9d74-fb924f62b6d9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.055 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.055 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.113 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.213 2 DEBUG nova.compute.manager [req-bb097057-a10f-49ec-831b-a8d17392ddba req-d00181c3-db0c-4d1c-b64d-4fd7ae298b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Received event network-vif-plugged-bace8310-2635-4b16-b54b-7b961bf7c42a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.214 2 DEBUG oslo_concurrency.lockutils [req-bb097057-a10f-49ec-831b-a8d17392ddba req-d00181c3-db0c-4d1c-b64d-4fd7ae298b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.215 2 DEBUG oslo_concurrency.lockutils [req-bb097057-a10f-49ec-831b-a8d17392ddba req-d00181c3-db0c-4d1c-b64d-4fd7ae298b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.215 2 DEBUG oslo_concurrency.lockutils [req-bb097057-a10f-49ec-831b-a8d17392ddba req-d00181c3-db0c-4d1c-b64d-4fd7ae298b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.215 2 DEBUG nova.compute.manager [req-bb097057-a10f-49ec-831b-a8d17392ddba req-d00181c3-db0c-4d1c-b64d-4fd7ae298b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] No waiting events found dispatching network-vif-plugged-bace8310-2635-4b16-b54b-7b961bf7c42a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.216 2 WARNING nova.compute.manager [req-bb097057-a10f-49ec-831b-a8d17392ddba req-d00181c3-db0c-4d1c-b64d-4fd7ae298b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Received unexpected event network-vif-plugged-bace8310-2635-4b16-b54b-7b961bf7c42a for instance with vm_state active and task_state None.#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:34:48 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1981752576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.564 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.568 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.585 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.628 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.629 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:34:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:48.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:34:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:48.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.930 2 DEBUG oslo_concurrency.lockutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.930 2 DEBUG oslo_concurrency.lockutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.931 2 DEBUG oslo_concurrency.lockutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.931 2 DEBUG oslo_concurrency.lockutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.932 2 DEBUG oslo_concurrency.lockutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.934 2 INFO nova.compute.manager [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Terminating instance#033[00m
Oct  2 08:34:48 np0005466030 nova_compute[230518]: 2025-10-02 12:34:48.936 2 DEBUG nova.compute.manager [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.087 2 DEBUG nova.compute.manager [req-4800f1c2-1aa1-4db3-b27b-430c7bb0697b req-686680dc-e62d-46a7-9283-c08ed2da522c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.088 2 DEBUG oslo_concurrency.lockutils [req-4800f1c2-1aa1-4db3-b27b-430c7bb0697b req-686680dc-e62d-46a7-9283-c08ed2da522c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.089 2 DEBUG oslo_concurrency.lockutils [req-4800f1c2-1aa1-4db3-b27b-430c7bb0697b req-686680dc-e62d-46a7-9283-c08ed2da522c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.089 2 DEBUG oslo_concurrency.lockutils [req-4800f1c2-1aa1-4db3-b27b-430c7bb0697b req-686680dc-e62d-46a7-9283-c08ed2da522c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.090 2 DEBUG nova.compute.manager [req-4800f1c2-1aa1-4db3-b27b-430c7bb0697b req-686680dc-e62d-46a7-9283-c08ed2da522c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] No waiting events found dispatching network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.090 2 WARNING nova.compute.manager [req-4800f1c2-1aa1-4db3-b27b-430c7bb0697b req-686680dc-e62d-46a7-9283-c08ed2da522c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received unexpected event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:34:49 np0005466030 kernel: tapbace8310-26 (unregistering): left promiscuous mode
Oct  2 08:34:49 np0005466030 NetworkManager[44960]: <info>  [1759408489.1349] device (tapbace8310-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:49 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:49Z|00379|binding|INFO|Releasing lport bace8310-2635-4b16-b54b-7b961bf7c42a from this chassis (sb_readonly=0)
Oct  2 08:34:49 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:49Z|00380|binding|INFO|Setting lport bace8310-2635-4b16-b54b-7b961bf7c42a down in Southbound
Oct  2 08:34:49 np0005466030 ovn_controller[129257]: 2025-10-02T12:34:49Z|00381|binding|INFO|Removing iface tapbace8310-26 ovn-installed in OVS
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:49.219 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:e5:bf 10.100.0.6'], port_security=['fa:16:3e:9d:e5:bf 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '51a18f7c-ed1b-4500-9d74-fb924f62b6d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed58e2bfccb04353b29ae652cfed3546', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8afb1137-daba-41cb-976b-5cc3e880408c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1c0c736-25fb-4965-98a7-04a85ae45126, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bace8310-2635-4b16-b54b-7b961bf7c42a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:34:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:49.222 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bace8310-2635-4b16-b54b-7b961bf7c42a in datapath 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 unbound from our chassis#033[00m
Oct  2 08:34:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:49.225 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:34:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:49.227 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b035525b-dd78-4cca-866a-8d457ccb117b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:49.227 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 namespace which is not needed anymore#033[00m
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:49 np0005466030 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000059.scope: Deactivated successfully.
Oct  2 08:34:49 np0005466030 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000059.scope: Consumed 4.231s CPU time.
Oct  2 08:34:49 np0005466030 systemd-machined[188247]: Machine qemu-44-instance-00000059 terminated.
Oct  2 08:34:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.372 2 INFO nova.virt.libvirt.driver [-] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Instance destroyed successfully.#033[00m
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.373 2 DEBUG nova.objects.instance [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lazy-loading 'resources' on Instance uuid 51a18f7c-ed1b-4500-9d74-fb924f62b6d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.393 2 DEBUG nova.virt.libvirt.vif [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:34:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1446553806',display_name='tempest-tempest.common.compute-instance-1446553806-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1446553806-2',id=89,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-02T12:34:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ed58e2bfccb04353b29ae652cfed3546',ramdisk_id='',reservation_id='r-onpz3ikv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1074010337',owner_user_name='tempest-MultipleCreateTestJSON-1074010337-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:34:46Z,user_data=None,user_id='34a9da53e0cc446593d0cea2f498c53e',uuid=51a18f7c-ed1b-4500-9d74-fb924f62b6d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.394 2 DEBUG nova.network.os_vif_util [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converting VIF {"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.395 2 DEBUG nova.network.os_vif_util [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e5:bf,bridge_name='br-int',has_traffic_filtering=True,id=bace8310-2635-4b16-b54b-7b961bf7c42a,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbace8310-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.395 2 DEBUG os_vif [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e5:bf,bridge_name='br-int',has_traffic_filtering=True,id=bace8310-2635-4b16-b54b-7b961bf7c42a,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbace8310-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.397 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbace8310-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:34:49 np0005466030 nova_compute[230518]: 2025-10-02 12:34:49.402 2 INFO os_vif [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e5:bf,bridge_name='br-int',has_traffic_filtering=True,id=bace8310-2635-4b16-b54b-7b961bf7c42a,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbace8310-26')#033[00m
Oct  2 08:34:49 np0005466030 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[265195]: [NOTICE]   (265216) : haproxy version is 2.8.14-c23fe91
Oct  2 08:34:49 np0005466030 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[265195]: [NOTICE]   (265216) : path to executable is /usr/sbin/haproxy
Oct  2 08:34:49 np0005466030 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[265195]: [WARNING]  (265216) : Exiting Master process...
Oct  2 08:34:49 np0005466030 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[265195]: [ALERT]    (265216) : Current worker (265221) exited with code 143 (Terminated)
Oct  2 08:34:49 np0005466030 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[265195]: [WARNING]  (265216) : All workers exited. Exiting... (0)
Oct  2 08:34:49 np0005466030 systemd[1]: libpod-534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6.scope: Deactivated successfully.
Oct  2 08:34:49 np0005466030 podman[265323]: 2025-10-02 12:34:49.446250884 +0000 UTC m=+0.094588247 container died 534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:34:49 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6-userdata-shm.mount: Deactivated successfully.
Oct  2 08:34:49 np0005466030 systemd[1]: var-lib-containers-storage-overlay-883c67a361fe0247ad395efa4ec52619f249067861dcbb2d49f5ac611aaf3b3d-merged.mount: Deactivated successfully.
Oct  2 08:34:49 np0005466030 podman[265323]: 2025-10-02 12:34:49.829258015 +0000 UTC m=+0.477595388 container cleanup 534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:34:49 np0005466030 systemd[1]: libpod-conmon-534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6.scope: Deactivated successfully.
Oct  2 08:34:50 np0005466030 podman[265383]: 2025-10-02 12:34:50.048554645 +0000 UTC m=+0.195951397 container remove 534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:34:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:50.054 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[23bf6b5a-bc0f-4832-b40a-f460b3efe918]: (4, ('Thu Oct  2 12:34:49 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 (534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6)\n534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6\nThu Oct  2 12:34:49 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 (534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6)\n534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:50.056 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[04c77fad-282e-49e7-ace1-4210f47580f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:50.058 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap885ece2c-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:34:50 np0005466030 nova_compute[230518]: 2025-10-02 12:34:50.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:50 np0005466030 kernel: tap885ece2c-b0: left promiscuous mode
Oct  2 08:34:50 np0005466030 nova_compute[230518]: 2025-10-02 12:34:50.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:34:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:50.085 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e5d8c6ff-9beb-4a62-a4f2-20e54305a720]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:50.115 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2743a5c8-7cbd-4d24-a5ba-e328729be266]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:50.116 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2315ebd1-1814-4694-847c-e92113470ba5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:50.131 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a3baa375-4a93-4d51-a350-54b7ee3d674c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634834, 'reachable_time': 31859, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265398, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:50 np0005466030 systemd[1]: run-netns-ovnmeta\x2d885ece2c\x2db1ca\x2d4d5a\x2d9ddf\x2d20d1baf155c7.mount: Deactivated successfully.
Oct  2 08:34:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:50.133 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:34:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:34:50.133 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[0c6325e9-4fe3-4c3a-a947-85946185905f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:34:50 np0005466030 nova_compute[230518]: 2025-10-02 12:34:50.287 2 DEBUG nova.compute.manager [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Received event network-vif-unplugged-bace8310-2635-4b16-b54b-7b961bf7c42a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:50 np0005466030 nova_compute[230518]: 2025-10-02 12:34:50.288 2 DEBUG oslo_concurrency.lockutils [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:50 np0005466030 nova_compute[230518]: 2025-10-02 12:34:50.288 2 DEBUG oslo_concurrency.lockutils [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:50 np0005466030 nova_compute[230518]: 2025-10-02 12:34:50.288 2 DEBUG oslo_concurrency.lockutils [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:50 np0005466030 nova_compute[230518]: 2025-10-02 12:34:50.288 2 DEBUG nova.compute.manager [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] No waiting events found dispatching network-vif-unplugged-bace8310-2635-4b16-b54b-7b961bf7c42a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:50 np0005466030 nova_compute[230518]: 2025-10-02 12:34:50.289 2 DEBUG nova.compute.manager [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Received event network-vif-unplugged-bace8310-2635-4b16-b54b-7b961bf7c42a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:34:50 np0005466030 nova_compute[230518]: 2025-10-02 12:34:50.289 2 DEBUG nova.compute.manager [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Received event network-vif-plugged-bace8310-2635-4b16-b54b-7b961bf7c42a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:34:50 np0005466030 nova_compute[230518]: 2025-10-02 12:34:50.289 2 DEBUG oslo_concurrency.lockutils [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:34:50 np0005466030 nova_compute[230518]: 2025-10-02 12:34:50.290 2 DEBUG oslo_concurrency.lockutils [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:34:50 np0005466030 nova_compute[230518]: 2025-10-02 12:34:50.290 2 DEBUG oslo_concurrency.lockutils [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:34:50 np0005466030 nova_compute[230518]: 2025-10-02 12:34:50.290 2 DEBUG nova.compute.manager [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] No waiting events found dispatching network-vif-plugged-bace8310-2635-4b16-b54b-7b961bf7c42a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:34:50 np0005466030 nova_compute[230518]: 2025-10-02 12:34:50.290 2 WARNING nova.compute.manager [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Received unexpected event network-vif-plugged-bace8310-2635-4b16-b54b-7b961bf7c42a for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:34:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:50.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:50.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:51 np0005466030 podman[265401]: 2025-10-02 12:34:51.814459967 +0000 UTC m=+0.061930719 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:34:51 np0005466030 podman[265400]: 2025-10-02 12:34:51.878202923 +0000 UTC m=+0.121954148 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:34:52 np0005466030 nova_compute[230518]: 2025-10-02 12:34:52.624 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:34:52 np0005466030 nova_compute[230518]: 2025-10-02 12:34:52.625 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:34:52 np0005466030 nova_compute[230518]: 2025-10-02 12:34:52.626 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:34:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:52.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:52.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:52 np0005466030 nova_compute[230518]: 2025-10-02 12:34:52.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:53 np0005466030 nova_compute[230518]: 2025-10-02 12:34:53.754 2 INFO nova.virt.libvirt.driver [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Deleting instance files /var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9_del
Oct  2 08:34:53 np0005466030 nova_compute[230518]: 2025-10-02 12:34:53.755 2 INFO nova.virt.libvirt.driver [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Deletion of /var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9_del complete
Oct  2 08:34:53 np0005466030 nova_compute[230518]: 2025-10-02 12:34:53.888 2 INFO nova.compute.manager [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Took 4.95 seconds to destroy the instance on the hypervisor.
Oct  2 08:34:53 np0005466030 nova_compute[230518]: 2025-10-02 12:34:53.889 2 DEBUG oslo.service.loopingcall [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:34:53 np0005466030 nova_compute[230518]: 2025-10-02 12:34:53.890 2 DEBUG nova.compute.manager [-] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:34:53 np0005466030 nova_compute[230518]: 2025-10-02 12:34:53.891 2 DEBUG nova.network.neutron [-] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.109 2 DEBUG nova.compute.manager [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.248 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.249 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:34:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.364 2 DEBUG nova.objects.instance [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_requests' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.382 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.382 2 INFO nova.compute.claims [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.383 2 DEBUG nova.objects.instance [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.450 2 DEBUG nova.objects.instance [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.644 2 INFO nova.compute.resource_tracker [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating resource usage from migration cc0c0504-9cb4-4e3c-94ee-f1413511b3ed
Oct  2 08:34:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:54.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.718 2 DEBUG nova.network.neutron [-] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.726 2 DEBUG oslo_concurrency.processutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:34:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:54.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.852 2 DEBUG nova.compute.manager [req-c8ab32fc-7a49-4bee-83eb-35fd8a5468a0 req-1cdce109-754a-40ab-b254-4537a6c56dc1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Received event network-vif-deleted-bace8310-2635-4b16-b54b-7b961bf7c42a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.853 2 INFO nova.compute.manager [req-c8ab32fc-7a49-4bee-83eb-35fd8a5468a0 req-1cdce109-754a-40ab-b254-4537a6c56dc1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Neutron deleted interface bace8310-2635-4b16-b54b-7b961bf7c42a; detaching it from the instance and deleting it from the info cache
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.854 2 DEBUG nova.network.neutron [req-c8ab32fc-7a49-4bee-83eb-35fd8a5468a0 req-1cdce109-754a-40ab-b254-4537a6c56dc1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.858 2 INFO nova.compute.manager [-] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Took 0.97 seconds to deallocate network for instance.
Oct  2 08:34:54 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.956 2 DEBUG nova.compute.manager [req-c8ab32fc-7a49-4bee-83eb-35fd8a5468a0 req-1cdce109-754a-40ab-b254-4537a6c56dc1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Detach interface failed, port_id=bace8310-2635-4b16-b54b-7b961bf7c42a, reason: Instance 51a18f7c-ed1b-4500-9d74-fb924f62b6d9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct  2 08:34:55 np0005466030 nova_compute[230518]: 2025-10-02 12:34:54.999 2 DEBUG oslo_concurrency.lockutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:34:55 np0005466030 nova_compute[230518]: 2025-10-02 12:34:55.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:34:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:34:55 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1320649666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:34:55 np0005466030 nova_compute[230518]: 2025-10-02 12:34:55.265 2 DEBUG oslo_concurrency.processutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:34:55 np0005466030 nova_compute[230518]: 2025-10-02 12:34:55.274 2 DEBUG nova.compute.provider_tree [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:34:55 np0005466030 nova_compute[230518]: 2025-10-02 12:34:55.448 2 DEBUG nova.scheduler.client.report [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:34:55 np0005466030 nova_compute[230518]: 2025-10-02 12:34:55.470 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:34:55 np0005466030 nova_compute[230518]: 2025-10-02 12:34:55.471 2 INFO nova.compute.manager [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Migrating
Oct  2 08:34:55 np0005466030 nova_compute[230518]: 2025-10-02 12:34:55.477 2 DEBUG oslo_concurrency.lockutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:34:55 np0005466030 nova_compute[230518]: 2025-10-02 12:34:55.518 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:34:55 np0005466030 nova_compute[230518]: 2025-10-02 12:34:55.519 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:34:55 np0005466030 nova_compute[230518]: 2025-10-02 12:34:55.519 2 DEBUG nova.network.neutron [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:34:55 np0005466030 nova_compute[230518]: 2025-10-02 12:34:55.578 2 DEBUG oslo_concurrency.processutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:34:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:34:56 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2693978985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:34:56 np0005466030 nova_compute[230518]: 2025-10-02 12:34:56.057 2 DEBUG oslo_concurrency.processutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:34:56 np0005466030 nova_compute[230518]: 2025-10-02 12:34:56.065 2 DEBUG nova.compute.provider_tree [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:34:56 np0005466030 nova_compute[230518]: 2025-10-02 12:34:56.084 2 DEBUG nova.scheduler.client.report [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:34:56 np0005466030 nova_compute[230518]: 2025-10-02 12:34:56.110 2 DEBUG oslo_concurrency.lockutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:34:56 np0005466030 nova_compute[230518]: 2025-10-02 12:34:56.146 2 INFO nova.scheduler.client.report [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Deleted allocations for instance 51a18f7c-ed1b-4500-9d74-fb924f62b6d9
Oct  2 08:34:56 np0005466030 nova_compute[230518]: 2025-10-02 12:34:56.217 2 DEBUG oslo_concurrency.lockutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:34:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:56.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:34:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:56.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:34:56 np0005466030 nova_compute[230518]: 2025-10-02 12:34:56.839 2 DEBUG nova.network.neutron [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:34:56 np0005466030 nova_compute[230518]: 2025-10-02 12:34:56.857 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:34:56 np0005466030 nova_compute[230518]: 2025-10-02 12:34:56.993 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct  2 08:34:57 np0005466030 nova_compute[230518]: 2025-10-02 12:34:57.000 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct  2 08:34:57 np0005466030 nova_compute[230518]: 2025-10-02 12:34:57.949 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408482.9472244, 418e3157-f0a7-42ec-812b-2a4a2ad00991 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:34:57 np0005466030 nova_compute[230518]: 2025-10-02 12:34:57.949 2 INFO nova.compute.manager [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] VM Stopped (Lifecycle Event)
Oct  2 08:34:57 np0005466030 nova_compute[230518]: 2025-10-02 12:34:57.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:34:57 np0005466030 nova_compute[230518]: 2025-10-02 12:34:57.971 2 DEBUG nova.compute.manager [None req-074743f8-fd30-4ac1-a1e3-1d917cc03dc0 - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:34:58 np0005466030 nova_compute[230518]: 2025-10-02 12:34:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:34:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:34:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:58.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:34:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:34:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:34:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:58.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:34:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:34:59 np0005466030 nova_compute[230518]: 2025-10-02 12:34:59.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.020 2 INFO nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance shutdown successfully after 3 seconds.
Oct  2 08:35:00 np0005466030 kernel: tapa3bd0009-d2 (unregistering): left promiscuous mode
Oct  2 08:35:00 np0005466030 NetworkManager[44960]: <info>  [1759408500.1983] device (tapa3bd0009-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:35:00 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:00Z|00382|binding|INFO|Releasing lport a3bd0009-d256-4937-bdad-606abfd076e0 from this chassis (sb_readonly=0)
Oct  2 08:35:00 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:00Z|00383|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 down in Southbound
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:35:00 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:00Z|00384|binding|INFO|Removing iface tapa3bd0009-d2 ovn-installed in OVS
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:35:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.214 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '10', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:35:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.215 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis
Oct  2 08:35:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.217 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f011efa4-0132-405c-bb45-09d0a9352eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  2 08:35:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.218 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1b08fa33-0855-4860-8372-55f99d23c3b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:35:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.218 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace which is not needed anymore
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:35:00 np0005466030 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Oct  2 08:35:00 np0005466030 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d0000004d.scope: Consumed 18.005s CPU time.
Oct  2 08:35:00 np0005466030 systemd-machined[188247]: Machine qemu-41-instance-0000004d terminated.
Oct  2 08:35:00 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[263499]: [NOTICE]   (263503) : haproxy version is 2.8.14-c23fe91
Oct  2 08:35:00 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[263499]: [NOTICE]   (263503) : path to executable is /usr/sbin/haproxy
Oct  2 08:35:00 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[263499]: [WARNING]  (263503) : Exiting Master process...
Oct  2 08:35:00 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[263499]: [ALERT]    (263503) : Current worker (263505) exited with code 143 (Terminated)
Oct  2 08:35:00 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[263499]: [WARNING]  (263503) : All workers exited. Exiting... (0)
Oct  2 08:35:00 np0005466030 systemd[1]: libpod-9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515.scope: Deactivated successfully.
Oct  2 08:35:00 np0005466030 podman[265515]: 2025-10-02 12:35:00.390631305 +0000 UTC m=+0.083483218 container died 9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.453 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance destroyed successfully.#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.454 2 DEBUG nova.virt.libvirt.vif [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:34:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:7b:e8:97"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.455 2 DEBUG nova.network.os_vif_util [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:7b:e8:97"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.455 2 DEBUG nova.network.os_vif_util [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.455 2 DEBUG os_vif [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.457 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3bd0009-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.463 2 INFO os_vif [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.466 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.466 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:35:00 np0005466030 systemd[1]: var-lib-containers-storage-overlay-ed56eda4117edd4a73c69ba3e2c0e2d327a36dd0522d2b795d433383050ab310-merged.mount: Deactivated successfully.
Oct  2 08:35:00 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515-userdata-shm.mount: Deactivated successfully.
Oct  2 08:35:00 np0005466030 podman[265515]: 2025-10-02 12:35:00.645109462 +0000 UTC m=+0.337961375 container cleanup 9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.653 2 DEBUG nova.network.neutron [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Port a3bd0009-d256-4937-bdad-606abfd076e0 binding to destination host compute-1.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171#033[00m
Oct  2 08:35:00 np0005466030 podman[265554]: 2025-10-02 12:35:00.711535912 +0000 UTC m=+0.044401468 container remove 9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:35:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.717 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[621cb94f-6149-4e20-a3e6-8f513f5e3709]: (4, ('Thu Oct  2 12:35:00 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515)\n9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515\nThu Oct  2 12:35:00 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515)\n9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.719 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0c9c0d-ee5d-4a0d-a1a0-08713d2eba3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.721 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:00 np0005466030 kernel: tapf011efa4-00: left promiscuous mode
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:00.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.731 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[df910375-45fa-42f7-9a90-bed845a63ad5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:00 np0005466030 systemd[1]: libpod-conmon-9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515.scope: Deactivated successfully.
Oct  2 08:35:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.752 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e94f5cd5-e3b4-43b1-a1fe-90276c855d93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:00.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.753 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3c1d8ca8-35b3-4bd4-8279-dfd1a0ff17ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.769 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d96460fe-a496-48c8-b33f-c7c6ab1045a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626314, 'reachable_time': 29344, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265569, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:00 np0005466030 systemd[1]: run-netns-ovnmeta\x2df011efa4\x2d0132\x2d405c\x2dbb45\x2d09d0a9352eff.mount: Deactivated successfully.
Oct  2 08:35:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.772 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:35:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.772 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a34c99-1cbf-4ad7-8b0e-4ae0253ca4f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.844 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.845 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.845 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.911 2 DEBUG nova.compute.manager [req-6fd3735c-9043-434e-9e43-76a66c4aff50 req-c69b6a69-b9ee-4ade-9d34-152da8dc5fd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.911 2 DEBUG oslo_concurrency.lockutils [req-6fd3735c-9043-434e-9e43-76a66c4aff50 req-c69b6a69-b9ee-4ade-9d34-152da8dc5fd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.911 2 DEBUG oslo_concurrency.lockutils [req-6fd3735c-9043-434e-9e43-76a66c4aff50 req-c69b6a69-b9ee-4ade-9d34-152da8dc5fd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.912 2 DEBUG oslo_concurrency.lockutils [req-6fd3735c-9043-434e-9e43-76a66c4aff50 req-c69b6a69-b9ee-4ade-9d34-152da8dc5fd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.912 2 DEBUG nova.compute.manager [req-6fd3735c-9043-434e-9e43-76a66c4aff50 req-c69b6a69-b9ee-4ade-9d34-152da8dc5fd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:35:00 np0005466030 nova_compute[230518]: 2025-10-02 12:35:00.912 2 WARNING nova.compute.manager [req-6fd3735c-9043-434e-9e43-76a66c4aff50 req-c69b6a69-b9ee-4ade-9d34-152da8dc5fd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:35:01 np0005466030 nova_compute[230518]: 2025-10-02 12:35:01.490 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:35:01 np0005466030 nova_compute[230518]: 2025-10-02 12:35:01.490 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:35:01 np0005466030 nova_compute[230518]: 2025-10-02 12:35:01.491 2 DEBUG nova.network.neutron [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:35:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:02.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:02.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:02 np0005466030 nova_compute[230518]: 2025-10-02 12:35:02.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:03 np0005466030 nova_compute[230518]: 2025-10-02 12:35:03.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:03 np0005466030 nova_compute[230518]: 2025-10-02 12:35:03.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:35:03 np0005466030 nova_compute[230518]: 2025-10-02 12:35:03.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:35:03 np0005466030 nova_compute[230518]: 2025-10-02 12:35:03.084 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:35:03 np0005466030 nova_compute[230518]: 2025-10-02 12:35:03.091 2 DEBUG nova.compute.manager [req-3b226728-c636-490a-a068-763916f696b3 req-b51416fc-2a0a-41ef-a0db-08c469f50fda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:03 np0005466030 nova_compute[230518]: 2025-10-02 12:35:03.092 2 DEBUG oslo_concurrency.lockutils [req-3b226728-c636-490a-a068-763916f696b3 req-b51416fc-2a0a-41ef-a0db-08c469f50fda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:03 np0005466030 nova_compute[230518]: 2025-10-02 12:35:03.092 2 DEBUG oslo_concurrency.lockutils [req-3b226728-c636-490a-a068-763916f696b3 req-b51416fc-2a0a-41ef-a0db-08c469f50fda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:03 np0005466030 nova_compute[230518]: 2025-10-02 12:35:03.092 2 DEBUG oslo_concurrency.lockutils [req-3b226728-c636-490a-a068-763916f696b3 req-b51416fc-2a0a-41ef-a0db-08c469f50fda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:03 np0005466030 nova_compute[230518]: 2025-10-02 12:35:03.093 2 DEBUG nova.compute.manager [req-3b226728-c636-490a-a068-763916f696b3 req-b51416fc-2a0a-41ef-a0db-08c469f50fda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:35:03 np0005466030 nova_compute[230518]: 2025-10-02 12:35:03.094 2 WARNING nova.compute.manager [req-3b226728-c636-490a-a068-763916f696b3 req-b51416fc-2a0a-41ef-a0db-08c469f50fda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:35:03 np0005466030 podman[265571]: 2025-10-02 12:35:03.837887617 +0000 UTC m=+0.086622796 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:35:03 np0005466030 podman[265570]: 2025-10-02 12:35:03.846905731 +0000 UTC m=+0.096057552 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:35:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:04 np0005466030 nova_compute[230518]: 2025-10-02 12:35:04.370 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408489.3698776, 51a18f7c-ed1b-4500-9d74-fb924f62b6d9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:35:04 np0005466030 nova_compute[230518]: 2025-10-02 12:35:04.371 2 INFO nova.compute.manager [-] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:35:04 np0005466030 nova_compute[230518]: 2025-10-02 12:35:04.469 2 DEBUG nova.compute.manager [None req-4ce3332c-2917-4f21-b7f9-6ee8b50e5f49 - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:04 np0005466030 nova_compute[230518]: 2025-10-02 12:35:04.593 2 DEBUG nova.network.neutron [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:35:04 np0005466030 nova_compute[230518]: 2025-10-02 12:35:04.617 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:35:04 np0005466030 nova_compute[230518]: 2025-10-02 12:35:04.625 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:35:04 np0005466030 nova_compute[230518]: 2025-10-02 12:35:04.626 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:35:04 np0005466030 nova_compute[230518]: 2025-10-02 12:35:04.626 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:35:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:04.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:04 np0005466030 nova_compute[230518]: 2025-10-02 12:35:04.741 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Oct  2 08:35:04 np0005466030 nova_compute[230518]: 2025-10-02 12:35:04.743 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:35:04 np0005466030 nova_compute[230518]: 2025-10-02 12:35:04.744 2 INFO nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Creating image(s)#033[00m
Oct  2 08:35:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:35:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:04.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:35:04 np0005466030 nova_compute[230518]: 2025-10-02 12:35:04.853 2 DEBUG nova.storage.rbd_utils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] creating snapshot(nova-resize) on rbd image(3e490470-5e33-4140-95c1-367805364c73_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:35:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:05.298 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:35:05 np0005466030 nova_compute[230518]: 2025-10-02 12:35:05.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:05.300 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:35:05 np0005466030 nova_compute[230518]: 2025-10-02 12:35:05.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:35:06 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1426181501' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:35:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e246 e246: 3 total, 3 up, 3 in
Oct  2 08:35:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:06.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:06.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:06 np0005466030 nova_compute[230518]: 2025-10-02 12:35:06.826 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:35:06 np0005466030 nova_compute[230518]: 2025-10-02 12:35:06.869 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:35:06 np0005466030 nova_compute[230518]: 2025-10-02 12:35:06.869 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:35:07 np0005466030 nova_compute[230518]: 2025-10-02 12:35:07.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.196 2 DEBUG nova.objects.instance [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.370 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.371 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Ensure instance console log exists: /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.371 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.372 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.372 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.374 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Start _get_guest_xml network_info=[{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:7b:e8:97"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.379 2 WARNING nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.384 2 DEBUG nova.virt.libvirt.host [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.384 2 DEBUG nova.virt.libvirt.host [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.387 2 DEBUG nova.virt.libvirt.host [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.388 2 DEBUG nova.virt.libvirt.host [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.389 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.389 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='475e3257-fad6-494a-9174-56c6af5e0ac9',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.389 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.389 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.390 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.390 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.390 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.390 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.390 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.391 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.391 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.391 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.391 2 DEBUG nova.objects.instance [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:35:08 np0005466030 nova_compute[230518]: 2025-10-02 12:35:08.445 2 DEBUG oslo_concurrency.processutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:08.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:35:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:08.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:35:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:35:09 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3868317450' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.216 2 DEBUG oslo_concurrency.processutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.771s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.275 2 DEBUG oslo_concurrency.processutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:35:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:35:09 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4223741338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.722 2 DEBUG oslo_concurrency.processutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.724 2 DEBUG nova.virt.libvirt.vif [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:7b:e8:97"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.725 2 DEBUG nova.network.os_vif_util [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:7b:e8:97"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.726 2 DEBUG nova.network.os_vif_util [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.730 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:35:09 np0005466030 nova_compute[230518]:  <uuid>3e490470-5e33-4140-95c1-367805364c73</uuid>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:  <name>instance-0000004d</name>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:  <memory>196608</memory>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerActionsTestJSON-server-1270476772</nova:name>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:35:08</nova:creationTime>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.micro">
Oct  2 08:35:09 np0005466030 nova_compute[230518]:        <nova:memory>192</nova:memory>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:        <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:        <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:        <nova:port uuid="a3bd0009-d256-4937-bdad-606abfd076e0">
Oct  2 08:35:09 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <entry name="serial">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <entry name="uuid">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk">
Oct  2 08:35:09 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:35:09 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk.config">
Oct  2 08:35:09 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:35:09 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:7b:e8:97"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <target dev="tapa3bd0009-d2"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/console.log" append="off"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:35:09 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:35:09 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:35:09 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:35:09 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.731 2 DEBUG nova.virt.libvirt.vif [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:7b:e8:97"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.731 2 DEBUG nova.network.os_vif_util [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:7b:e8:97"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.732 2 DEBUG nova.network.os_vif_util [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.732 2 DEBUG os_vif [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.736 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3bd0009-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.737 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3bd0009-d2, col_values=(('external_ids', {'iface-id': 'a3bd0009-d256-4937-bdad-606abfd076e0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:e8:97', 'vm-uuid': '3e490470-5e33-4140-95c1-367805364c73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:35:09 np0005466030 NetworkManager[44960]: <info>  [1759408509.7396] manager: (tapa3bd0009-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.745 2 INFO os_vif [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.872 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.872 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.872 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No VIF found with MAC fa:16:3e:7b:e8:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.873 2 INFO nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Using config drive
Oct  2 08:35:09 np0005466030 kernel: tapa3bd0009-d2: entered promiscuous mode
Oct  2 08:35:09 np0005466030 NetworkManager[44960]: <info>  [1759408509.9827] manager: (tapa3bd0009-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/179)
Oct  2 08:35:09 np0005466030 nova_compute[230518]: 2025-10-02 12:35:09.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:09 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:09Z|00385|binding|INFO|Claiming lport a3bd0009-d256-4937-bdad-606abfd076e0 for this chassis.
Oct  2 08:35:09 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:09Z|00386|binding|INFO|a3bd0009-d256-4937-bdad-606abfd076e0: Claiming fa:16:3e:7b:e8:97 10.100.0.6
Oct  2 08:35:10 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:10Z|00387|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 ovn-installed in OVS
Oct  2 08:35:10 np0005466030 systemd-udevd[265777]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:35:10 np0005466030 nova_compute[230518]: 2025-10-02 12:35:10.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:10 np0005466030 nova_compute[230518]: 2025-10-02 12:35:10.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:10 np0005466030 systemd-machined[188247]: New machine qemu-45-instance-0000004d.
Oct  2 08:35:10 np0005466030 NetworkManager[44960]: <info>  [1759408510.0401] device (tapa3bd0009-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:35:10 np0005466030 NetworkManager[44960]: <info>  [1759408510.0409] device (tapa3bd0009-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:35:10 np0005466030 systemd[1]: Started Virtual Machine qemu-45-instance-0000004d.
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.044 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '11', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.045 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff bound to our chassis#033[00m
Oct  2 08:35:10 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:10Z|00388|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 up in Southbound
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.047 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.064 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4371a471-4b52-401e-841c-dda9a4bbeaf5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.065 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf011efa4-01 in ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.068 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf011efa4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.068 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[74d879e1-9153-448b-8bc5-5bdd40ed4ce7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.070 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e13bfb93-3abc-4b18-a4db-f4c837d4db8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.082 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[6494f3bd-013b-4617-b579-fbd7aef12f79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.110 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[72414d36-cc54-45df-bf9c-3b6d7a4e859c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.141 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a453441f-9266-4381-86b4-b41848844e68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:10 np0005466030 NetworkManager[44960]: <info>  [1759408510.1516] manager: (tapf011efa4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/180)
Oct  2 08:35:10 np0005466030 systemd-udevd[265780]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.154 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c33861-871f-40e2-8487-026f71ddb550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.191 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6346a180-b3d8-4119-8814-e1bfea782477]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.194 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d242825c-a630-44dd-8657-92f4d3c34c68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:10 np0005466030 NetworkManager[44960]: <info>  [1759408510.2167] device (tapf011efa4-00): carrier: link connected
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.221 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1ad628-34d0-48df-bb1c-148ff5f62b2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.242 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[77e0bc4e-cd1f-4070-8142-3aad6472a0cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637377, 'reachable_time': 40349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265811, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.263 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e34fe8e4-a812-42e5-9ca2-ef69ad7eadfa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:1a7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637377, 'tstamp': 637377}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265812, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.290 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3b896ea7-2ad1-41d3-a701-8258bfbcec99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637377, 'reachable_time': 40349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265813, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.332 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[da5e4823-c292-46ca-bbec-af3e8136500d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.427 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cf3832b9-7fff-4e59-83d5-3ef8c779ab3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.430 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.430 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.431 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:10 np0005466030 NetworkManager[44960]: <info>  [1759408510.4800] manager: (tapf011efa4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Oct  2 08:35:10 np0005466030 kernel: tapf011efa4-00: entered promiscuous mode
Oct  2 08:35:10 np0005466030 nova_compute[230518]: 2025-10-02 12:35:10.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.496 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:10 np0005466030 nova_compute[230518]: 2025-10-02 12:35:10.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:10 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:10Z|00389|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct  2 08:35:10 np0005466030 nova_compute[230518]: 2025-10-02 12:35:10.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.530 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.531 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4917124b-4c73-41e2-abe1-de453650e2e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.532 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-f011efa4-0132-405c-bb45-09d0a9352eff
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID f011efa4-0132-405c-bb45-09d0a9352eff
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:35:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.533 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'env', 'PROCESS_TAG=haproxy-f011efa4-0132-405c-bb45-09d0a9352eff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f011efa4-0132-405c-bb45-09d0a9352eff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:35:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:10.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:10.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:10 np0005466030 nova_compute[230518]: 2025-10-02 12:35:10.957 2 DEBUG nova.compute.manager [req-6ac78bac-12a9-4470-a180-c7cf5eca0337 req-9a09fc61-7694-4a25-8255-f951b283d5e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:10 np0005466030 nova_compute[230518]: 2025-10-02 12:35:10.959 2 DEBUG oslo_concurrency.lockutils [req-6ac78bac-12a9-4470-a180-c7cf5eca0337 req-9a09fc61-7694-4a25-8255-f951b283d5e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:10 np0005466030 nova_compute[230518]: 2025-10-02 12:35:10.960 2 DEBUG oslo_concurrency.lockutils [req-6ac78bac-12a9-4470-a180-c7cf5eca0337 req-9a09fc61-7694-4a25-8255-f951b283d5e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:10 np0005466030 nova_compute[230518]: 2025-10-02 12:35:10.960 2 DEBUG oslo_concurrency.lockutils [req-6ac78bac-12a9-4470-a180-c7cf5eca0337 req-9a09fc61-7694-4a25-8255-f951b283d5e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:10 np0005466030 nova_compute[230518]: 2025-10-02 12:35:10.960 2 DEBUG nova.compute.manager [req-6ac78bac-12a9-4470-a180-c7cf5eca0337 req-9a09fc61-7694-4a25-8255-f951b283d5e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:35:10 np0005466030 nova_compute[230518]: 2025-10-02 12:35:10.961 2 WARNING nova.compute.manager [req-6ac78bac-12a9-4470-a180-c7cf5eca0337 req-9a09fc61-7694-4a25-8255-f951b283d5e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state resize_finish.#033[00m
Oct  2 08:35:11 np0005466030 podman[265844]: 2025-10-02 12:35:10.969582666 +0000 UTC m=+0.043978765 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:35:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:11.303 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:11 np0005466030 nova_compute[230518]: 2025-10-02 12:35:11.656 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 3e490470-5e33-4140-95c1-367805364c73 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:35:11 np0005466030 nova_compute[230518]: 2025-10-02 12:35:11.656 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408511.6556685, 3e490470-5e33-4140-95c1-367805364c73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:35:11 np0005466030 nova_compute[230518]: 2025-10-02 12:35:11.657 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:35:11 np0005466030 nova_compute[230518]: 2025-10-02 12:35:11.659 2 DEBUG nova.compute.manager [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:35:11 np0005466030 nova_compute[230518]: 2025-10-02 12:35:11.663 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance running successfully.#033[00m
Oct  2 08:35:11 np0005466030 virtqemud[230067]: argument unsupported: QEMU guest agent is not configured
Oct  2 08:35:11 np0005466030 nova_compute[230518]: 2025-10-02 12:35:11.665 2 DEBUG nova.virt.libvirt.guest [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 08:35:11 np0005466030 nova_compute[230518]: 2025-10-02 12:35:11.665 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Oct  2 08:35:11 np0005466030 nova_compute[230518]: 2025-10-02 12:35:11.790 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:11 np0005466030 nova_compute[230518]: 2025-10-02 12:35:11.795 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:35:11 np0005466030 nova_compute[230518]: 2025-10-02 12:35:11.856 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Oct  2 08:35:11 np0005466030 nova_compute[230518]: 2025-10-02 12:35:11.857 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408511.6597936, 3e490470-5e33-4140-95c1-367805364c73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:35:11 np0005466030 nova_compute[230518]: 2025-10-02 12:35:11.857 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Started (Lifecycle Event)#033[00m
Oct  2 08:35:12 np0005466030 nova_compute[230518]: 2025-10-02 12:35:12.096 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:12 np0005466030 nova_compute[230518]: 2025-10-02 12:35:12.100 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:35:12 np0005466030 podman[265844]: 2025-10-02 12:35:12.407628532 +0000 UTC m=+1.482024551 container create 895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 08:35:12 np0005466030 systemd[1]: Started libpod-conmon-895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d.scope.
Oct  2 08:35:12 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:35:12 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd4a7dfe49440492c7b1837cfdf74d652207fb9a54ea584d72d946905023818/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:35:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:12.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:12.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:12 np0005466030 podman[265844]: 2025-10-02 12:35:12.959016961 +0000 UTC m=+2.033413050 container init 895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:35:12 np0005466030 nova_compute[230518]: 2025-10-02 12:35:12.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:12 np0005466030 podman[265844]: 2025-10-02 12:35:12.969958775 +0000 UTC m=+2.044354814 container start 895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 08:35:12 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[265902]: [NOTICE]   (265906) : New worker (265908) forked
Oct  2 08:35:12 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[265902]: [NOTICE]   (265906) : Loading success.
Oct  2 08:35:13 np0005466030 nova_compute[230518]: 2025-10-02 12:35:13.288 2 DEBUG nova.compute.manager [req-3ca425ca-8da6-43fa-a38e-ef12233066da req-eb3bb704-1e3a-4df8-99f3-9b6fb926a595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:13 np0005466030 nova_compute[230518]: 2025-10-02 12:35:13.289 2 DEBUG oslo_concurrency.lockutils [req-3ca425ca-8da6-43fa-a38e-ef12233066da req-eb3bb704-1e3a-4df8-99f3-9b6fb926a595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:13 np0005466030 nova_compute[230518]: 2025-10-02 12:35:13.289 2 DEBUG oslo_concurrency.lockutils [req-3ca425ca-8da6-43fa-a38e-ef12233066da req-eb3bb704-1e3a-4df8-99f3-9b6fb926a595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:13 np0005466030 nova_compute[230518]: 2025-10-02 12:35:13.290 2 DEBUG oslo_concurrency.lockutils [req-3ca425ca-8da6-43fa-a38e-ef12233066da req-eb3bb704-1e3a-4df8-99f3-9b6fb926a595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:13 np0005466030 nova_compute[230518]: 2025-10-02 12:35:13.290 2 DEBUG nova.compute.manager [req-3ca425ca-8da6-43fa-a38e-ef12233066da req-eb3bb704-1e3a-4df8-99f3-9b6fb926a595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:35:13 np0005466030 nova_compute[230518]: 2025-10-02 12:35:13.291 2 WARNING nova.compute.manager [req-3ca425ca-8da6-43fa-a38e-ef12233066da req-eb3bb704-1e3a-4df8-99f3-9b6fb926a595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state resized and task_state None.#033[00m
Oct  2 08:35:13 np0005466030 nova_compute[230518]: 2025-10-02 12:35:13.395 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Acquiring lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:13 np0005466030 nova_compute[230518]: 2025-10-02 12:35:13.395 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:13 np0005466030 nova_compute[230518]: 2025-10-02 12:35:13.430 2 DEBUG nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:35:13 np0005466030 nova_compute[230518]: 2025-10-02 12:35:13.572 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:13 np0005466030 nova_compute[230518]: 2025-10-02 12:35:13.573 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:13 np0005466030 nova_compute[230518]: 2025-10-02 12:35:13.580 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:35:13 np0005466030 nova_compute[230518]: 2025-10-02 12:35:13.580 2 INFO nova.compute.claims [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:35:13 np0005466030 nova_compute[230518]: 2025-10-02 12:35:13.907 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:35:14 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2851248602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:35:14 np0005466030 nova_compute[230518]: 2025-10-02 12:35:14.345 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:14 np0005466030 nova_compute[230518]: 2025-10-02 12:35:14.352 2 DEBUG nova.compute.provider_tree [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:35:14 np0005466030 nova_compute[230518]: 2025-10-02 12:35:14.523 2 DEBUG oslo_concurrency.lockutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:14 np0005466030 nova_compute[230518]: 2025-10-02 12:35:14.524 2 DEBUG oslo_concurrency.lockutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:14 np0005466030 nova_compute[230518]: 2025-10-02 12:35:14.524 2 DEBUG nova.compute.manager [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Going to confirm migration 13 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Oct  2 08:35:14 np0005466030 nova_compute[230518]: 2025-10-02 12:35:14.541 2 DEBUG nova.scheduler.client.report [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:35:14 np0005466030 nova_compute[230518]: 2025-10-02 12:35:14.702 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:14 np0005466030 nova_compute[230518]: 2025-10-02 12:35:14.702 2 DEBUG nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:35:14 np0005466030 nova_compute[230518]: 2025-10-02 12:35:14.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:35:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:14.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:35:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:35:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:14.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:35:14 np0005466030 nova_compute[230518]: 2025-10-02 12:35:14.913 2 DEBUG nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:35:14 np0005466030 nova_compute[230518]: 2025-10-02 12:35:14.913 2 DEBUG nova.network.neutron [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.115 2 INFO nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.566 2 DEBUG nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.733 2 INFO nova.virt.block_device [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Booting with volume 9283cdcd-9233-4064-a250-5d9e278af430 at /dev/vda#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.886 2 DEBUG os_brick.utils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.887 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.900 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.901 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[b2273a70-19b7-45a3-8381-a089d73d9f27]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.902 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.912 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.913 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[d65bc848-8a63-4750-8acd-8d1c76e4af31]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.914 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.924 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.925 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[5f5a455e-b623-4b75-b1d5-ca2ad6bffb11]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.926 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[d664f54e-5c72-4b08-a2d7-19e417999e81]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.927 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.968 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] CMD "nvme version" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.971 2 DEBUG os_brick.initiator.connectors.lightos [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.971 2 DEBUG os_brick.initiator.connectors.lightos [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.972 2 DEBUG os_brick.initiator.connectors.lightos [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.972 2 DEBUG os_brick.utils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] <== get_connector_properties: return (85ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:35:15 np0005466030 nova_compute[230518]: 2025-10-02 12:35:15.973 2 DEBUG nova.virt.block_device [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Updating existing volume attachment record: 748e3c4e-64ff-474b-9b2d-f528e6710211 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:35:16 np0005466030 nova_compute[230518]: 2025-10-02 12:35:16.674 2 DEBUG oslo_concurrency.lockutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:35:16 np0005466030 nova_compute[230518]: 2025-10-02 12:35:16.675 2 DEBUG oslo_concurrency.lockutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:35:16 np0005466030 nova_compute[230518]: 2025-10-02 12:35:16.675 2 DEBUG nova.network.neutron [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:35:16 np0005466030 nova_compute[230518]: 2025-10-02 12:35:16.676 2 DEBUG nova.objects.instance [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'info_cache' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:35:16 np0005466030 nova_compute[230518]: 2025-10-02 12:35:16.703 2 DEBUG nova.policy [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '32f902c540fc464cb232c0a6942a5d22', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5c7832aaed82459e908e73712013728c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:35:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:16.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:35:16 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/265947530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:35:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:16.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:17 np0005466030 nova_compute[230518]: 2025-10-02 12:35:17.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:18 np0005466030 nova_compute[230518]: 2025-10-02 12:35:18.094 2 DEBUG nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:35:18 np0005466030 nova_compute[230518]: 2025-10-02 12:35:18.096 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:35:18 np0005466030 nova_compute[230518]: 2025-10-02 12:35:18.097 2 INFO nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Creating image(s)#033[00m
Oct  2 08:35:18 np0005466030 nova_compute[230518]: 2025-10-02 12:35:18.097 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:35:18 np0005466030 nova_compute[230518]: 2025-10-02 12:35:18.098 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Ensure instance console log exists: /var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:35:18 np0005466030 nova_compute[230518]: 2025-10-02 12:35:18.098 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:18 np0005466030 nova_compute[230518]: 2025-10-02 12:35:18.099 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:18 np0005466030 nova_compute[230518]: 2025-10-02 12:35:18.099 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:18 np0005466030 nova_compute[230518]: 2025-10-02 12:35:18.228 2 DEBUG nova.network.neutron [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Successfully created port: be086260-f45b-41eb-ae9f-f401c64b9b22 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:35:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:35:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:35:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:35:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:18.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:35:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:18.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:35:18 np0005466030 nova_compute[230518]: 2025-10-02 12:35:18.876 2 DEBUG nova.network.neutron [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:35:18 np0005466030 nova_compute[230518]: 2025-10-02 12:35:18.909 2 DEBUG oslo_concurrency.lockutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:35:18 np0005466030 nova_compute[230518]: 2025-10-02 12:35:18.910 2 DEBUG nova.objects.instance [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'migration_context' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:35:19 np0005466030 nova_compute[230518]: 2025-10-02 12:35:19.277 2 DEBUG nova.network.neutron [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Successfully updated port: be086260-f45b-41eb-ae9f-f401c64b9b22 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:35:19 np0005466030 nova_compute[230518]: 2025-10-02 12:35:19.279 2 DEBUG nova.compute.manager [req-c8583232-40d9-4acd-9efe-808f2194d9a7 req-6fa40bbd-822e-4c74-801c-02ca8b30817a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received event network-changed-be086260-f45b-41eb-ae9f-f401c64b9b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:19 np0005466030 nova_compute[230518]: 2025-10-02 12:35:19.280 2 DEBUG nova.compute.manager [req-c8583232-40d9-4acd-9efe-808f2194d9a7 req-6fa40bbd-822e-4c74-801c-02ca8b30817a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Refreshing instance network info cache due to event network-changed-be086260-f45b-41eb-ae9f-f401c64b9b22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:35:19 np0005466030 nova_compute[230518]: 2025-10-02 12:35:19.280 2 DEBUG oslo_concurrency.lockutils [req-c8583232-40d9-4acd-9efe-808f2194d9a7 req-6fa40bbd-822e-4c74-801c-02ca8b30817a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:35:19 np0005466030 nova_compute[230518]: 2025-10-02 12:35:19.280 2 DEBUG oslo_concurrency.lockutils [req-c8583232-40d9-4acd-9efe-808f2194d9a7 req-6fa40bbd-822e-4c74-801c-02ca8b30817a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:35:19 np0005466030 nova_compute[230518]: 2025-10-02 12:35:19.280 2 DEBUG nova.network.neutron [req-c8583232-40d9-4acd-9efe-808f2194d9a7 req-6fa40bbd-822e-4c74-801c-02ca8b30817a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Refreshing network info cache for port be086260-f45b-41eb-ae9f-f401c64b9b22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:35:19 np0005466030 nova_compute[230518]: 2025-10-02 12:35:19.282 2 DEBUG nova.storage.rbd_utils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] removing snapshot(nova-resize) on rbd image(3e490470-5e33-4140-95c1-367805364c73_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:35:19 np0005466030 nova_compute[230518]: 2025-10-02 12:35:19.307 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Acquiring lock "refresh_cache-00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:35:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:19 np0005466030 nova_compute[230518]: 2025-10-02 12:35:19.471 2 DEBUG nova.network.neutron [req-c8583232-40d9-4acd-9efe-808f2194d9a7 req-6fa40bbd-822e-4c74-801c-02ca8b30817a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:35:19 np0005466030 nova_compute[230518]: 2025-10-02 12:35:19.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e247 e247: 3 total, 3 up, 3 in
Oct  2 08:35:20 np0005466030 nova_compute[230518]: 2025-10-02 12:35:20.435 2 DEBUG nova.network.neutron [req-c8583232-40d9-4acd-9efe-808f2194d9a7 req-6fa40bbd-822e-4c74-801c-02ca8b30817a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:35:20 np0005466030 nova_compute[230518]: 2025-10-02 12:35:20.450 2 DEBUG oslo_concurrency.lockutils [req-c8583232-40d9-4acd-9efe-808f2194d9a7 req-6fa40bbd-822e-4c74-801c-02ca8b30817a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:35:20 np0005466030 nova_compute[230518]: 2025-10-02 12:35:20.451 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Acquired lock "refresh_cache-00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:35:20 np0005466030 nova_compute[230518]: 2025-10-02 12:35:20.452 2 DEBUG nova.network.neutron [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:35:20 np0005466030 nova_compute[230518]: 2025-10-02 12:35:20.637 2 DEBUG nova.network.neutron [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:35:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:20.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:20.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:20 np0005466030 nova_compute[230518]: 2025-10-02 12:35:20.797 2 DEBUG oslo_concurrency.lockutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:20 np0005466030 nova_compute[230518]: 2025-10-02 12:35:20.798 2 DEBUG oslo_concurrency.lockutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:20 np0005466030 nova_compute[230518]: 2025-10-02 12:35:20.957 2 DEBUG oslo_concurrency.processutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:20 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Oct  2 08:35:20 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:20.983776) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:35:20 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Oct  2 08:35:20 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408520983848, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 931, "num_deletes": 250, "total_data_size": 1754211, "memory_usage": 1783960, "flush_reason": "Manual Compaction"}
Oct  2 08:35:20 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408521149884, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 743955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41482, "largest_seqno": 42408, "table_properties": {"data_size": 740342, "index_size": 1329, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10119, "raw_average_key_size": 21, "raw_value_size": 732439, "raw_average_value_size": 1525, "num_data_blocks": 59, "num_entries": 480, "num_filter_entries": 480, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408456, "oldest_key_time": 1759408456, "file_creation_time": 1759408520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 166140 microseconds, and 3409 cpu microseconds.
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.149926) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 743955 bytes OK
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.149944) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.172144) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.172204) EVENT_LOG_v1 {"time_micros": 1759408521172178, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.172229) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1749424, prev total WAL file size 1749424, number of live WAL files 2.
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.172967) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323533' seq:72057594037927935, type:22 .. '6D6772737461740031353034' seq:0, type:0; will stop at (end)
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(726KB)], [78(11MB)]
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408521173000, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 12693801, "oldest_snapshot_seqno": -1}
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 6657 keys, 9261034 bytes, temperature: kUnknown
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408521387706, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 9261034, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9217804, "index_size": 25454, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 171032, "raw_average_key_size": 25, "raw_value_size": 9099889, "raw_average_value_size": 1366, "num_data_blocks": 1012, "num_entries": 6657, "num_filter_entries": 6657, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759408521, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.387987) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9261034 bytes
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.406504) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 59.1 rd, 43.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 11.4 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(29.5) write-amplify(12.4) OK, records in: 7144, records dropped: 487 output_compression: NoCompression
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.406543) EVENT_LOG_v1 {"time_micros": 1759408521406529, "job": 48, "event": "compaction_finished", "compaction_time_micros": 214791, "compaction_time_cpu_micros": 23170, "output_level": 6, "num_output_files": 1, "total_output_size": 9261034, "num_input_records": 7144, "num_output_records": 6657, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408521406934, "job": 48, "event": "table_file_deletion", "file_number": 80}
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408521409546, "job": 48, "event": "table_file_deletion", "file_number": 78}
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.172903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.409595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.409600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.409602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.409604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.409605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:35:21 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/112017634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.469 2 DEBUG oslo_concurrency.processutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.474 2 DEBUG nova.compute.provider_tree [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.502 2 DEBUG nova.scheduler.client.report [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.554 2 DEBUG oslo_concurrency.lockutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.620 2 DEBUG nova.network.neutron [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Updating instance_info_cache with network_info: [{"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.638 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Releasing lock "refresh_cache-00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.638 2 DEBUG nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Instance network_info: |[{"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.644 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Start _get_guest_xml network_info=[{"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'delete_on_termination': True, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9283cdcd-9233-4064-a250-5d9e278af430', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9283cdcd-9233-4064-a250-5d9e278af430', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f', 'attached_at': '', 'detached_at': '', 'volume_id': '9283cdcd-9233-4064-a250-5d9e278af430', 'serial': '9283cdcd-9233-4064-a250-5d9e278af430'}, 'boot_index': 0, 'attachment_id': '748e3c4e-64ff-474b-9b2d-f528e6710211', 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.650 2 WARNING nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.656 2 DEBUG nova.virt.libvirt.host [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.658 2 DEBUG nova.virt.libvirt.host [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.663 2 DEBUG nova.virt.libvirt.host [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.664 2 DEBUG nova.virt.libvirt.host [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.666 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.666 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.667 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.667 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.668 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.668 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.668 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.669 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.669 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.670 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.670 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.670 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.725 2 DEBUG nova.storage.rbd_utils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] rbd image 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.729 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.753 2 INFO nova.scheduler.client.report [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Deleted allocation for migration cc0c0504-9cb4-4e3c-94ee-f1413511b3ed#033[00m
Oct  2 08:35:21 np0005466030 nova_compute[230518]: 2025-10-02 12:35:21.834 2 DEBUG oslo_concurrency.lockutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 7.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:35:22 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3810058863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.231 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.255 2 DEBUG nova.virt.libvirt.vif [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:35:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-816873398',display_name='tempest-ServersTestBootFromVolume-server-816873398',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-816873398',id=92,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCExruwC5OJykxsUr85pmn2haRmMagZ7/mdNoJJYNggBWSlgN4YogOa/CvEZn0TcoJzbhFnO92dhNEzaWy4hNrf2HiyGznIRspY3dORZWHStfDh0jHNyWV7YXYrY8DMDPw==',key_name='tempest-keypair-581118509',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5c7832aaed82459e908e73712013728c',ramdisk_id='',reservation_id='r-a0kjnjfe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServersTestBootFromVolume-1573186401',owner_user_name='tempest-ServersTestBootFromVolume-1573186401-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='32f902c540fc464cb232c0a6942a5d22',uuid=00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.256 2 DEBUG nova.network.os_vif_util [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Converting VIF {"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.259 2 DEBUG nova.network.os_vif_util [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:aa:f7,bridge_name='br-int',has_traffic_filtering=True,id=be086260-f45b-41eb-ae9f-f401c64b9b22,network=Network(520fbe64-8b09-43ab-a51d-71daa1f67084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe086260-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.261 2 DEBUG nova.objects.instance [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lazy-loading 'pci_devices' on Instance uuid 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.277 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:35:22 np0005466030 nova_compute[230518]:  <uuid>00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f</uuid>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:  <name>instance-0000005c</name>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServersTestBootFromVolume-server-816873398</nova:name>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:35:21</nova:creationTime>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:35:22 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:        <nova:user uuid="32f902c540fc464cb232c0a6942a5d22">tempest-ServersTestBootFromVolume-1573186401-project-member</nova:user>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:        <nova:project uuid="5c7832aaed82459e908e73712013728c">tempest-ServersTestBootFromVolume-1573186401</nova:project>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:        <nova:port uuid="be086260-f45b-41eb-ae9f-f401c64b9b22">
Oct  2 08:35:22 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <entry name="serial">00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f</entry>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <entry name="uuid">00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f</entry>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f_disk.config">
Oct  2 08:35:22 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:35:22 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="volumes/volume-9283cdcd-9233-4064-a250-5d9e278af430">
Oct  2 08:35:22 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:35:22 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <serial>9283cdcd-9233-4064-a250-5d9e278af430</serial>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:57:aa:f7"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <target dev="tapbe086260-f4"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f/console.log" append="off"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:35:22 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:35:22 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:35:22 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:35:22 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.277 2 DEBUG nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Preparing to wait for external event network-vif-plugged-be086260-f45b-41eb-ae9f-f401c64b9b22 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.277 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Acquiring lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.278 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.278 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.278 2 DEBUG nova.virt.libvirt.vif [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:35:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-816873398',display_name='tempest-ServersTestBootFromVolume-server-816873398',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-816873398',id=92,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCExruwC5OJykxsUr85pmn2haRmMagZ7/mdNoJJYNggBWSlgN4YogOa/CvEZn0TcoJzbhFnO92dhNEzaWy4hNrf2HiyGznIRspY3dORZWHStfDh0jHNyWV7YXYrY8DMDPw==',key_name='tempest-keypair-581118509',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5c7832aaed82459e908e73712013728c',ramdisk_id='',reservation_id='r-a0kjnjfe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServersTestBootFromVolume-1573186401',owner_user_name='tempest-ServersTestBootFromVolume-1573186401-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='32f902c540fc464cb232c0a6942a5d22',uuid=00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.279 2 DEBUG nova.network.os_vif_util [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Converting VIF {"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.279 2 DEBUG nova.network.os_vif_util [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:aa:f7,bridge_name='br-int',has_traffic_filtering=True,id=be086260-f45b-41eb-ae9f-f401c64b9b22,network=Network(520fbe64-8b09-43ab-a51d-71daa1f67084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe086260-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.279 2 DEBUG os_vif [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:aa:f7,bridge_name='br-int',has_traffic_filtering=True,id=be086260-f45b-41eb-ae9f-f401c64b9b22,network=Network(520fbe64-8b09-43ab-a51d-71daa1f67084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe086260-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.280 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.281 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.283 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe086260-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.284 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe086260-f4, col_values=(('external_ids', {'iface-id': 'be086260-f45b-41eb-ae9f-f401c64b9b22', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:aa:f7', 'vm-uuid': '00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:22 np0005466030 NetworkManager[44960]: <info>  [1759408522.2877] manager: (tapbe086260-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/182)
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.295 2 INFO os_vif [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:aa:f7,bridge_name='br-int',has_traffic_filtering=True,id=be086260-f45b-41eb-ae9f-f401c64b9b22,network=Network(520fbe64-8b09-43ab-a51d-71daa1f67084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe086260-f4')#033[00m
Oct  2 08:35:22 np0005466030 podman[266177]: 2025-10-02 12:35:22.408072792 +0000 UTC m=+0.074160025 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, 
org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 08:35:22 np0005466030 podman[266175]: 2025-10-02 12:35:22.443742015 +0000 UTC m=+0.110861719 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.466 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.467 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.467 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] No VIF found with MAC fa:16:3e:57:aa:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.468 2 INFO nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Using config drive#033[00m
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.497 2 DEBUG nova.storage.rbd_utils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] rbd image 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:22.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:22.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:22 np0005466030 nova_compute[230518]: 2025-10-02 12:35:22.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:23 np0005466030 nova_compute[230518]: 2025-10-02 12:35:23.691 2 INFO nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Creating config drive at /var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f/disk.config#033[00m
Oct  2 08:35:23 np0005466030 nova_compute[230518]: 2025-10-02 12:35:23.697 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprxj1m0j_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:23 np0005466030 nova_compute[230518]: 2025-10-02 12:35:23.838 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprxj1m0j_" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:23 np0005466030 nova_compute[230518]: 2025-10-02 12:35:23.867 2 DEBUG nova.storage.rbd_utils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] rbd image 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:35:23 np0005466030 nova_compute[230518]: 2025-10-02 12:35:23.871 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f/disk.config 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:24.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:35:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:24.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:35:25 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:25Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:e8:97 10.100.0.6
Oct  2 08:35:25 np0005466030 nova_compute[230518]: 2025-10-02 12:35:25.404 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f/disk.config 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:25 np0005466030 nova_compute[230518]: 2025-10-02 12:35:25.405 2 INFO nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Deleting local config drive /var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f/disk.config because it was imported into RBD.#033[00m
Oct  2 08:35:25 np0005466030 NetworkManager[44960]: <info>  [1759408525.4632] manager: (tapbe086260-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/183)
Oct  2 08:35:25 np0005466030 kernel: tapbe086260-f4: entered promiscuous mode
Oct  2 08:35:25 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:25Z|00390|binding|INFO|Claiming lport be086260-f45b-41eb-ae9f-f401c64b9b22 for this chassis.
Oct  2 08:35:25 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:25Z|00391|binding|INFO|be086260-f45b-41eb-ae9f-f401c64b9b22: Claiming fa:16:3e:57:aa:f7 10.100.0.7
Oct  2 08:35:25 np0005466030 nova_compute[230518]: 2025-10-02 12:35:25.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.479 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:aa:f7 10.100.0.7'], port_security=['fa:16:3e:57:aa:f7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-520fbe64-8b09-43ab-a51d-71daa1f67084', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c7832aaed82459e908e73712013728c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'daeb3977-3f78-41e9-ac0e-5cb1a13474c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30b0a67c-101b-4cc3-b721-1d431685a667, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=be086260-f45b-41eb-ae9f-f401c64b9b22) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.484 138374 INFO neutron.agent.ovn.metadata.agent [-] Port be086260-f45b-41eb-ae9f-f401c64b9b22 in datapath 520fbe64-8b09-43ab-a51d-71daa1f67084 bound to our chassis#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.487 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 520fbe64-8b09-43ab-a51d-71daa1f67084#033[00m
Oct  2 08:35:25 np0005466030 systemd-udevd[266292]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:35:25 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:25Z|00392|binding|INFO|Setting lport be086260-f45b-41eb-ae9f-f401c64b9b22 ovn-installed in OVS
Oct  2 08:35:25 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:25Z|00393|binding|INFO|Setting lport be086260-f45b-41eb-ae9f-f401c64b9b22 up in Southbound
Oct  2 08:35:25 np0005466030 systemd-machined[188247]: New machine qemu-46-instance-0000005c.
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.498 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[91e4e20c-9b62-4b7a-b17d-0c4000befa99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.499 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap520fbe64-81 in ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:35:25 np0005466030 nova_compute[230518]: 2025-10-02 12:35:25.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:25 np0005466030 nova_compute[230518]: 2025-10-02 12:35:25.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.502 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap520fbe64-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.503 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9846fc50-cf3e-4205-af7f-bab6f4554bdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.504 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dc5cd543-1484-4b17-8892-7daf77cfdf21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:25 np0005466030 NetworkManager[44960]: <info>  [1759408525.5115] device (tapbe086260-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:35:25 np0005466030 systemd[1]: Started Virtual Machine qemu-46-instance-0000005c.
Oct  2 08:35:25 np0005466030 NetworkManager[44960]: <info>  [1759408525.5140] device (tapbe086260-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.531 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f7090f-b3e0-45d7-8189-b529c4c9fe3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.557 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5d7a019e-4d39-4876-b693-5f94d61e5afe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.590 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[81978bf7-b3de-4283-a96e-83ee605d1242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.595 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7d1f3c15-2c2d-46f3-b710-81efe93228fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:25 np0005466030 systemd-udevd[266295]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:35:25 np0005466030 NetworkManager[44960]: <info>  [1759408525.5983] manager: (tap520fbe64-80): new Veth device (/org/freedesktop/NetworkManager/Devices/184)
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.633 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[545bc8d9-558d-4c21-9b12-73f9a0a7b28c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.636 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[32f6df5f-58f5-423e-b94c-1b4ffd00a35b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:25 np0005466030 NetworkManager[44960]: <info>  [1759408525.6617] device (tap520fbe64-80): carrier: link connected
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.668 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[df50f8ba-f664-4be5-82e9-7fdf58068a66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.692 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[005b7d2b-6493-4348-91cf-3e3bfa8490a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap520fbe64-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:a1:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638921, 'reachable_time': 19450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266324, 'error': None, 'target': 'ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.708 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1f945342-db62-43aa-8545-a1a472b66e7e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe88:a1ef'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 638921, 'tstamp': 638921}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266325, 'error': None, 'target': 'ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.729 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0f0f8b7e-5e35-4969-9e9a-bb5d631e853d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap520fbe64-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:a1:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638921, 'reachable_time': 19450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266326, 'error': None, 'target': 'ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.759 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c89cede1-8806-4c3b-a5cd-510993417bd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.816 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8bcfcaed-4635-4898-9ae1-10eb8a22e326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.817 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap520fbe64-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.817 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:35:25 np0005466030 kernel: tap520fbe64-80: entered promiscuous mode
Oct  2 08:35:25 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.817 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap520fbe64-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:25 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:35:25 np0005466030 nova_compute[230518]: 2025-10-02 12:35:25.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:25 np0005466030 NetworkManager[44960]: <info>  [1759408525.8201] manager: (tap520fbe64-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Oct  2 08:35:25 np0005466030 nova_compute[230518]: 2025-10-02 12:35:25.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.822 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap520fbe64-80, col_values=(('external_ids', {'iface-id': '16d9692f-0ff0-4286-9c67-9ab6ce457f57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:25 np0005466030 nova_compute[230518]: 2025-10-02 12:35:25.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:25 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:25Z|00394|binding|INFO|Releasing lport 16d9692f-0ff0-4286-9c67-9ab6ce457f57 from this chassis (sb_readonly=0)
Oct  2 08:35:25 np0005466030 nova_compute[230518]: 2025-10-02 12:35:25.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:25 np0005466030 nova_compute[230518]: 2025-10-02 12:35:25.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.843 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/520fbe64-8b09-43ab-a51d-71daa1f67084.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/520fbe64-8b09-43ab-a51d-71daa1f67084.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.847 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7ca291df-8149-4eb7-9735-976abd4aa47d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.848 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-520fbe64-8b09-43ab-a51d-71daa1f67084
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/520fbe64-8b09-43ab-a51d-71daa1f67084.pid.haproxy
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 520fbe64-8b09-43ab-a51d-71daa1f67084
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.848 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084', 'env', 'PROCESS_TAG=haproxy-520fbe64-8b09-43ab-a51d-71daa1f67084', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/520fbe64-8b09-43ab-a51d-71daa1f67084.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:35:25 np0005466030 nova_compute[230518]: 2025-10-02 12:35:25.878 2 DEBUG nova.compute.manager [req-996855da-40df-4a59-b446-e34022571a5e req-07939ae0-cb28-45fe-83d6-9ab9d025bc9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received event network-vif-plugged-be086260-f45b-41eb-ae9f-f401c64b9b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:25 np0005466030 nova_compute[230518]: 2025-10-02 12:35:25.879 2 DEBUG oslo_concurrency.lockutils [req-996855da-40df-4a59-b446-e34022571a5e req-07939ae0-cb28-45fe-83d6-9ab9d025bc9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:25 np0005466030 nova_compute[230518]: 2025-10-02 12:35:25.879 2 DEBUG oslo_concurrency.lockutils [req-996855da-40df-4a59-b446-e34022571a5e req-07939ae0-cb28-45fe-83d6-9ab9d025bc9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:25 np0005466030 nova_compute[230518]: 2025-10-02 12:35:25.879 2 DEBUG oslo_concurrency.lockutils [req-996855da-40df-4a59-b446-e34022571a5e req-07939ae0-cb28-45fe-83d6-9ab9d025bc9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:25 np0005466030 nova_compute[230518]: 2025-10-02 12:35:25.879 2 DEBUG nova.compute.manager [req-996855da-40df-4a59-b446-e34022571a5e req-07939ae0-cb28-45fe-83d6-9ab9d025bc9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Processing event network-vif-plugged-be086260-f45b-41eb-ae9f-f401c64b9b22 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.931 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.932 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.932 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:26 np0005466030 podman[266360]: 2025-10-02 12:35:26.221245128 +0000 UTC m=+0.039372670 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:35:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:35:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:26.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:35:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:26.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:26 np0005466030 podman[266360]: 2025-10-02 12:35:26.975326915 +0000 UTC m=+0.793454367 container create 34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:35:27 np0005466030 systemd[1]: Started libpod-conmon-34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054.scope.
Oct  2 08:35:27 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:35:27 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d831a68f21466534269406494e441ba9768beaf4f9233af088e058997fb9c0d1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:35:27 np0005466030 podman[266360]: 2025-10-02 12:35:27.129577158 +0000 UTC m=+0.947704620 container init 34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:35:27 np0005466030 podman[266360]: 2025-10-02 12:35:27.13569123 +0000 UTC m=+0.953818682 container start 34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 08:35:27 np0005466030 neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084[266393]: [NOTICE]   (266397) : New worker (266414) forked
Oct  2 08:35:27 np0005466030 neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084[266393]: [NOTICE]   (266397) : Loading success.
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.265 2 DEBUG oslo_concurrency.lockutils [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.266 2 DEBUG oslo_concurrency.lockutils [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.266 2 DEBUG oslo_concurrency.lockutils [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.266 2 DEBUG oslo_concurrency.lockutils [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.266 2 DEBUG oslo_concurrency.lockutils [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.267 2 INFO nova.compute.manager [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Terminating instance#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.268 2 DEBUG nova.compute.manager [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:27 np0005466030 kernel: tapa3bd0009-d2 (unregistering): left promiscuous mode
Oct  2 08:35:27 np0005466030 NetworkManager[44960]: <info>  [1759408527.5167] device (tapa3bd0009-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:35:27 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:27Z|00395|binding|INFO|Releasing lport a3bd0009-d256-4937-bdad-606abfd076e0 from this chassis (sb_readonly=0)
Oct  2 08:35:27 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:27Z|00396|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 down in Southbound
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:27 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:27Z|00397|binding|INFO|Removing iface tapa3bd0009-d2 ovn-installed in OVS
Oct  2 08:35:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.550 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '12', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:35:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.551 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis#033[00m
Oct  2 08:35:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.553 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f011efa4-0132-405c-bb45-09d0a9352eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:35:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.554 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[64b656b0-f1bc-4b1b-9e0e-dd9f8be65287]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.554 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace which is not needed anymore#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:27 np0005466030 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Oct  2 08:35:27 np0005466030 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d0000004d.scope: Consumed 13.762s CPU time.
Oct  2 08:35:27 np0005466030 systemd-machined[188247]: Machine qemu-45-instance-0000004d terminated.
Oct  2 08:35:27 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[265902]: [NOTICE]   (265906) : haproxy version is 2.8.14-c23fe91
Oct  2 08:35:27 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[265902]: [NOTICE]   (265906) : path to executable is /usr/sbin/haproxy
Oct  2 08:35:27 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[265902]: [WARNING]  (265906) : Exiting Master process...
Oct  2 08:35:27 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[265902]: [ALERT]    (265906) : Current worker (265908) exited with code 143 (Terminated)
Oct  2 08:35:27 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[265902]: [WARNING]  (265906) : All workers exited. Exiting... (0)
Oct  2 08:35:27 np0005466030 NetworkManager[44960]: <info>  [1759408527.6888] manager: (tapa3bd0009-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/186)
Oct  2 08:35:27 np0005466030 systemd[1]: libpod-895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d.scope: Deactivated successfully.
Oct  2 08:35:27 np0005466030 podman[266453]: 2025-10-02 12:35:27.695097481 +0000 UTC m=+0.042303551 container died 895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.710 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance destroyed successfully.#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.710 2 DEBUG nova.objects.instance [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:35:27 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d-userdata-shm.mount: Deactivated successfully.
Oct  2 08:35:27 np0005466030 systemd[1]: var-lib-containers-storage-overlay-6cd4a7dfe49440492c7b1837cfdf74d652207fb9a54ea584d72d946905023818-merged.mount: Deactivated successfully.
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.736 2 DEBUG nova.virt.libvirt.vif [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:35:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:35:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.737 2 DEBUG nova.network.os_vif_util [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.738 2 DEBUG nova.network.os_vif_util [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.738 2 DEBUG os_vif [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3bd0009-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:27 np0005466030 podman[266453]: 2025-10-02 12:35:27.741568553 +0000 UTC m=+0.088774583 container cleanup 895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.745 2 INFO os_vif [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')#033[00m
Oct  2 08:35:27 np0005466030 systemd[1]: libpod-conmon-895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d.scope: Deactivated successfully.
Oct  2 08:35:27 np0005466030 podman[266496]: 2025-10-02 12:35:27.80279259 +0000 UTC m=+0.037041397 container remove 895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:35:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.808 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6ebba1de-e204-4ca7-bfd5-748d9c8886ee]: (4, ('Thu Oct  2 12:35:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d)\n895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d\nThu Oct  2 12:35:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d)\n895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.810 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9e4deb-ab95-4460-bc56-95271d3a64f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.811 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:27 np0005466030 kernel: tapf011efa4-00: left promiscuous mode
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.815 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408527.8150587, 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.815 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] VM Started (Lifecycle Event)#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.817 2 DEBUG nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.822 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.825 2 INFO nova.virt.libvirt.driver [-] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Instance spawned successfully.#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.826 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.830 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[44cb17a2-0a1a-4ecd-8565-3e71dfa5f43c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.850 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ed24d0f4-1b41-472c-b1c8-b9abf8002c96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.851 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[da1b9b40-1e73-4969-8b5a-6e99d70e770d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.863 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2e62fd57-447a-4c2e-ab83-ecddf62e9d9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637369, 'reachable_time': 22776, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266520, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.865 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:35:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.865 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[d7857945-67ae-4eef-a061-31f1fb790f0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:27 np0005466030 systemd[1]: run-netns-ovnmeta\x2df011efa4\x2d0132\x2d405c\x2dbb45\x2d09d0a9352eff.mount: Deactivated successfully.
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.898 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.901 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.911 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.911 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.912 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.912 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.913 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.913 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.952 2 DEBUG nova.compute.manager [req-02e4a288-567d-4e39-a44a-61f88755bca0 req-a005264f-e67e-4480-907d-08a7112b329f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.952 2 DEBUG oslo_concurrency.lockutils [req-02e4a288-567d-4e39-a44a-61f88755bca0 req-a005264f-e67e-4480-907d-08a7112b329f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.952 2 DEBUG oslo_concurrency.lockutils [req-02e4a288-567d-4e39-a44a-61f88755bca0 req-a005264f-e67e-4480-907d-08a7112b329f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.953 2 DEBUG oslo_concurrency.lockutils [req-02e4a288-567d-4e39-a44a-61f88755bca0 req-a005264f-e67e-4480-907d-08a7112b329f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.953 2 DEBUG nova.compute.manager [req-02e4a288-567d-4e39-a44a-61f88755bca0 req-a005264f-e67e-4480-907d-08a7112b329f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.953 2 DEBUG nova.compute.manager [req-02e4a288-567d-4e39-a44a-61f88755bca0 req-a005264f-e67e-4480-907d-08a7112b329f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.956 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.957 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408527.8152015, 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.957 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:35:27 np0005466030 nova_compute[230518]: 2025-10-02 12:35:27.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.004 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.007 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408527.818995, 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.007 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.023 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.026 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.029 2 INFO nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Took 9.93 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.030 2 DEBUG nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.052 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.093 2 INFO nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Took 14.56 seconds to build instance.
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.117 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.228 2 DEBUG nova.compute.manager [req-12674de4-bcb8-420e-b4bf-2b6e94da39f2 req-38559d64-711c-4274-ada9-a38f52cd0318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received event network-vif-plugged-be086260-f45b-41eb-ae9f-f401c64b9b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.228 2 DEBUG oslo_concurrency.lockutils [req-12674de4-bcb8-420e-b4bf-2b6e94da39f2 req-38559d64-711c-4274-ada9-a38f52cd0318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.229 2 DEBUG oslo_concurrency.lockutils [req-12674de4-bcb8-420e-b4bf-2b6e94da39f2 req-38559d64-711c-4274-ada9-a38f52cd0318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.229 2 DEBUG oslo_concurrency.lockutils [req-12674de4-bcb8-420e-b4bf-2b6e94da39f2 req-38559d64-711c-4274-ada9-a38f52cd0318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.229 2 DEBUG nova.compute.manager [req-12674de4-bcb8-420e-b4bf-2b6e94da39f2 req-38559d64-711c-4274-ada9-a38f52cd0318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] No waiting events found dispatching network-vif-plugged-be086260-f45b-41eb-ae9f-f401c64b9b22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.229 2 WARNING nova.compute.manager [req-12674de4-bcb8-420e-b4bf-2b6e94da39f2 req-38559d64-711c-4274-ada9-a38f52cd0318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received unexpected event network-vif-plugged-be086260-f45b-41eb-ae9f-f401c64b9b22 for instance with vm_state active and task_state None.
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.720 2 INFO nova.virt.libvirt.driver [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Deleting instance files /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73_del
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.721 2 INFO nova.virt.libvirt.driver [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Deletion of /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73_del complete
Oct  2 08:35:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:35:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:28.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:35:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:28.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.823 2 INFO nova.compute.manager [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Took 1.55 seconds to destroy the instance on the hypervisor.
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.824 2 DEBUG oslo.service.loopingcall [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.825 2 DEBUG nova.compute.manager [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:35:28 np0005466030 nova_compute[230518]: 2025-10-02 12:35:28.825 2 DEBUG nova.network.neutron [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:35:29 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:35:29 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:35:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:30 np0005466030 nova_compute[230518]: 2025-10-02 12:35:30.129 2 DEBUG nova.compute.manager [req-9632fb4a-9452-473d-a8b1-0fc68d335894 req-921d2f4c-6d2b-4e90-baec-731cf8e8ff70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:35:30 np0005466030 nova_compute[230518]: 2025-10-02 12:35:30.130 2 DEBUG oslo_concurrency.lockutils [req-9632fb4a-9452-473d-a8b1-0fc68d335894 req-921d2f4c-6d2b-4e90-baec-731cf8e8ff70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:35:30 np0005466030 nova_compute[230518]: 2025-10-02 12:35:30.130 2 DEBUG oslo_concurrency.lockutils [req-9632fb4a-9452-473d-a8b1-0fc68d335894 req-921d2f4c-6d2b-4e90-baec-731cf8e8ff70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:35:30 np0005466030 nova_compute[230518]: 2025-10-02 12:35:30.131 2 DEBUG oslo_concurrency.lockutils [req-9632fb4a-9452-473d-a8b1-0fc68d335894 req-921d2f4c-6d2b-4e90-baec-731cf8e8ff70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:35:30 np0005466030 nova_compute[230518]: 2025-10-02 12:35:30.131 2 DEBUG nova.compute.manager [req-9632fb4a-9452-473d-a8b1-0fc68d335894 req-921d2f4c-6d2b-4e90-baec-731cf8e8ff70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:35:30 np0005466030 nova_compute[230518]: 2025-10-02 12:35:30.131 2 WARNING nova.compute.manager [req-9632fb4a-9452-473d-a8b1-0fc68d335894 req-921d2f4c-6d2b-4e90-baec-731cf8e8ff70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state deleting.
Oct  2 08:35:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:30.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:30.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:30 np0005466030 nova_compute[230518]: 2025-10-02 12:35:30.901 2 DEBUG nova.network.neutron [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:35:30 np0005466030 nova_compute[230518]: 2025-10-02 12:35:30.940 2 INFO nova.compute.manager [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Took 2.12 seconds to deallocate network for instance.
Oct  2 08:35:31 np0005466030 nova_compute[230518]: 2025-10-02 12:35:31.065 2 DEBUG oslo_concurrency.lockutils [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:35:31 np0005466030 nova_compute[230518]: 2025-10-02 12:35:31.066 2 DEBUG oslo_concurrency.lockutils [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:35:31 np0005466030 nova_compute[230518]: 2025-10-02 12:35:31.071 2 DEBUG oslo_concurrency.lockutils [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:35:31 np0005466030 nova_compute[230518]: 2025-10-02 12:35:31.096 2 DEBUG nova.compute.manager [req-32b1b632-6e4a-4ab6-9963-3b152aa52604 req-504434ab-b065-40b8-a2f1-9bd058262ddb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-deleted-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:35:31 np0005466030 nova_compute[230518]: 2025-10-02 12:35:31.144 2 INFO nova.scheduler.client.report [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Deleted allocations for instance 3e490470-5e33-4140-95c1-367805364c73
Oct  2 08:35:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 e248: 3 total, 3 up, 3 in
Oct  2 08:35:31 np0005466030 nova_compute[230518]: 2025-10-02 12:35:31.331 2 DEBUG oslo_concurrency.lockutils [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:35:31 np0005466030 nova_compute[230518]: 2025-10-02 12:35:31.864 2 DEBUG nova.compute.manager [req-f570bc4e-62e6-4746-9eb9-2f8efcf961f8 req-071070e1-188c-472c-8f85-afaa06f594b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received event network-changed-be086260-f45b-41eb-ae9f-f401c64b9b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:35:31 np0005466030 nova_compute[230518]: 2025-10-02 12:35:31.865 2 DEBUG nova.compute.manager [req-f570bc4e-62e6-4746-9eb9-2f8efcf961f8 req-071070e1-188c-472c-8f85-afaa06f594b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Refreshing instance network info cache due to event network-changed-be086260-f45b-41eb-ae9f-f401c64b9b22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:35:31 np0005466030 nova_compute[230518]: 2025-10-02 12:35:31.865 2 DEBUG oslo_concurrency.lockutils [req-f570bc4e-62e6-4746-9eb9-2f8efcf961f8 req-071070e1-188c-472c-8f85-afaa06f594b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:35:31 np0005466030 nova_compute[230518]: 2025-10-02 12:35:31.865 2 DEBUG oslo_concurrency.lockutils [req-f570bc4e-62e6-4746-9eb9-2f8efcf961f8 req-071070e1-188c-472c-8f85-afaa06f594b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:35:31 np0005466030 nova_compute[230518]: 2025-10-02 12:35:31.866 2 DEBUG nova.network.neutron [req-f570bc4e-62e6-4746-9eb9-2f8efcf961f8 req-071070e1-188c-472c-8f85-afaa06f594b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Refreshing network info cache for port be086260-f45b-41eb-ae9f-f401c64b9b22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:35:32 np0005466030 nova_compute[230518]: 2025-10-02 12:35:32.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:35:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:35:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:32.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:35:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:32.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:32 np0005466030 nova_compute[230518]: 2025-10-02 12:35:32.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:35:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:35:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:34.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:35:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:34.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:34 np0005466030 podman[266572]: 2025-10-02 12:35:34.813239184 +0000 UTC m=+0.065783951 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2)
Oct  2 08:35:34 np0005466030 podman[266573]: 2025-10-02 12:35:34.813200383 +0000 UTC m=+0.064446539 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:35:35 np0005466030 nova_compute[230518]: 2025-10-02 12:35:35.874 2 DEBUG nova.network.neutron [req-f570bc4e-62e6-4746-9eb9-2f8efcf961f8 req-071070e1-188c-472c-8f85-afaa06f594b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Updated VIF entry in instance network info cache for port be086260-f45b-41eb-ae9f-f401c64b9b22. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:35:35 np0005466030 nova_compute[230518]: 2025-10-02 12:35:35.874 2 DEBUG nova.network.neutron [req-f570bc4e-62e6-4746-9eb9-2f8efcf961f8 req-071070e1-188c-472c-8f85-afaa06f594b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Updating instance_info_cache with network_info: [{"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:35:35 np0005466030 nova_compute[230518]: 2025-10-02 12:35:35.915 2 DEBUG oslo_concurrency.lockutils [req-f570bc4e-62e6-4746-9eb9-2f8efcf961f8 req-071070e1-188c-472c-8f85-afaa06f594b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:35:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:36Z|00398|binding|INFO|Releasing lport 16d9692f-0ff0-4286-9c67-9ab6ce457f57 from this chassis (sb_readonly=0)
Oct  2 08:35:36 np0005466030 nova_compute[230518]: 2025-10-02 12:35:36.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:35:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:36.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:36.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:37 np0005466030 nova_compute[230518]: 2025-10-02 12:35:37.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:35:37 np0005466030 nova_compute[230518]: 2025-10-02 12:35:37.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:35:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:35:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:38.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:35:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:38.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:35:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 30K writes, 117K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.04 MB/s#012Cumulative WAL: 30K writes, 10K syncs, 2.86 writes per sync, written: 0.11 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9003 writes, 34K keys, 9003 commit groups, 1.0 writes per commit group, ingest: 37.27 MB, 0.06 MB/s#012Interval WAL: 9003 writes, 3403 syncs, 2.65 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 08:35:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:35:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:40.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:35:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:40.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:42 np0005466030 nova_compute[230518]: 2025-10-02 12:35:42.703 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408527.7015674, 3e490470-5e33-4140-95c1-367805364c73 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:35:42 np0005466030 nova_compute[230518]: 2025-10-02 12:35:42.703 2 INFO nova.compute.manager [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Stopped (Lifecycle Event)
Oct  2 08:35:42 np0005466030 nova_compute[230518]: 2025-10-02 12:35:42.726 2 DEBUG nova.compute.manager [None req-447bcf87-76a3-4b45-ada7-5669e7df94ba - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:35:42 np0005466030 nova_compute[230518]: 2025-10-02 12:35:42.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:35:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:42.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:35:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:42.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:35:42 np0005466030 nova_compute[230518]: 2025-10-02 12:35:42.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:35:43 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:43Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:aa:f7 10.100.0.7
Oct  2 08:35:43 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:43Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:aa:f7 10.100.0.7
Oct  2 08:35:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:44 np0005466030 nova_compute[230518]: 2025-10-02 12:35:44.720 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:35:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:44.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:44.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:45 np0005466030 nova_compute[230518]: 2025-10-02 12:35:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:35:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:46.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:46.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:47 np0005466030 nova_compute[230518]: 2025-10-02 12:35:47.067 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:47 np0005466030 nova_compute[230518]: 2025-10-02 12:35:47.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:47 np0005466030 nova_compute[230518]: 2025-10-02 12:35:47.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:47 np0005466030 nova_compute[230518]: 2025-10-02 12:35:47.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:47 np0005466030 nova_compute[230518]: 2025-10-02 12:35:47.095 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:35:47 np0005466030 nova_compute[230518]: 2025-10-02 12:35:47.095 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:35:47 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3582682749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:35:47 np0005466030 nova_compute[230518]: 2025-10-02 12:35:47.503 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:47 np0005466030 nova_compute[230518]: 2025-10-02 12:35:47.670 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:35:47 np0005466030 nova_compute[230518]: 2025-10-02 12:35:47.671 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:35:47 np0005466030 nova_compute[230518]: 2025-10-02 12:35:47.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:47 np0005466030 nova_compute[230518]: 2025-10-02 12:35:47.817 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:35:47 np0005466030 nova_compute[230518]: 2025-10-02 12:35:47.818 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4317MB free_disk=20.95226287841797GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:35:47 np0005466030 nova_compute[230518]: 2025-10-02 12:35:47.819 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:47 np0005466030 nova_compute[230518]: 2025-10-02 12:35:47.819 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:47 np0005466030 nova_compute[230518]: 2025-10-02 12:35:47.905 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:35:47 np0005466030 nova_compute[230518]: 2025-10-02 12:35:47.906 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:35:47 np0005466030 nova_compute[230518]: 2025-10-02 12:35:47.906 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:35:47 np0005466030 nova_compute[230518]: 2025-10-02 12:35:47.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:48 np0005466030 nova_compute[230518]: 2025-10-02 12:35:48.113 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:35:48 np0005466030 nova_compute[230518]: 2025-10-02 12:35:48.215 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:35:48 np0005466030 nova_compute[230518]: 2025-10-02 12:35:48.215 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:35:48 np0005466030 nova_compute[230518]: 2025-10-02 12:35:48.238 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:35:48 np0005466030 nova_compute[230518]: 2025-10-02 12:35:48.283 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:35:48 np0005466030 nova_compute[230518]: 2025-10-02 12:35:48.323 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:35:48 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2847610097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:35:48 np0005466030 nova_compute[230518]: 2025-10-02 12:35:48.755 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:48 np0005466030 nova_compute[230518]: 2025-10-02 12:35:48.761 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:35:48 np0005466030 nova_compute[230518]: 2025-10-02 12:35:48.784 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:35:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:48.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:35:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:48.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:35:48 np0005466030 nova_compute[230518]: 2025-10-02 12:35:48.840 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:35:48 np0005466030 nova_compute[230518]: 2025-10-02 12:35:48.840 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:49 np0005466030 nova_compute[230518]: 2025-10-02 12:35:49.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:49 np0005466030 nova_compute[230518]: 2025-10-02 12:35:49.811 2 DEBUG oslo_concurrency.lockutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Acquiring lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:49 np0005466030 nova_compute[230518]: 2025-10-02 12:35:49.811 2 DEBUG oslo_concurrency.lockutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:49 np0005466030 nova_compute[230518]: 2025-10-02 12:35:49.812 2 DEBUG oslo_concurrency.lockutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Acquiring lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:49 np0005466030 nova_compute[230518]: 2025-10-02 12:35:49.812 2 DEBUG oslo_concurrency.lockutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:49 np0005466030 nova_compute[230518]: 2025-10-02 12:35:49.812 2 DEBUG oslo_concurrency.lockutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:49 np0005466030 nova_compute[230518]: 2025-10-02 12:35:49.813 2 INFO nova.compute.manager [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Terminating instance#033[00m
Oct  2 08:35:49 np0005466030 nova_compute[230518]: 2025-10-02 12:35:49.814 2 DEBUG nova.compute.manager [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:35:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:50.136 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:50.140 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:35:50 np0005466030 kernel: tapbe086260-f4 (unregistering): left promiscuous mode
Oct  2 08:35:50 np0005466030 NetworkManager[44960]: <info>  [1759408550.1534] device (tapbe086260-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:50 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:50Z|00399|binding|INFO|Releasing lport be086260-f45b-41eb-ae9f-f401c64b9b22 from this chassis (sb_readonly=0)
Oct  2 08:35:50 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:50Z|00400|binding|INFO|Setting lport be086260-f45b-41eb-ae9f-f401c64b9b22 down in Southbound
Oct  2 08:35:50 np0005466030 ovn_controller[129257]: 2025-10-02T12:35:50Z|00401|binding|INFO|Removing iface tapbe086260-f4 ovn-installed in OVS
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:50.183 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:aa:f7 10.100.0.7'], port_security=['fa:16:3e:57:aa:f7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-520fbe64-8b09-43ab-a51d-71daa1f67084', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c7832aaed82459e908e73712013728c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'daeb3977-3f78-41e9-ac0e-5cb1a13474c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30b0a67c-101b-4cc3-b721-1d431685a667, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=be086260-f45b-41eb-ae9f-f401c64b9b22) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:35:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:50.185 138374 INFO neutron.agent.ovn.metadata.agent [-] Port be086260-f45b-41eb-ae9f-f401c64b9b22 in datapath 520fbe64-8b09-43ab-a51d-71daa1f67084 unbound from our chassis#033[00m
Oct  2 08:35:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:50.187 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 520fbe64-8b09-43ab-a51d-71daa1f67084, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:50.188 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff1ee42-efe9-475d-a46e-c63287a06f81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:50.189 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084 namespace which is not needed anymore#033[00m
Oct  2 08:35:50 np0005466030 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Oct  2 08:35:50 np0005466030 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000005c.scope: Consumed 14.495s CPU time.
Oct  2 08:35:50 np0005466030 systemd-machined[188247]: Machine qemu-46-instance-0000005c terminated.
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.439 2 DEBUG nova.compute.manager [req-4312bdc8-82b2-4934-9b2c-b3c673ca5605 req-df8ddab3-785f-4dee-86c5-d2eddef9758c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received event network-vif-unplugged-be086260-f45b-41eb-ae9f-f401c64b9b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.439 2 DEBUG oslo_concurrency.lockutils [req-4312bdc8-82b2-4934-9b2c-b3c673ca5605 req-df8ddab3-785f-4dee-86c5-d2eddef9758c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.439 2 DEBUG oslo_concurrency.lockutils [req-4312bdc8-82b2-4934-9b2c-b3c673ca5605 req-df8ddab3-785f-4dee-86c5-d2eddef9758c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.440 2 DEBUG oslo_concurrency.lockutils [req-4312bdc8-82b2-4934-9b2c-b3c673ca5605 req-df8ddab3-785f-4dee-86c5-d2eddef9758c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.440 2 DEBUG nova.compute.manager [req-4312bdc8-82b2-4934-9b2c-b3c673ca5605 req-df8ddab3-785f-4dee-86c5-d2eddef9758c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] No waiting events found dispatching network-vif-unplugged-be086260-f45b-41eb-ae9f-f401c64b9b22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.440 2 DEBUG nova.compute.manager [req-4312bdc8-82b2-4934-9b2c-b3c673ca5605 req-df8ddab3-785f-4dee-86c5-d2eddef9758c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received event network-vif-unplugged-be086260-f45b-41eb-ae9f-f401c64b9b22 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:50 np0005466030 neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084[266393]: [NOTICE]   (266397) : haproxy version is 2.8.14-c23fe91
Oct  2 08:35:50 np0005466030 neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084[266393]: [NOTICE]   (266397) : path to executable is /usr/sbin/haproxy
Oct  2 08:35:50 np0005466030 neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084[266393]: [WARNING]  (266397) : Exiting Master process...
Oct  2 08:35:50 np0005466030 neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084[266393]: [ALERT]    (266397) : Current worker (266414) exited with code 143 (Terminated)
Oct  2 08:35:50 np0005466030 neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084[266393]: [WARNING]  (266397) : All workers exited. Exiting... (0)
Oct  2 08:35:50 np0005466030 systemd[1]: libpod-34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054.scope: Deactivated successfully.
Oct  2 08:35:50 np0005466030 podman[266683]: 2025-10-02 12:35:50.466817913 +0000 UTC m=+0.176728291 container died 34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.468 2 INFO nova.virt.libvirt.driver [-] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Instance destroyed successfully.#033[00m
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.469 2 DEBUG nova.objects.instance [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lazy-loading 'resources' on Instance uuid 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.513 2 DEBUG nova.virt.libvirt.vif [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:35:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-816873398',display_name='tempest-ServersTestBootFromVolume-server-816873398',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-816873398',id=92,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCExruwC5OJykxsUr85pmn2haRmMagZ7/mdNoJJYNggBWSlgN4YogOa/CvEZn0TcoJzbhFnO92dhNEzaWy4hNrf2HiyGznIRspY3dORZWHStfDh0jHNyWV7YXYrY8DMDPw==',key_name='tempest-keypair-581118509',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:35:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5c7832aaed82459e908e73712013728c',ramdisk_id='',reservation_id='r-a0kjnjfe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-ServersTestBootFromVolume-1573186401',owner_user_name='tempest-ServersTestBootFromVolume-1573186401-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:35:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='32f902c540fc464cb232c0a6942a5d22',uuid=00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", 
"type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.514 2 DEBUG nova.network.os_vif_util [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Converting VIF {"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.514 2 DEBUG nova.network.os_vif_util [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:aa:f7,bridge_name='br-int',has_traffic_filtering=True,id=be086260-f45b-41eb-ae9f-f401c64b9b22,network=Network(520fbe64-8b09-43ab-a51d-71daa1f67084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe086260-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.515 2 DEBUG os_vif [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:aa:f7,bridge_name='br-int',has_traffic_filtering=True,id=be086260-f45b-41eb-ae9f-f401c64b9b22,network=Network(520fbe64-8b09-43ab-a51d-71daa1f67084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe086260-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe086260-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:50 np0005466030 nova_compute[230518]: 2025-10-02 12:35:50.521 2 INFO os_vif [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:aa:f7,bridge_name='br-int',has_traffic_filtering=True,id=be086260-f45b-41eb-ae9f-f401c64b9b22,network=Network(520fbe64-8b09-43ab-a51d-71daa1f67084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe086260-f4')#033[00m
Oct  2 08:35:50 np0005466030 systemd[1]: var-lib-containers-storage-overlay-d831a68f21466534269406494e441ba9768beaf4f9233af088e058997fb9c0d1-merged.mount: Deactivated successfully.
Oct  2 08:35:50 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054-userdata-shm.mount: Deactivated successfully.
Oct  2 08:35:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:50.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:50.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:51 np0005466030 podman[266683]: 2025-10-02 12:35:51.044055665 +0000 UTC m=+0.753966053 container cleanup 34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 08:35:51 np0005466030 systemd[1]: libpod-conmon-34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054.scope: Deactivated successfully.
Oct  2 08:35:51 np0005466030 nova_compute[230518]: 2025-10-02 12:35:51.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:51 np0005466030 nova_compute[230518]: 2025-10-02 12:35:51.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:51 np0005466030 nova_compute[230518]: 2025-10-02 12:35:51.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:35:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.142 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:51 np0005466030 podman[266740]: 2025-10-02 12:35:51.27112339 +0000 UTC m=+0.201595995 container remove 34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:35:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.282 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[640427fc-0684-41d8-accb-ce5395cb135c]: (4, ('Thu Oct  2 12:35:50 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084 (34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054)\n34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054\nThu Oct  2 12:35:51 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084 (34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054)\n34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.283 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[662ce29b-e101-4a9e-9691-74e61f9936b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.284 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap520fbe64-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:35:51 np0005466030 kernel: tap520fbe64-80: left promiscuous mode
Oct  2 08:35:51 np0005466030 nova_compute[230518]: 2025-10-02 12:35:51.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:51 np0005466030 nova_compute[230518]: 2025-10-02 12:35:51.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.305 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b198d322-941f-472c-9075-ab308ca42bf2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.336 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[61b824c5-1129-457d-a504-7d484d17fc0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.337 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cd81386c-0075-46d6-9b07-75ac89ca9da1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.355 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[90590b6f-dd26-4b33-aa68-fe4bad76f152]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638914, 'reachable_time': 16215, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266755, 'error': None, 'target': 'ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.358 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:35:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.358 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[b3fd3472-7582-4599-b7da-2cff8300a3e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:35:51 np0005466030 systemd[1]: run-netns-ovnmeta\x2d520fbe64\x2d8b09\x2d43ab\x2da51d\x2d71daa1f67084.mount: Deactivated successfully.
Oct  2 08:35:51 np0005466030 nova_compute[230518]: 2025-10-02 12:35:51.828 2 INFO nova.virt.libvirt.driver [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Deleting instance files /var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f_del#033[00m
Oct  2 08:35:51 np0005466030 nova_compute[230518]: 2025-10-02 12:35:51.829 2 INFO nova.virt.libvirt.driver [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Deletion of /var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f_del complete#033[00m
Oct  2 08:35:51 np0005466030 nova_compute[230518]: 2025-10-02 12:35:51.918 2 INFO nova.compute.manager [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Took 2.10 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:35:51 np0005466030 nova_compute[230518]: 2025-10-02 12:35:51.918 2 DEBUG oslo.service.loopingcall [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:35:51 np0005466030 nova_compute[230518]: 2025-10-02 12:35:51.919 2 DEBUG nova.compute.manager [-] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:35:51 np0005466030 nova_compute[230518]: 2025-10-02 12:35:51.919 2 DEBUG nova.network.neutron [-] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:35:52 np0005466030 nova_compute[230518]: 2025-10-02 12:35:52.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:52 np0005466030 nova_compute[230518]: 2025-10-02 12:35:52.697 2 DEBUG nova.compute.manager [req-b4ccd50a-473f-4b72-971d-fd1d6063d719 req-71b3c712-a50e-4301-ac98-6726b65f9a9b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received event network-vif-plugged-be086260-f45b-41eb-ae9f-f401c64b9b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:52 np0005466030 nova_compute[230518]: 2025-10-02 12:35:52.697 2 DEBUG oslo_concurrency.lockutils [req-b4ccd50a-473f-4b72-971d-fd1d6063d719 req-71b3c712-a50e-4301-ac98-6726b65f9a9b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:52 np0005466030 nova_compute[230518]: 2025-10-02 12:35:52.697 2 DEBUG oslo_concurrency.lockutils [req-b4ccd50a-473f-4b72-971d-fd1d6063d719 req-71b3c712-a50e-4301-ac98-6726b65f9a9b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:52 np0005466030 nova_compute[230518]: 2025-10-02 12:35:52.697 2 DEBUG oslo_concurrency.lockutils [req-b4ccd50a-473f-4b72-971d-fd1d6063d719 req-71b3c712-a50e-4301-ac98-6726b65f9a9b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:52 np0005466030 nova_compute[230518]: 2025-10-02 12:35:52.697 2 DEBUG nova.compute.manager [req-b4ccd50a-473f-4b72-971d-fd1d6063d719 req-71b3c712-a50e-4301-ac98-6726b65f9a9b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] No waiting events found dispatching network-vif-plugged-be086260-f45b-41eb-ae9f-f401c64b9b22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:35:52 np0005466030 nova_compute[230518]: 2025-10-02 12:35:52.698 2 WARNING nova.compute.manager [req-b4ccd50a-473f-4b72-971d-fd1d6063d719 req-71b3c712-a50e-4301-ac98-6726b65f9a9b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received unexpected event network-vif-plugged-be086260-f45b-41eb-ae9f-f401c64b9b22 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:35:52 np0005466030 podman[266758]: 2025-10-02 12:35:52.805981542 +0000 UTC m=+0.048388453 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 08:35:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:52.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:52 np0005466030 podman[266757]: 2025-10-02 12:35:52.831125763 +0000 UTC m=+0.076626042 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:35:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:52.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:52 np0005466030 nova_compute[230518]: 2025-10-02 12:35:52.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:53 np0005466030 nova_compute[230518]: 2025-10-02 12:35:53.083 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:53 np0005466030 nova_compute[230518]: 2025-10-02 12:35:53.309 2 DEBUG nova.network.neutron [-] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:35:53 np0005466030 nova_compute[230518]: 2025-10-02 12:35:53.342 2 INFO nova.compute.manager [-] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Took 1.42 seconds to deallocate network for instance.#033[00m
Oct  2 08:35:53 np0005466030 nova_compute[230518]: 2025-10-02 12:35:53.473 2 DEBUG nova.compute.manager [req-ee2bc6a2-6c2f-49d0-a183-9a2198058600 req-878b718a-2cff-445b-b831-2df277a15276 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received event network-vif-deleted-be086260-f45b-41eb-ae9f-f401c64b9b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:35:53 np0005466030 nova_compute[230518]: 2025-10-02 12:35:53.736 2 INFO nova.compute.manager [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Took 0.39 seconds to detach 1 volumes for instance.#033[00m
Oct  2 08:35:53 np0005466030 nova_compute[230518]: 2025-10-02 12:35:53.738 2 DEBUG nova.compute.manager [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Deleting volume: 9283cdcd-9233-4064-a250-5d9e278af430 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Oct  2 08:35:54 np0005466030 nova_compute[230518]: 2025-10-02 12:35:54.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:54 np0005466030 nova_compute[230518]: 2025-10-02 12:35:54.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:54 np0005466030 nova_compute[230518]: 2025-10-02 12:35:54.444 2 DEBUG oslo_concurrency.lockutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:35:54 np0005466030 nova_compute[230518]: 2025-10-02 12:35:54.445 2 DEBUG oslo_concurrency.lockutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:35:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:54 np0005466030 nova_compute[230518]: 2025-10-02 12:35:54.522 2 DEBUG oslo_concurrency.processutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:35:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:54.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:54.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:35:54 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2667916916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:35:54 np0005466030 nova_compute[230518]: 2025-10-02 12:35:54.935 2 DEBUG oslo_concurrency.processutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:35:54 np0005466030 nova_compute[230518]: 2025-10-02 12:35:54.942 2 DEBUG nova.compute.provider_tree [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:35:54 np0005466030 nova_compute[230518]: 2025-10-02 12:35:54.968 2 DEBUG nova.scheduler.client.report [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:35:55 np0005466030 nova_compute[230518]: 2025-10-02 12:35:55.001 2 DEBUG oslo_concurrency.lockutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:55 np0005466030 nova_compute[230518]: 2025-10-02 12:35:55.038 2 INFO nova.scheduler.client.report [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Deleted allocations for instance 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f#033[00m
Oct  2 08:35:55 np0005466030 nova_compute[230518]: 2025-10-02 12:35:55.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:55 np0005466030 nova_compute[230518]: 2025-10-02 12:35:55.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:35:55 np0005466030 nova_compute[230518]: 2025-10-02 12:35:55.118 2 DEBUG oslo_concurrency.lockutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:35:55 np0005466030 nova_compute[230518]: 2025-10-02 12:35:55.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:35:55 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1054373965' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:35:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:35:55 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1054373965' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:35:56 np0005466030 nova_compute[230518]: 2025-10-02 12:35:56.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:56 np0005466030 nova_compute[230518]: 2025-10-02 12:35:56.700 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:56.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:56.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:57 np0005466030 nova_compute[230518]: 2025-10-02 12:35:57.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:35:58 np0005466030 nova_compute[230518]: 2025-10-02 12:35:58.068 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:35:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:58.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:35:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:35:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:35:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:58.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:35:59 np0005466030 nova_compute[230518]: 2025-10-02 12:35:59.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:35:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:35:59 np0005466030 nova_compute[230518]: 2025-10-02 12:35:59.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:00 np0005466030 nova_compute[230518]: 2025-10-02 12:36:00.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:00 np0005466030 nova_compute[230518]: 2025-10-02 12:36:00.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:00.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:00.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:02.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:02.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:02 np0005466030 nova_compute[230518]: 2025-10-02 12:36:02.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:04.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:04.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:05 np0005466030 nova_compute[230518]: 2025-10-02 12:36:05.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:05 np0005466030 nova_compute[230518]: 2025-10-02 12:36:05.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:36:05 np0005466030 nova_compute[230518]: 2025-10-02 12:36:05.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:36:05 np0005466030 nova_compute[230518]: 2025-10-02 12:36:05.069 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:36:05 np0005466030 nova_compute[230518]: 2025-10-02 12:36:05.466 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408550.4650056, 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:36:05 np0005466030 nova_compute[230518]: 2025-10-02 12:36:05.466 2 INFO nova.compute.manager [-] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:36:05 np0005466030 nova_compute[230518]: 2025-10-02 12:36:05.485 2 DEBUG nova.compute.manager [None req-bf824c5c-990d-4c9e-9199-4d73ef704120 - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:36:05 np0005466030 nova_compute[230518]: 2025-10-02 12:36:05.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:05 np0005466030 podman[266826]: 2025-10-02 12:36:05.799096006 +0000 UTC m=+0.052357979 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 08:36:05 np0005466030 podman[266825]: 2025-10-02 12:36:05.806891641 +0000 UTC m=+0.059619047 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct  2 08:36:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:06.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:06.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:07 np0005466030 nova_compute[230518]: 2025-10-02 12:36:07.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:08.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:08.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:36:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 8306 writes, 42K keys, 8306 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s#012Cumulative WAL: 8306 writes, 8306 syncs, 1.00 writes per sync, written: 0.09 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1587 writes, 8023 keys, 1587 commit groups, 1.0 writes per commit group, ingest: 16.18 MB, 0.03 MB/s#012Interval WAL: 1587 writes, 1587 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     72.4      0.71              0.14        24    0.030       0      0       0.0       0.0#012  L6      1/0    8.83 MB   0.0      0.2     0.1      0.2       0.2      0.0       0.0   4.0    136.3    113.0      1.84              0.56        23    0.080    126K    12K       0.0       0.0#012 Sum      1/0    8.83 MB   0.0      0.2     0.1      0.2       0.3      0.1       0.0   5.0     98.2    101.7      2.56              0.71        47    0.054    126K    12K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.7     78.6     77.3      0.90              0.20        12    0.075     40K   3025       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.1      0.2       0.2      0.0       0.0   0.0    136.3    113.0      1.84              0.56        23    0.080    126K    12K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     72.6      0.71              0.14        23    0.031       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.051, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.25 GB write, 0.09 MB/s write, 0.25 GB read, 0.08 MB/s read, 2.6 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 27.71 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000179 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1611,26.74 MB,8.79489%) FilterBlock(47,363.17 KB,0.116664%) IndexBlock(47,636.23 KB,0.204382%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 08:36:10 np0005466030 nova_compute[230518]: 2025-10-02 12:36:10.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:10.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:10.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:12.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:12.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:12 np0005466030 nova_compute[230518]: 2025-10-02 12:36:12.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:14.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:14.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:15 np0005466030 nova_compute[230518]: 2025-10-02 12:36:15.414 2 DEBUG nova.compute.manager [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Oct  2 08:36:15 np0005466030 nova_compute[230518]: 2025-10-02 12:36:15.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:15 np0005466030 nova_compute[230518]: 2025-10-02 12:36:15.729 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:15 np0005466030 nova_compute[230518]: 2025-10-02 12:36:15.729 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:15 np0005466030 nova_compute[230518]: 2025-10-02 12:36:15.770 2 DEBUG nova.objects.instance [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_requests' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:36:15 np0005466030 nova_compute[230518]: 2025-10-02 12:36:15.817 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:36:15 np0005466030 nova_compute[230518]: 2025-10-02 12:36:15.818 2 INFO nova.compute.claims [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:36:15 np0005466030 nova_compute[230518]: 2025-10-02 12:36:15.818 2 DEBUG nova.objects.instance [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:36:15 np0005466030 nova_compute[230518]: 2025-10-02 12:36:15.847 2 DEBUG nova.objects.instance [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:36:15 np0005466030 nova_compute[230518]: 2025-10-02 12:36:15.912 2 INFO nova.compute.resource_tracker [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating resource usage from migration 375a3f6c-42dd-46ed-8db6-645863f744f6#033[00m
Oct  2 08:36:15 np0005466030 nova_compute[230518]: 2025-10-02 12:36:15.912 2 DEBUG nova.compute.resource_tracker [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Starting to track incoming migration 375a3f6c-42dd-46ed-8db6-645863f744f6 with flavor 475e3257-fad6-494a-9174-56c6af5e0ac9 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Oct  2 08:36:16 np0005466030 nova_compute[230518]: 2025-10-02 12:36:16.037 2 DEBUG oslo_concurrency.processutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:36:16 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3738831801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:36:16 np0005466030 nova_compute[230518]: 2025-10-02 12:36:16.542 2 DEBUG oslo_concurrency.processutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:16 np0005466030 nova_compute[230518]: 2025-10-02 12:36:16.547 2 DEBUG nova.compute.provider_tree [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:36:16 np0005466030 nova_compute[230518]: 2025-10-02 12:36:16.586 2 DEBUG nova.scheduler.client.report [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:36:16 np0005466030 nova_compute[230518]: 2025-10-02 12:36:16.608 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:16 np0005466030 nova_compute[230518]: 2025-10-02 12:36:16.609 2 INFO nova.compute.manager [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Migrating#033[00m
Oct  2 08:36:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:16.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:16.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:17 np0005466030 nova_compute[230518]: 2025-10-02 12:36:17.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:18.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:36:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:18.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:36:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:20 np0005466030 systemd[1]: Created slice User Slice of UID 42436.
Oct  2 08:36:20 np0005466030 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct  2 08:36:20 np0005466030 systemd-logind[795]: New session 59 of user nova.
Oct  2 08:36:20 np0005466030 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct  2 08:36:20 np0005466030 systemd[1]: Starting User Manager for UID 42436...
Oct  2 08:36:20 np0005466030 systemd[266891]: Queued start job for default target Main User Target.
Oct  2 08:36:20 np0005466030 nova_compute[230518]: 2025-10-02 12:36:20.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:20 np0005466030 systemd[266891]: Created slice User Application Slice.
Oct  2 08:36:20 np0005466030 systemd[266891]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  2 08:36:20 np0005466030 systemd[266891]: Started Daily Cleanup of User's Temporary Directories.
Oct  2 08:36:20 np0005466030 systemd[266891]: Reached target Paths.
Oct  2 08:36:20 np0005466030 systemd[266891]: Reached target Timers.
Oct  2 08:36:20 np0005466030 systemd[266891]: Starting D-Bus User Message Bus Socket...
Oct  2 08:36:20 np0005466030 systemd[266891]: Starting Create User's Volatile Files and Directories...
Oct  2 08:36:20 np0005466030 systemd[266891]: Listening on D-Bus User Message Bus Socket.
Oct  2 08:36:20 np0005466030 systemd[266891]: Reached target Sockets.
Oct  2 08:36:20 np0005466030 systemd[266891]: Finished Create User's Volatile Files and Directories.
Oct  2 08:36:20 np0005466030 systemd[266891]: Reached target Basic System.
Oct  2 08:36:20 np0005466030 systemd[266891]: Reached target Main User Target.
Oct  2 08:36:20 np0005466030 systemd[266891]: Startup finished in 173ms.
Oct  2 08:36:20 np0005466030 systemd[1]: Started User Manager for UID 42436.
Oct  2 08:36:20 np0005466030 systemd[1]: Started Session 59 of User nova.
Oct  2 08:36:20 np0005466030 systemd[1]: session-59.scope: Deactivated successfully.
Oct  2 08:36:20 np0005466030 systemd-logind[795]: Session 59 logged out. Waiting for processes to exit.
Oct  2 08:36:20 np0005466030 systemd-logind[795]: Removed session 59.
Oct  2 08:36:20 np0005466030 systemd-logind[795]: New session 61 of user nova.
Oct  2 08:36:20 np0005466030 systemd[1]: Started Session 61 of User nova.
Oct  2 08:36:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:20.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:20.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:20 np0005466030 systemd[1]: session-61.scope: Deactivated successfully.
Oct  2 08:36:20 np0005466030 systemd-logind[795]: Session 61 logged out. Waiting for processes to exit.
Oct  2 08:36:20 np0005466030 systemd-logind[795]: Removed session 61.
Oct  2 08:36:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:22.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:22.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:23 np0005466030 nova_compute[230518]: 2025-10-02 12:36:23.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:23 np0005466030 podman[266914]: 2025-10-02 12:36:23.817864265 +0000 UTC m=+0.063499080 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 08:36:23 np0005466030 podman[266913]: 2025-10-02 12:36:23.864027007 +0000 UTC m=+0.109856788 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:36:24 np0005466030 nova_compute[230518]: 2025-10-02 12:36:24.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:24 np0005466030 nova_compute[230518]: 2025-10-02 12:36:24.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:36:24 np0005466030 nova_compute[230518]: 2025-10-02 12:36:24.075 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:36:24 np0005466030 nova_compute[230518]: 2025-10-02 12:36:24.300 2 DEBUG nova.compute.manager [req-2ad5a787-c261-4b76-9105-be1358d774f2 req-85c9a14e-3cdc-44cc-8564-07af94705be4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:24 np0005466030 nova_compute[230518]: 2025-10-02 12:36:24.301 2 DEBUG oslo_concurrency.lockutils [req-2ad5a787-c261-4b76-9105-be1358d774f2 req-85c9a14e-3cdc-44cc-8564-07af94705be4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:24 np0005466030 nova_compute[230518]: 2025-10-02 12:36:24.301 2 DEBUG oslo_concurrency.lockutils [req-2ad5a787-c261-4b76-9105-be1358d774f2 req-85c9a14e-3cdc-44cc-8564-07af94705be4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:24 np0005466030 nova_compute[230518]: 2025-10-02 12:36:24.301 2 DEBUG oslo_concurrency.lockutils [req-2ad5a787-c261-4b76-9105-be1358d774f2 req-85c9a14e-3cdc-44cc-8564-07af94705be4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:24 np0005466030 nova_compute[230518]: 2025-10-02 12:36:24.301 2 DEBUG nova.compute.manager [req-2ad5a787-c261-4b76-9105-be1358d774f2 req-85c9a14e-3cdc-44cc-8564-07af94705be4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:36:24 np0005466030 nova_compute[230518]: 2025-10-02 12:36:24.302 2 WARNING nova.compute.manager [req-2ad5a787-c261-4b76-9105-be1358d774f2 req-85c9a14e-3cdc-44cc-8564-07af94705be4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state active and task_state resize_migrating.#033[00m
Oct  2 08:36:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:24.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:24.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:25 np0005466030 nova_compute[230518]: 2025-10-02 12:36:25.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:25 np0005466030 nova_compute[230518]: 2025-10-02 12:36:25.741 2 INFO nova.network.neutron [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating port b31bc9d2-5589-460c-9a78-a1d800087345 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Oct  2 08:36:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:25.932 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:25.933 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:25.933 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:26.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:26.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:27 np0005466030 nova_compute[230518]: 2025-10-02 12:36:27.886 2 DEBUG nova.compute.manager [req-c18e6244-f25b-4b3f-9ab3-649505a1591a req-a9e4394c-991a-44cb-91a4-13350fee4c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:27 np0005466030 nova_compute[230518]: 2025-10-02 12:36:27.886 2 DEBUG oslo_concurrency.lockutils [req-c18e6244-f25b-4b3f-9ab3-649505a1591a req-a9e4394c-991a-44cb-91a4-13350fee4c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:27 np0005466030 nova_compute[230518]: 2025-10-02 12:36:27.887 2 DEBUG oslo_concurrency.lockutils [req-c18e6244-f25b-4b3f-9ab3-649505a1591a req-a9e4394c-991a-44cb-91a4-13350fee4c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:27 np0005466030 nova_compute[230518]: 2025-10-02 12:36:27.887 2 DEBUG oslo_concurrency.lockutils [req-c18e6244-f25b-4b3f-9ab3-649505a1591a req-a9e4394c-991a-44cb-91a4-13350fee4c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:27 np0005466030 nova_compute[230518]: 2025-10-02 12:36:27.887 2 DEBUG nova.compute.manager [req-c18e6244-f25b-4b3f-9ab3-649505a1591a req-a9e4394c-991a-44cb-91a4-13350fee4c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:36:27 np0005466030 nova_compute[230518]: 2025-10-02 12:36:27.887 2 WARNING nova.compute.manager [req-c18e6244-f25b-4b3f-9ab3-649505a1591a req-a9e4394c-991a-44cb-91a4-13350fee4c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:36:28 np0005466030 nova_compute[230518]: 2025-10-02 12:36:28.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:28 np0005466030 nova_compute[230518]: 2025-10-02 12:36:28.834 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:36:28 np0005466030 nova_compute[230518]: 2025-10-02 12:36:28.835 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:36:28 np0005466030 nova_compute[230518]: 2025-10-02 12:36:28.835 2 DEBUG nova.network.neutron [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:36:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:28.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:28.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:30 np0005466030 nova_compute[230518]: 2025-10-02 12:36:30.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:30 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:36:30 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:36:30 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:36:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:30.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:36:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:30.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:36:30 np0005466030 systemd[1]: Stopping User Manager for UID 42436...
Oct  2 08:36:30 np0005466030 systemd[266891]: Activating special unit Exit the Session...
Oct  2 08:36:30 np0005466030 systemd[266891]: Stopped target Main User Target.
Oct  2 08:36:30 np0005466030 systemd[266891]: Stopped target Basic System.
Oct  2 08:36:30 np0005466030 systemd[266891]: Stopped target Paths.
Oct  2 08:36:30 np0005466030 systemd[266891]: Stopped target Sockets.
Oct  2 08:36:30 np0005466030 systemd[266891]: Stopped target Timers.
Oct  2 08:36:30 np0005466030 systemd[266891]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  2 08:36:30 np0005466030 systemd[266891]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  2 08:36:30 np0005466030 systemd[266891]: Closed D-Bus User Message Bus Socket.
Oct  2 08:36:30 np0005466030 systemd[266891]: Stopped Create User's Volatile Files and Directories.
Oct  2 08:36:30 np0005466030 systemd[266891]: Removed slice User Application Slice.
Oct  2 08:36:30 np0005466030 systemd[266891]: Reached target Shutdown.
Oct  2 08:36:30 np0005466030 systemd[266891]: Finished Exit the Session.
Oct  2 08:36:30 np0005466030 systemd[266891]: Reached target Exit the Session.
Oct  2 08:36:30 np0005466030 systemd[1]: user@42436.service: Deactivated successfully.
Oct  2 08:36:30 np0005466030 systemd[1]: Stopped User Manager for UID 42436.
Oct  2 08:36:31 np0005466030 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct  2 08:36:31 np0005466030 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct  2 08:36:31 np0005466030 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct  2 08:36:31 np0005466030 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct  2 08:36:31 np0005466030 systemd[1]: Removed slice User Slice of UID 42436.
Oct  2 08:36:31 np0005466030 nova_compute[230518]: 2025-10-02 12:36:31.197 2 DEBUG nova.compute.manager [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-changed-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:31 np0005466030 nova_compute[230518]: 2025-10-02 12:36:31.198 2 DEBUG nova.compute.manager [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Refreshing instance network info cache due to event network-changed-b31bc9d2-5589-460c-9a78-a1d800087345. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:36:31 np0005466030 nova_compute[230518]: 2025-10-02 12:36:31.198 2 DEBUG oslo_concurrency.lockutils [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:36:31 np0005466030 nova_compute[230518]: 2025-10-02 12:36:31.676 2 DEBUG nova.network.neutron [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating instance_info_cache with network_info: [{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:36:31 np0005466030 nova_compute[230518]: 2025-10-02 12:36:31.849 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:36:31 np0005466030 nova_compute[230518]: 2025-10-02 12:36:31.856 2 DEBUG oslo_concurrency.lockutils [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:36:31 np0005466030 nova_compute[230518]: 2025-10-02 12:36:31.856 2 DEBUG nova.network.neutron [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Refreshing network info cache for port b31bc9d2-5589-460c-9a78-a1d800087345 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:36:32 np0005466030 nova_compute[230518]: 2025-10-02 12:36:32.143 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Oct  2 08:36:32 np0005466030 nova_compute[230518]: 2025-10-02 12:36:32.146 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:36:32 np0005466030 nova_compute[230518]: 2025-10-02 12:36:32.146 2 INFO nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Creating image(s)#033[00m
Oct  2 08:36:32 np0005466030 nova_compute[230518]: 2025-10-02 12:36:32.191 2 DEBUG nova.storage.rbd_utils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] creating snapshot(nova-resize) on rbd image(a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:36:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:32.287 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:36:32 np0005466030 nova_compute[230518]: 2025-10-02 12:36:32.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:32.289 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:36:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:36:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:32.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:36:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:32.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:33 np0005466030 nova_compute[230518]: 2025-10-02 12:36:33.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e249 e249: 3 total, 3 up, 3 in
Oct  2 08:36:33 np0005466030 nova_compute[230518]: 2025-10-02 12:36:33.225 2 DEBUG nova.objects.instance [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.407 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.408 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Ensure instance console log exists: /var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.408 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.409 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.409 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.411 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Start _get_guest_xml network_info=[{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:46:e0:75"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.415 2 WARNING nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.425 2 DEBUG nova.virt.libvirt.host [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.426 2 DEBUG nova.virt.libvirt.host [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.430 2 DEBUG nova.virt.libvirt.host [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.431 2 DEBUG nova.virt.libvirt.host [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.432 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.432 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='475e3257-fad6-494a-9174-56c6af5e0ac9',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.432 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.432 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.433 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.433 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.433 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.433 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.433 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.434 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.434 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.434 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.434 2 DEBUG nova.objects.instance [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.487 2 DEBUG oslo_concurrency.processutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:34.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:34.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:36:34 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4108097679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.939 2 DEBUG oslo_concurrency.processutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:34 np0005466030 nova_compute[230518]: 2025-10-02 12:36:34.976 2 DEBUG oslo_concurrency.processutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:36:35 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3385465174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.398 2 DEBUG oslo_concurrency.processutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.402 2 DEBUG nova.virt.libvirt.vif [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1748262975',display_name='tempest-ServerActionsTestJSON-server-1748262975',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1748262975',id=93,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:35:50Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-plwxzt7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:36:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=a7ee799a-27f6-41a6-86dc-694c480fc3a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:46:e0:75"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.402 2 DEBUG nova.network.os_vif_util [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:46:e0:75"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.404 2 DEBUG nova.network.os_vif_util [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.409 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:36:35 np0005466030 nova_compute[230518]:  <uuid>a7ee799a-27f6-41a6-86dc-694c480fc3a1</uuid>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:  <name>instance-0000005d</name>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:  <memory>196608</memory>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerActionsTestJSON-server-1748262975</nova:name>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:36:34</nova:creationTime>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.micro">
Oct  2 08:36:35 np0005466030 nova_compute[230518]:        <nova:memory>192</nova:memory>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:        <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:        <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:        <nova:port uuid="b31bc9d2-5589-460c-9a78-a1d800087345">
Oct  2 08:36:35 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <entry name="serial">a7ee799a-27f6-41a6-86dc-694c480fc3a1</entry>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <entry name="uuid">a7ee799a-27f6-41a6-86dc-694c480fc3a1</entry>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk">
Oct  2 08:36:35 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:36:35 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk.config">
Oct  2 08:36:35 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:36:35 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:46:e0:75"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <target dev="tapb31bc9d2-55"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1/console.log" append="off"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:36:35 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:36:35 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:36:35 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:36:35 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.411 2 DEBUG nova.virt.libvirt.vif [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1748262975',display_name='tempest-ServerActionsTestJSON-server-1748262975',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1748262975',id=93,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:35:50Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-plwxzt7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:36:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=a7ee799a-27f6-41a6-86dc-694c480fc3a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:46:e0:75"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.412 2 DEBUG nova.network.os_vif_util [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:46:e0:75"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.413 2 DEBUG nova.network.os_vif_util [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.414 2 DEBUG os_vif [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.416 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.417 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.421 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb31bc9d2-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.422 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb31bc9d2-55, col_values=(('external_ids', {'iface-id': 'b31bc9d2-5589-460c-9a78-a1d800087345', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:e0:75', 'vm-uuid': 'a7ee799a-27f6-41a6-86dc-694c480fc3a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:35 np0005466030 NetworkManager[44960]: <info>  [1759408595.4261] manager: (tapb31bc9d2-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.434 2 INFO os_vif [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55')#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.519 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.519 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.520 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No VIF found with MAC fa:16:3e:46:e0:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.521 2 INFO nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Using config drive#033[00m
Oct  2 08:36:35 np0005466030 kernel: tapb31bc9d2-55: entered promiscuous mode
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:35 np0005466030 NetworkManager[44960]: <info>  [1759408595.6329] manager: (tapb31bc9d2-55): new Tun device (/org/freedesktop/NetworkManager/Devices/188)
Oct  2 08:36:35 np0005466030 ovn_controller[129257]: 2025-10-02T12:36:35Z|00402|binding|INFO|Claiming lport b31bc9d2-5589-460c-9a78-a1d800087345 for this chassis.
Oct  2 08:36:35 np0005466030 ovn_controller[129257]: 2025-10-02T12:36:35Z|00403|binding|INFO|b31bc9d2-5589-460c-9a78-a1d800087345: Claiming fa:16:3e:46:e0:75 10.100.0.12
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:35 np0005466030 NetworkManager[44960]: <info>  [1759408595.6484] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Oct  2 08:36:35 np0005466030 NetworkManager[44960]: <info>  [1759408595.6490] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Oct  2 08:36:35 np0005466030 systemd-udevd[267257]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.664 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:e0:75 10.100.0.12'], port_security=['fa:16:3e:46:e0:75 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a7ee799a-27f6-41a6-86dc-694c480fc3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=b31bc9d2-5589-460c-9a78-a1d800087345) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.665 138374 INFO neutron.agent.ovn.metadata.agent [-] Port b31bc9d2-5589-460c-9a78-a1d800087345 in datapath f011efa4-0132-405c-bb45-09d0a9352eff bound to our chassis#033[00m
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.667 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff#033[00m
Oct  2 08:36:35 np0005466030 systemd-machined[188247]: New machine qemu-47-instance-0000005d.
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.677 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1cc3d769-f53c-4ccb-b2c7-15d2509d41f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.679 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf011efa4-01 in ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.680 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf011efa4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.681 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2cbea2b9-c4d0-4254-9bf7-f54cb5a9d390]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.681 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d474fcfb-df02-46e4-8a6e-b1bce15ab6d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:35 np0005466030 NetworkManager[44960]: <info>  [1759408595.6872] device (tapb31bc9d2-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:36:35 np0005466030 NetworkManager[44960]: <info>  [1759408595.6879] device (tapb31bc9d2-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.695 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[23147e73-f273-45dd-81c4-71b05f927c3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.722 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[05ced32e-c8dd-43bb-9839-434d5dc62ff0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:35 np0005466030 systemd[1]: Started Virtual Machine qemu-47-instance-0000005d.
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.749 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2006e9b5-b96f-435a-8e2d-813b2f7b509a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.755 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a1c50a0c-c9e6-4eae-870b-5e342ba97b2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:35 np0005466030 NetworkManager[44960]: <info>  [1759408595.7562] manager: (tapf011efa4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/191)
Oct  2 08:36:35 np0005466030 systemd-udevd[267262]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.785 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[fbcc86a6-ed9d-48a9-b3cb-e20296ffa57c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.787 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[59a651d7-067d-494f-860c-c62e1d37f29d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:35 np0005466030 NetworkManager[44960]: <info>  [1759408595.8062] device (tapf011efa4-00): carrier: link connected
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.811 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0383039b-e35f-4af9-9fc5-94ed07d988cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.828 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cfab19ba-5c9c-4353-be45-a4eab07149a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645936, 'reachable_time': 44253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267291, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.841 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1374be-3291-4552-a988-8f32eadfe9cb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:1a7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 645936, 'tstamp': 645936}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267292, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.855 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[943c78dc-ae88-4e11-902e-2f69e1fb48eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645936, 'reachable_time': 44253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267293, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:35 np0005466030 ovn_controller[129257]: 2025-10-02T12:36:35Z|00404|binding|INFO|Setting lport b31bc9d2-5589-460c-9a78-a1d800087345 ovn-installed in OVS
Oct  2 08:36:35 np0005466030 ovn_controller[129257]: 2025-10-02T12:36:35Z|00405|binding|INFO|Setting lport b31bc9d2-5589-460c-9a78-a1d800087345 up in Southbound
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.886 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c958e2-e2ee-4871-a6b3-ad4eab2ced2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.948 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bc1be787-f722-4342-b378-9fba7b726183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.950 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.950 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.951 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:35 np0005466030 NetworkManager[44960]: <info>  [1759408595.9795] manager: (tapf011efa4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Oct  2 08:36:35 np0005466030 kernel: tapf011efa4-00: entered promiscuous mode
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.985 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:35 np0005466030 nova_compute[230518]: 2025-10-02 12:36:35.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:35 np0005466030 ovn_controller[129257]: 2025-10-02T12:36:35Z|00406|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct  2 08:36:36 np0005466030 nova_compute[230518]: 2025-10-02 12:36:36.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:36.017 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:36.018 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[49556ded-19ae-4b07-be48-658def98cbd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:36.019 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-f011efa4-0132-405c-bb45-09d0a9352eff
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID f011efa4-0132-405c-bb45-09d0a9352eff
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:36:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:36.020 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'env', 'PROCESS_TAG=haproxy-f011efa4-0132-405c-bb45-09d0a9352eff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f011efa4-0132-405c-bb45-09d0a9352eff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:36:36 np0005466030 podman[267325]: 2025-10-02 12:36:36.382500154 +0000 UTC m=+0.062541488 container create 8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:36:36 np0005466030 podman[267325]: 2025-10-02 12:36:36.343963651 +0000 UTC m=+0.024004985 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:36:36 np0005466030 systemd[1]: Started libpod-conmon-8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3.scope.
Oct  2 08:36:36 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:36:36 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3df5e31392e82218d0f29695cbcb785ef01003b7693d75e9e22f28e44e1b45cf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:36:36 np0005466030 podman[267325]: 2025-10-02 12:36:36.487182047 +0000 UTC m=+0.167223351 container init 8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:36:36 np0005466030 podman[267325]: 2025-10-02 12:36:36.492017649 +0000 UTC m=+0.172058913 container start 8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:36:36 np0005466030 podman[267338]: 2025-10-02 12:36:36.513126344 +0000 UTC m=+0.078399288 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:36:36 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[267352]: [NOTICE]   (267381) : New worker (267385) forked
Oct  2 08:36:36 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[267352]: [NOTICE]   (267381) : Loading success.
Oct  2 08:36:36 np0005466030 podman[267339]: 2025-10-02 12:36:36.526639159 +0000 UTC m=+0.091631964 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 08:36:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:36.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:36.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:37 np0005466030 nova_compute[230518]: 2025-10-02 12:36:37.518 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408597.5181305, a7ee799a-27f6-41a6-86dc-694c480fc3a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:36:37 np0005466030 nova_compute[230518]: 2025-10-02 12:36:37.519 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:36:37 np0005466030 nova_compute[230518]: 2025-10-02 12:36:37.521 2 DEBUG nova.compute.manager [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:36:37 np0005466030 nova_compute[230518]: 2025-10-02 12:36:37.525 2 INFO nova.virt.libvirt.driver [-] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Instance running successfully.#033[00m
Oct  2 08:36:37 np0005466030 virtqemud[230067]: argument unsupported: QEMU guest agent is not configured
Oct  2 08:36:37 np0005466030 nova_compute[230518]: 2025-10-02 12:36:37.527 2 DEBUG nova.virt.libvirt.guest [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 08:36:37 np0005466030 nova_compute[230518]: 2025-10-02 12:36:37.528 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Oct  2 08:36:38 np0005466030 nova_compute[230518]: 2025-10-02 12:36:38.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:38 np0005466030 nova_compute[230518]: 2025-10-02 12:36:38.781 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:36:38 np0005466030 nova_compute[230518]: 2025-10-02 12:36:38.785 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:36:38 np0005466030 nova_compute[230518]: 2025-10-02 12:36:38.824 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Oct  2 08:36:38 np0005466030 nova_compute[230518]: 2025-10-02 12:36:38.824 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408597.5183616, a7ee799a-27f6-41a6-86dc-694c480fc3a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:36:38 np0005466030 nova_compute[230518]: 2025-10-02 12:36:38.825 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] VM Started (Lifecycle Event)#033[00m
Oct  2 08:36:38 np0005466030 nova_compute[230518]: 2025-10-02 12:36:38.866 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:36:38 np0005466030 nova_compute[230518]: 2025-10-02 12:36:38.870 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:36:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:38.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:38.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:38 np0005466030 nova_compute[230518]: 2025-10-02 12:36:38.945 2 DEBUG nova.network.neutron [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updated VIF entry in instance network info cache for port b31bc9d2-5589-460c-9a78-a1d800087345. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:36:38 np0005466030 nova_compute[230518]: 2025-10-02 12:36:38.946 2 DEBUG nova.network.neutron [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating instance_info_cache with network_info: [{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:36:38 np0005466030 nova_compute[230518]: 2025-10-02 12:36:38.970 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Oct  2 08:36:38 np0005466030 nova_compute[230518]: 2025-10-02 12:36:38.992 2 DEBUG oslo_concurrency.lockutils [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:36:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:36:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:36:39 np0005466030 nova_compute[230518]: 2025-10-02 12:36:39.990 2 DEBUG nova.compute.manager [req-f41409dc-0cc3-4272-9e7c-27008507a91e req-0561238c-d556-4ad5-8141-cb20423763da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:39 np0005466030 nova_compute[230518]: 2025-10-02 12:36:39.991 2 DEBUG oslo_concurrency.lockutils [req-f41409dc-0cc3-4272-9e7c-27008507a91e req-0561238c-d556-4ad5-8141-cb20423763da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:39 np0005466030 nova_compute[230518]: 2025-10-02 12:36:39.991 2 DEBUG oslo_concurrency.lockutils [req-f41409dc-0cc3-4272-9e7c-27008507a91e req-0561238c-d556-4ad5-8141-cb20423763da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:39 np0005466030 nova_compute[230518]: 2025-10-02 12:36:39.992 2 DEBUG oslo_concurrency.lockutils [req-f41409dc-0cc3-4272-9e7c-27008507a91e req-0561238c-d556-4ad5-8141-cb20423763da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:39 np0005466030 nova_compute[230518]: 2025-10-02 12:36:39.992 2 DEBUG nova.compute.manager [req-f41409dc-0cc3-4272-9e7c-27008507a91e req-0561238c-d556-4ad5-8141-cb20423763da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:36:39 np0005466030 nova_compute[230518]: 2025-10-02 12:36:39.992 2 WARNING nova.compute.manager [req-f41409dc-0cc3-4272-9e7c-27008507a91e req-0561238c-d556-4ad5-8141-cb20423763da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state resized and task_state None.#033[00m
Oct  2 08:36:40 np0005466030 nova_compute[230518]: 2025-10-02 12:36:40.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:40.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:40.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:41.292 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:42 np0005466030 nova_compute[230518]: 2025-10-02 12:36:42.223 2 DEBUG nova.compute.manager [req-b9142571-47d4-4144-8939-d699324ded3b req-6332066e-5187-405e-a376-6174ae533f48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:42 np0005466030 nova_compute[230518]: 2025-10-02 12:36:42.223 2 DEBUG oslo_concurrency.lockutils [req-b9142571-47d4-4144-8939-d699324ded3b req-6332066e-5187-405e-a376-6174ae533f48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:42 np0005466030 nova_compute[230518]: 2025-10-02 12:36:42.224 2 DEBUG oslo_concurrency.lockutils [req-b9142571-47d4-4144-8939-d699324ded3b req-6332066e-5187-405e-a376-6174ae533f48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:42 np0005466030 nova_compute[230518]: 2025-10-02 12:36:42.224 2 DEBUG oslo_concurrency.lockutils [req-b9142571-47d4-4144-8939-d699324ded3b req-6332066e-5187-405e-a376-6174ae533f48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:42 np0005466030 nova_compute[230518]: 2025-10-02 12:36:42.224 2 DEBUG nova.compute.manager [req-b9142571-47d4-4144-8939-d699324ded3b req-6332066e-5187-405e-a376-6174ae533f48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:36:42 np0005466030 nova_compute[230518]: 2025-10-02 12:36:42.224 2 WARNING nova.compute.manager [req-b9142571-47d4-4144-8939-d699324ded3b req-6332066e-5187-405e-a376-6174ae533f48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state resized and task_state None.#033[00m
Oct  2 08:36:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:42.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:36:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:42.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:36:43 np0005466030 nova_compute[230518]: 2025-10-02 12:36:43.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:43 np0005466030 nova_compute[230518]: 2025-10-02 12:36:43.717 2 DEBUG nova.network.neutron [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Port b31bc9d2-5589-460c-9a78-a1d800087345 binding to destination host compute-1.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171#033[00m
Oct  2 08:36:43 np0005466030 nova_compute[230518]: 2025-10-02 12:36:43.717 2 DEBUG oslo_concurrency.lockutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:36:43 np0005466030 nova_compute[230518]: 2025-10-02 12:36:43.718 2 DEBUG oslo_concurrency.lockutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:36:43 np0005466030 nova_compute[230518]: 2025-10-02 12:36:43.718 2 DEBUG nova.network.neutron [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:36:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:36:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:44.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:36:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:36:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:44.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:36:45 np0005466030 nova_compute[230518]: 2025-10-02 12:36:45.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:46 np0005466030 nova_compute[230518]: 2025-10-02 12:36:46.666 2 DEBUG nova.network.neutron [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating instance_info_cache with network_info: [{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:36:46 np0005466030 nova_compute[230518]: 2025-10-02 12:36:46.854 2 DEBUG oslo_concurrency.lockutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:36:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:46.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:36:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:46.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:36:46 np0005466030 kernel: tapb31bc9d2-55 (unregistering): left promiscuous mode
Oct  2 08:36:46 np0005466030 NetworkManager[44960]: <info>  [1759408606.9668] device (tapb31bc9d2-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:36:46 np0005466030 ovn_controller[129257]: 2025-10-02T12:36:46Z|00407|binding|INFO|Releasing lport b31bc9d2-5589-460c-9a78-a1d800087345 from this chassis (sb_readonly=0)
Oct  2 08:36:46 np0005466030 ovn_controller[129257]: 2025-10-02T12:36:46Z|00408|binding|INFO|Setting lport b31bc9d2-5589-460c-9a78-a1d800087345 down in Southbound
Oct  2 08:36:46 np0005466030 nova_compute[230518]: 2025-10-02 12:36:46.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:46 np0005466030 ovn_controller[129257]: 2025-10-02T12:36:46Z|00409|binding|INFO|Removing iface tapb31bc9d2-55 ovn-installed in OVS
Oct  2 08:36:46 np0005466030 nova_compute[230518]: 2025-10-02 12:36:46.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:46 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:46.987 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:e0:75 10.100.0.12'], port_security=['fa:16:3e:46:e0:75 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a7ee799a-27f6-41a6-86dc-694c480fc3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '8', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=b31bc9d2-5589-460c-9a78-a1d800087345) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:36:46 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:46.988 138374 INFO neutron.agent.ovn.metadata.agent [-] Port b31bc9d2-5589-460c-9a78-a1d800087345 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis#033[00m
Oct  2 08:36:46 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:46.990 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f011efa4-0132-405c-bb45-09d0a9352eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:36:46 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:46.994 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f72824-45aa-455c-adeb-f80e7667fbb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:46 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:46.995 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace which is not needed anymore#033[00m
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:46.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:47 np0005466030 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Oct  2 08:36:47 np0005466030 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000005d.scope: Consumed 11.189s CPU time.
Oct  2 08:36:47 np0005466030 systemd-machined[188247]: Machine qemu-47-instance-0000005d terminated.
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:47 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[267352]: [NOTICE]   (267381) : haproxy version is 2.8.14-c23fe91
Oct  2 08:36:47 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[267352]: [NOTICE]   (267381) : path to executable is /usr/sbin/haproxy
Oct  2 08:36:47 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[267352]: [WARNING]  (267381) : Exiting Master process...
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:47 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[267352]: [ALERT]    (267381) : Current worker (267385) exited with code 143 (Terminated)
Oct  2 08:36:47 np0005466030 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[267352]: [WARNING]  (267381) : All workers exited. Exiting... (0)
Oct  2 08:36:47 np0005466030 systemd[1]: libpod-8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3.scope: Deactivated successfully.
Oct  2 08:36:47 np0005466030 conmon[267352]: conmon 8d5a5a907005fc67cc33 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3.scope/container/memory.events
Oct  2 08:36:47 np0005466030 podman[267511]: 2025-10-02 12:36:47.133671045 +0000 UTC m=+0.052142091 container died 8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.147 2 INFO nova.virt.libvirt.driver [-] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Instance destroyed successfully.#033[00m
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.148 2 DEBUG nova.objects.instance [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:36:47 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3-userdata-shm.mount: Deactivated successfully.
Oct  2 08:36:47 np0005466030 systemd[1]: var-lib-containers-storage-overlay-3df5e31392e82218d0f29695cbcb785ef01003b7693d75e9e22f28e44e1b45cf-merged.mount: Deactivated successfully.
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.171 2 DEBUG nova.virt.libvirt.vif [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1748262975',display_name='tempest-ServerActionsTestJSON-server-1748262975',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1748262975',id=93,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:36:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-plwxzt7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:36:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=a7ee799a-27f6-41a6-86dc-694c480fc3a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.171 2 DEBUG nova.network.os_vif_util [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.173 2 DEBUG nova.network.os_vif_util [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.173 2 DEBUG os_vif [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.175 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb31bc9d2-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.180 2 INFO os_vif [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55')#033[00m
Oct  2 08:36:47 np0005466030 podman[267511]: 2025-10-02 12:36:47.18148721 +0000 UTC m=+0.099958246 container cleanup 8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.184 2 DEBUG oslo_concurrency.lockutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.184 2 DEBUG oslo_concurrency.lockutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:47 np0005466030 systemd[1]: libpod-conmon-8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3.scope: Deactivated successfully.
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.228 2 DEBUG nova.objects.instance [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'migration_context' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:36:47 np0005466030 podman[267549]: 2025-10-02 12:36:47.249730507 +0000 UTC m=+0.046205445 container remove 8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.253 2 DEBUG nova.compute.manager [req-f98d5c6d-82ca-46fd-a88f-1559ee628b8b req-5a75bda4-ed9a-4c5f-a8bc-b729df091a47 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.253 2 DEBUG oslo_concurrency.lockutils [req-f98d5c6d-82ca-46fd-a88f-1559ee628b8b req-5a75bda4-ed9a-4c5f-a8bc-b729df091a47 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.253 2 DEBUG oslo_concurrency.lockutils [req-f98d5c6d-82ca-46fd-a88f-1559ee628b8b req-5a75bda4-ed9a-4c5f-a8bc-b729df091a47 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.254 2 DEBUG oslo_concurrency.lockutils [req-f98d5c6d-82ca-46fd-a88f-1559ee628b8b req-5a75bda4-ed9a-4c5f-a8bc-b729df091a47 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.254 2 DEBUG nova.compute.manager [req-f98d5c6d-82ca-46fd-a88f-1559ee628b8b req-5a75bda4-ed9a-4c5f-a8bc-b729df091a47 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.254 2 WARNING nova.compute.manager [req-f98d5c6d-82ca-46fd-a88f-1559ee628b8b req-5a75bda4-ed9a-4c5f-a8bc-b729df091a47 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state resized and task_state resize_reverting.#033[00m
Oct  2 08:36:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:47.257 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d3066fd6-6a79-4d15-8a7f-79583da5cb21]: (4, ('Thu Oct  2 12:36:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3)\n8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3\nThu Oct  2 12:36:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3)\n8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:47.259 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b1fa009f-a048-41e1-8eb3-6fa4b234cd58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:47.260 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:36:47 np0005466030 kernel: tapf011efa4-00: left promiscuous mode
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:47.280 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee03e1d-9744-4751-adf0-86b92464f537]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:47.313 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[912fe4cb-ae10-4dc1-a1e0-94cc412c4fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:47.314 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7be7ab24-80c9-4c1a-8821-4609448d9be7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.330 2 DEBUG oslo_concurrency.processutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:47.335 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[aaadb1ce-e8d5-428b-bed7-9ff68213976d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645930, 'reachable_time': 29469, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267564, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:47 np0005466030 systemd[1]: run-netns-ovnmeta\x2df011efa4\x2d0132\x2d405c\x2dbb45\x2d09d0a9352eff.mount: Deactivated successfully.
Oct  2 08:36:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:47.337 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:36:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:36:47.338 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[c8965c42-7631-487c-97ee-6503e384be80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:36:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:36:47 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1057747748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.755 2 DEBUG oslo_concurrency.processutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:47 np0005466030 nova_compute[230518]: 2025-10-02 12:36:47.761 2 DEBUG nova.compute.provider_tree [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:36:48 np0005466030 nova_compute[230518]: 2025-10-02 12:36:48.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:48 np0005466030 nova_compute[230518]: 2025-10-02 12:36:48.086 2 DEBUG nova.scheduler.client.report [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:36:48 np0005466030 nova_compute[230518]: 2025-10-02 12:36:48.188 2 DEBUG oslo_concurrency.lockutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 1.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:48.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:48.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:49 np0005466030 nova_compute[230518]: 2025-10-02 12:36:49.075 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:49 np0005466030 nova_compute[230518]: 2025-10-02 12:36:49.111 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:49 np0005466030 nova_compute[230518]: 2025-10-02 12:36:49.112 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:49 np0005466030 nova_compute[230518]: 2025-10-02 12:36:49.112 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:49 np0005466030 nova_compute[230518]: 2025-10-02 12:36:49.112 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:36:49 np0005466030 nova_compute[230518]: 2025-10-02 12:36:49.112 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:49 np0005466030 nova_compute[230518]: 2025-10-02 12:36:49.392 2 DEBUG nova.compute.manager [req-d0ff463f-be20-47e6-b192-dbee2d74d17e req-759e5da2-2161-4cd2-a2f7-e51a625f2912 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:49 np0005466030 nova_compute[230518]: 2025-10-02 12:36:49.393 2 DEBUG oslo_concurrency.lockutils [req-d0ff463f-be20-47e6-b192-dbee2d74d17e req-759e5da2-2161-4cd2-a2f7-e51a625f2912 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:49 np0005466030 nova_compute[230518]: 2025-10-02 12:36:49.393 2 DEBUG oslo_concurrency.lockutils [req-d0ff463f-be20-47e6-b192-dbee2d74d17e req-759e5da2-2161-4cd2-a2f7-e51a625f2912 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:49 np0005466030 nova_compute[230518]: 2025-10-02 12:36:49.394 2 DEBUG oslo_concurrency.lockutils [req-d0ff463f-be20-47e6-b192-dbee2d74d17e req-759e5da2-2161-4cd2-a2f7-e51a625f2912 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:49 np0005466030 nova_compute[230518]: 2025-10-02 12:36:49.394 2 DEBUG nova.compute.manager [req-d0ff463f-be20-47e6-b192-dbee2d74d17e req-759e5da2-2161-4cd2-a2f7-e51a625f2912 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:36:49 np0005466030 nova_compute[230518]: 2025-10-02 12:36:49.394 2 WARNING nova.compute.manager [req-d0ff463f-be20-47e6-b192-dbee2d74d17e req-759e5da2-2161-4cd2-a2f7-e51a625f2912 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state resized and task_state resize_reverting.#033[00m
Oct  2 08:36:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:36:49 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4257555595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:36:49 np0005466030 nova_compute[230518]: 2025-10-02 12:36:49.548 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:49 np0005466030 nova_compute[230518]: 2025-10-02 12:36:49.733 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:36:49 np0005466030 nova_compute[230518]: 2025-10-02 12:36:49.734 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4490MB free_disk=20.921703338623047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:36:49 np0005466030 nova_compute[230518]: 2025-10-02 12:36:49.735 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:36:49 np0005466030 nova_compute[230518]: 2025-10-02 12:36:49.735 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:36:50 np0005466030 nova_compute[230518]: 2025-10-02 12:36:50.291 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:36:50 np0005466030 nova_compute[230518]: 2025-10-02 12:36:50.291 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:36:50 np0005466030 nova_compute[230518]: 2025-10-02 12:36:50.327 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:36:50 np0005466030 nova_compute[230518]: 2025-10-02 12:36:50.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:50 np0005466030 nova_compute[230518]: 2025-10-02 12:36:50.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:36:50 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3035015947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:36:50 np0005466030 nova_compute[230518]: 2025-10-02 12:36:50.818 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:36:50 np0005466030 nova_compute[230518]: 2025-10-02 12:36:50.823 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:36:50 np0005466030 nova_compute[230518]: 2025-10-02 12:36:50.883 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:36:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:36:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:50.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:36:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:50.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:50 np0005466030 nova_compute[230518]: 2025-10-02 12:36:50.933 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:36:50 np0005466030 nova_compute[230518]: 2025-10-02 12:36:50.934 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:36:52 np0005466030 nova_compute[230518]: 2025-10-02 12:36:52.141 2 DEBUG nova.compute.manager [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-changed-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:36:52 np0005466030 nova_compute[230518]: 2025-10-02 12:36:52.142 2 DEBUG nova.compute.manager [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Refreshing instance network info cache due to event network-changed-b31bc9d2-5589-460c-9a78-a1d800087345. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:36:52 np0005466030 nova_compute[230518]: 2025-10-02 12:36:52.142 2 DEBUG oslo_concurrency.lockutils [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:36:52 np0005466030 nova_compute[230518]: 2025-10-02 12:36:52.142 2 DEBUG oslo_concurrency.lockutils [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:36:52 np0005466030 nova_compute[230518]: 2025-10-02 12:36:52.142 2 DEBUG nova.network.neutron [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Refreshing network info cache for port b31bc9d2-5589-460c-9a78-a1d800087345 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:36:52 np0005466030 nova_compute[230518]: 2025-10-02 12:36:52.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:36:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:52.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:36:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:52.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:53 np0005466030 nova_compute[230518]: 2025-10-02 12:36:53.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:53 np0005466030 nova_compute[230518]: 2025-10-02 12:36:53.909 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:36:54 np0005466030 podman[267634]: 2025-10-02 12:36:54.826436357 +0000 UTC m=+0.067808375 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 08:36:54 np0005466030 nova_compute[230518]: 2025-10-02 12:36:54.876 2 DEBUG nova.network.neutron [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updated VIF entry in instance network info cache for port b31bc9d2-5589-460c-9a78-a1d800087345. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:36:54 np0005466030 nova_compute[230518]: 2025-10-02 12:36:54.876 2 DEBUG nova.network.neutron [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating instance_info_cache with network_info: [{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:36:54 np0005466030 podman[267633]: 2025-10-02 12:36:54.896327186 +0000 UTC m=+0.131747656 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:36:54 np0005466030 nova_compute[230518]: 2025-10-02 12:36:54.909 2 DEBUG oslo_concurrency.lockutils [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:36:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:54.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:54.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:55 np0005466030 nova_compute[230518]: 2025-10-02 12:36:55.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:56 np0005466030 nova_compute[230518]: 2025-10-02 12:36:56.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:56 np0005466030 nova_compute[230518]: 2025-10-02 12:36:56.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:56 np0005466030 nova_compute[230518]: 2025-10-02 12:36:56.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:56.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:36:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:56.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:36:57 np0005466030 nova_compute[230518]: 2025-10-02 12:36:57.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:36:57 np0005466030 nova_compute[230518]: 2025-10-02 12:36:57.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:36:57 np0005466030 nova_compute[230518]: 2025-10-02 12:36:57.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e250 e250: 3 total, 3 up, 3 in
Oct  2 08:36:58 np0005466030 nova_compute[230518]: 2025-10-02 12:36:58.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:36:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:58.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:36:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:36:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:58.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:36:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:00.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:00.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:01 np0005466030 nova_compute[230518]: 2025-10-02 12:37:01.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:01 np0005466030 nova_compute[230518]: 2025-10-02 12:37:01.310 2 DEBUG nova.compute.manager [req-5d5742e9-ff70-4d42-b023-c01df24df7d7 req-29402bf4-70d9-4a43-adcb-345a5a9a56a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:01 np0005466030 nova_compute[230518]: 2025-10-02 12:37:01.311 2 DEBUG oslo_concurrency.lockutils [req-5d5742e9-ff70-4d42-b023-c01df24df7d7 req-29402bf4-70d9-4a43-adcb-345a5a9a56a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:01 np0005466030 nova_compute[230518]: 2025-10-02 12:37:01.311 2 DEBUG oslo_concurrency.lockutils [req-5d5742e9-ff70-4d42-b023-c01df24df7d7 req-29402bf4-70d9-4a43-adcb-345a5a9a56a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:01 np0005466030 nova_compute[230518]: 2025-10-02 12:37:01.311 2 DEBUG oslo_concurrency.lockutils [req-5d5742e9-ff70-4d42-b023-c01df24df7d7 req-29402bf4-70d9-4a43-adcb-345a5a9a56a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:01 np0005466030 nova_compute[230518]: 2025-10-02 12:37:01.312 2 DEBUG nova.compute.manager [req-5d5742e9-ff70-4d42-b023-c01df24df7d7 req-29402bf4-70d9-4a43-adcb-345a5a9a56a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:01 np0005466030 nova_compute[230518]: 2025-10-02 12:37:01.312 2 WARNING nova.compute.manager [req-5d5742e9-ff70-4d42-b023-c01df24df7d7 req-29402bf4-70d9-4a43-adcb-345a5a9a56a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state resized and task_state resize_reverting.#033[00m
Oct  2 08:37:02 np0005466030 nova_compute[230518]: 2025-10-02 12:37:02.145 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408607.1438193, a7ee799a-27f6-41a6-86dc-694c480fc3a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:37:02 np0005466030 nova_compute[230518]: 2025-10-02 12:37:02.146 2 INFO nova.compute.manager [-] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:37:02 np0005466030 nova_compute[230518]: 2025-10-02 12:37:02.168 2 DEBUG nova.compute.manager [None req-3ed406af-64b2-4085-a977-134358c323ed - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:02 np0005466030 nova_compute[230518]: 2025-10-02 12:37:02.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:02.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:02.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:03 np0005466030 nova_compute[230518]: 2025-10-02 12:37:03.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:04 np0005466030 nova_compute[230518]: 2025-10-02 12:37:04.770 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquiring lock "1319f89a-ec57-41aa-b53e-07f2280a0d87" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:04 np0005466030 nova_compute[230518]: 2025-10-02 12:37:04.770 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:04 np0005466030 nova_compute[230518]: 2025-10-02 12:37:04.829 2 DEBUG nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:37:04 np0005466030 nova_compute[230518]: 2025-10-02 12:37:04.933 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:04 np0005466030 nova_compute[230518]: 2025-10-02 12:37:04.934 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:04.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:04 np0005466030 nova_compute[230518]: 2025-10-02 12:37:04.946 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:37:04 np0005466030 nova_compute[230518]: 2025-10-02 12:37:04.947 2 INFO nova.compute.claims [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:37:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:04.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:05 np0005466030 nova_compute[230518]: 2025-10-02 12:37:05.186 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:37:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/309699728' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:37:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:37:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/309699728' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:37:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:37:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4173785915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:37:05 np0005466030 nova_compute[230518]: 2025-10-02 12:37:05.670 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:05 np0005466030 nova_compute[230518]: 2025-10-02 12:37:05.678 2 DEBUG nova.compute.provider_tree [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:37:05 np0005466030 nova_compute[230518]: 2025-10-02 12:37:05.788 2 DEBUG nova.scheduler.client.report [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:37:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e251 e251: 3 total, 3 up, 3 in
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.008 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.009 2 DEBUG nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.114 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.114 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.145 2 DEBUG nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.146 2 DEBUG nova.network.neutron [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.197 2 INFO nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.244 2 DEBUG nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.394 2 DEBUG nova.policy [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1215de74baa4b7f8522ec44b7a4630b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '15fd01e26e294206846c155a766b0ad2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.416 2 DEBUG nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.417 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.418 2 INFO nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Creating image(s)#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.450 2 DEBUG nova.storage.rbd_utils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] rbd image 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.484 2 DEBUG nova.storage.rbd_utils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] rbd image 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.514 2 DEBUG nova.storage.rbd_utils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] rbd image 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.518 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.616 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.617 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.617 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.618 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.648 2 DEBUG nova.storage.rbd_utils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] rbd image 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.653 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:06 np0005466030 podman[267789]: 2025-10-02 12:37:06.809802581 +0000 UTC m=+0.059023768 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:37:06 np0005466030 podman[267790]: 2025-10-02 12:37:06.828554401 +0000 UTC m=+0.079634937 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:37:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:37:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:06.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:37:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:37:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:06.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:37:06 np0005466030 nova_compute[230518]: 2025-10-02 12:37:06.984 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:07 np0005466030 nova_compute[230518]: 2025-10-02 12:37:07.076 2 DEBUG nova.storage.rbd_utils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] resizing rbd image 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:37:07 np0005466030 nova_compute[230518]: 2025-10-02 12:37:07.212 2 DEBUG nova.objects.instance [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lazy-loading 'migration_context' on Instance uuid 1319f89a-ec57-41aa-b53e-07f2280a0d87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:07 np0005466030 nova_compute[230518]: 2025-10-02 12:37:07.235 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:37:07 np0005466030 nova_compute[230518]: 2025-10-02 12:37:07.235 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Ensure instance console log exists: /var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:37:07 np0005466030 nova_compute[230518]: 2025-10-02 12:37:07.236 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:07 np0005466030 nova_compute[230518]: 2025-10-02 12:37:07.237 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:07 np0005466030 nova_compute[230518]: 2025-10-02 12:37:07.237 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:07 np0005466030 nova_compute[230518]: 2025-10-02 12:37:07.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:07 np0005466030 nova_compute[230518]: 2025-10-02 12:37:07.326 2 DEBUG nova.network.neutron [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Successfully created port: 8de4019e-8174-4b43-9510-73318fd6ff8d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:37:08 np0005466030 nova_compute[230518]: 2025-10-02 12:37:08.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:08 np0005466030 nova_compute[230518]: 2025-10-02 12:37:08.284 2 DEBUG nova.network.neutron [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Successfully updated port: 8de4019e-8174-4b43-9510-73318fd6ff8d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:37:08 np0005466030 nova_compute[230518]: 2025-10-02 12:37:08.317 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquiring lock "refresh_cache-1319f89a-ec57-41aa-b53e-07f2280a0d87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:37:08 np0005466030 nova_compute[230518]: 2025-10-02 12:37:08.318 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquired lock "refresh_cache-1319f89a-ec57-41aa-b53e-07f2280a0d87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:37:08 np0005466030 nova_compute[230518]: 2025-10-02 12:37:08.318 2 DEBUG nova.network.neutron [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:37:08 np0005466030 nova_compute[230518]: 2025-10-02 12:37:08.426 2 DEBUG nova.compute.manager [req-9aef5093-3f70-4575-ab3c-ca957429ca6a req-d1f5ff70-cf25-4a68-b245-71519a8492b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Received event network-changed-8de4019e-8174-4b43-9510-73318fd6ff8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:08 np0005466030 nova_compute[230518]: 2025-10-02 12:37:08.426 2 DEBUG nova.compute.manager [req-9aef5093-3f70-4575-ab3c-ca957429ca6a req-d1f5ff70-cf25-4a68-b245-71519a8492b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Refreshing instance network info cache due to event network-changed-8de4019e-8174-4b43-9510-73318fd6ff8d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:37:08 np0005466030 nova_compute[230518]: 2025-10-02 12:37:08.426 2 DEBUG oslo_concurrency.lockutils [req-9aef5093-3f70-4575-ab3c-ca957429ca6a req-d1f5ff70-cf25-4a68-b245-71519a8492b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-1319f89a-ec57-41aa-b53e-07f2280a0d87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:37:08 np0005466030 nova_compute[230518]: 2025-10-02 12:37:08.857 2 DEBUG nova.network.neutron [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:37:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:08.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:08.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.611 2 DEBUG nova.network.neutron [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Updating instance_info_cache with network_info: [{"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.631 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Releasing lock "refresh_cache-1319f89a-ec57-41aa-b53e-07f2280a0d87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.631 2 DEBUG nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Instance network_info: |[{"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.632 2 DEBUG oslo_concurrency.lockutils [req-9aef5093-3f70-4575-ab3c-ca957429ca6a req-d1f5ff70-cf25-4a68-b245-71519a8492b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-1319f89a-ec57-41aa-b53e-07f2280a0d87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.632 2 DEBUG nova.network.neutron [req-9aef5093-3f70-4575-ab3c-ca957429ca6a req-d1f5ff70-cf25-4a68-b245-71519a8492b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Refreshing network info cache for port 8de4019e-8174-4b43-9510-73318fd6ff8d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.634 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Start _get_guest_xml network_info=[{"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.638 2 WARNING nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.644 2 DEBUG nova.virt.libvirt.host [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.645 2 DEBUG nova.virt.libvirt.host [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.657 2 DEBUG nova.virt.libvirt.host [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.661 2 DEBUG nova.virt.libvirt.host [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.666 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.666 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.667 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.668 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.668 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.668 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.669 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.669 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.669 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.669 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.670 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.670 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:37:10 np0005466030 nova_compute[230518]: 2025-10-02 12:37:10.676 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:37:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:10.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:37:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:10.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:37:11 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1031192021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.188 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.236 2 DEBUG nova.storage.rbd_utils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] rbd image 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.243 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:37:11 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/850267506' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.749 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.753 2 DEBUG nova.virt.libvirt.vif [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1958706210',display_name='tempest-ServerMetadataNegativeTestJSON-server-1958706210',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1958706210',id=97,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='15fd01e26e294206846c155a766b0ad2',ramdisk_id='',reservation_id='r-aqm70j3y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-354053647',owner_user_name='tempest-ServerMetadataNegativeTestJSON-354053647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:06Z,user_data=None,user_id='f1215de74baa4b7f8522ec44b7a4630b',uuid=1319f89a-ec57-41aa-b53e-07f2280a0d87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.754 2 DEBUG nova.network.os_vif_util [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Converting VIF {"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.756 2 DEBUG nova.network.os_vif_util [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:9e,bridge_name='br-int',has_traffic_filtering=True,id=8de4019e-8174-4b43-9510-73318fd6ff8d,network=Network(40e5ac91-4365-415e-86f3-b8d99d311f47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de4019e-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.758 2 DEBUG nova.objects.instance [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1319f89a-ec57-41aa-b53e-07f2280a0d87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.801 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:37:11 np0005466030 nova_compute[230518]:  <uuid>1319f89a-ec57-41aa-b53e-07f2280a0d87</uuid>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:  <name>instance-00000061</name>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerMetadataNegativeTestJSON-server-1958706210</nova:name>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:37:10</nova:creationTime>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:37:11 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:        <nova:user uuid="f1215de74baa4b7f8522ec44b7a4630b">tempest-ServerMetadataNegativeTestJSON-354053647-project-member</nova:user>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:        <nova:project uuid="15fd01e26e294206846c155a766b0ad2">tempest-ServerMetadataNegativeTestJSON-354053647</nova:project>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:        <nova:port uuid="8de4019e-8174-4b43-9510-73318fd6ff8d">
Oct  2 08:37:11 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <entry name="serial">1319f89a-ec57-41aa-b53e-07f2280a0d87</entry>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <entry name="uuid">1319f89a-ec57-41aa-b53e-07f2280a0d87</entry>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/1319f89a-ec57-41aa-b53e-07f2280a0d87_disk">
Oct  2 08:37:11 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:37:11 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/1319f89a-ec57-41aa-b53e-07f2280a0d87_disk.config">
Oct  2 08:37:11 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:37:11 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:ee:67:9e"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <target dev="tap8de4019e-81"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87/console.log" append="off"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:37:11 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:37:11 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:37:11 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:37:11 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.803 2 DEBUG nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Preparing to wait for external event network-vif-plugged-8de4019e-8174-4b43-9510-73318fd6ff8d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.804 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquiring lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.804 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.804 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.805 2 DEBUG nova.virt.libvirt.vif [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1958706210',display_name='tempest-ServerMetadataNegativeTestJSON-server-1958706210',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1958706210',id=97,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='15fd01e26e294206846c155a766b0ad2',ramdisk_id='',reservation_id='r-aqm70j3y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-354053647',owner_user_name='tempest-ServerMetadataNegativeTestJSON-354053647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:06Z,user_data=None,user_id='f1215de74baa4b7f8522ec44b7a4630b',uuid=1319f89a-ec57-41aa-b53e-07f2280a0d87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.806 2 DEBUG nova.network.os_vif_util [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Converting VIF {"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.807 2 DEBUG nova.network.os_vif_util [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:9e,bridge_name='br-int',has_traffic_filtering=True,id=8de4019e-8174-4b43-9510-73318fd6ff8d,network=Network(40e5ac91-4365-415e-86f3-b8d99d311f47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de4019e-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.807 2 DEBUG os_vif [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:9e,bridge_name='br-int',has_traffic_filtering=True,id=8de4019e-8174-4b43-9510-73318fd6ff8d,network=Network(40e5ac91-4365-415e-86f3-b8d99d311f47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de4019e-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.809 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.809 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.813 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8de4019e-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.814 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8de4019e-81, col_values=(('external_ids', {'iface-id': '8de4019e-8174-4b43-9510-73318fd6ff8d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:67:9e', 'vm-uuid': '1319f89a-ec57-41aa-b53e-07f2280a0d87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:11 np0005466030 NetworkManager[44960]: <info>  [1759408631.8176] manager: (tap8de4019e-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.825 2 INFO os_vif [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:9e,bridge_name='br-int',has_traffic_filtering=True,id=8de4019e-8174-4b43-9510-73318fd6ff8d,network=Network(40e5ac91-4365-415e-86f3-b8d99d311f47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de4019e-81')#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.885 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.885 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.886 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] No VIF found with MAC fa:16:3e:ee:67:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.886 2 INFO nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Using config drive#033[00m
Oct  2 08:37:11 np0005466030 nova_compute[230518]: 2025-10-02 12:37:11.922 2 DEBUG nova.storage.rbd_utils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] rbd image 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:12 np0005466030 nova_compute[230518]: 2025-10-02 12:37:12.600 2 DEBUG nova.network.neutron [req-9aef5093-3f70-4575-ab3c-ca957429ca6a req-d1f5ff70-cf25-4a68-b245-71519a8492b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Updated VIF entry in instance network info cache for port 8de4019e-8174-4b43-9510-73318fd6ff8d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:37:12 np0005466030 nova_compute[230518]: 2025-10-02 12:37:12.601 2 DEBUG nova.network.neutron [req-9aef5093-3f70-4575-ab3c-ca957429ca6a req-d1f5ff70-cf25-4a68-b245-71519a8492b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Updating instance_info_cache with network_info: [{"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:37:12 np0005466030 nova_compute[230518]: 2025-10-02 12:37:12.619 2 DEBUG oslo_concurrency.lockutils [req-9aef5093-3f70-4575-ab3c-ca957429ca6a req-d1f5ff70-cf25-4a68-b245-71519a8492b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-1319f89a-ec57-41aa-b53e-07f2280a0d87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:37:12 np0005466030 nova_compute[230518]: 2025-10-02 12:37:12.815 2 INFO nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Creating config drive at /var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87/disk.config#033[00m
Oct  2 08:37:12 np0005466030 nova_compute[230518]: 2025-10-02 12:37:12.824 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgczgmo1s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:12.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:12.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:12 np0005466030 nova_compute[230518]: 2025-10-02 12:37:12.978 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgczgmo1s" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:13 np0005466030 nova_compute[230518]: 2025-10-02 12:37:13.013 2 DEBUG nova.storage.rbd_utils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] rbd image 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:13 np0005466030 nova_compute[230518]: 2025-10-02 12:37:13.017 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87/disk.config 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:13 np0005466030 nova_compute[230518]: 2025-10-02 12:37:13.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:13 np0005466030 nova_compute[230518]: 2025-10-02 12:37:13.218 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87/disk.config 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:13 np0005466030 nova_compute[230518]: 2025-10-02 12:37:13.219 2 INFO nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Deleting local config drive /var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87/disk.config because it was imported into RBD.#033[00m
Oct  2 08:37:13 np0005466030 kernel: tap8de4019e-81: entered promiscuous mode
Oct  2 08:37:13 np0005466030 NetworkManager[44960]: <info>  [1759408633.2761] manager: (tap8de4019e-81): new Tun device (/org/freedesktop/NetworkManager/Devices/194)
Oct  2 08:37:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:37:13Z|00410|binding|INFO|Claiming lport 8de4019e-8174-4b43-9510-73318fd6ff8d for this chassis.
Oct  2 08:37:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:37:13Z|00411|binding|INFO|8de4019e-8174-4b43-9510-73318fd6ff8d: Claiming fa:16:3e:ee:67:9e 10.100.0.8
Oct  2 08:37:13 np0005466030 nova_compute[230518]: 2025-10-02 12:37:13.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:13 np0005466030 nova_compute[230518]: 2025-10-02 12:37:13.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.290 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:67:9e 10.100.0.8'], port_security=['fa:16:3e:ee:67:9e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1319f89a-ec57-41aa-b53e-07f2280a0d87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40e5ac91-4365-415e-86f3-b8d99d311f47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '15fd01e26e294206846c155a766b0ad2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '010fb747-de4b-49ec-8a2a-286d666ddcca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5628d560-7d90-4922-8442-872cb81c1d7b, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8de4019e-8174-4b43-9510-73318fd6ff8d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.291 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8de4019e-8174-4b43-9510-73318fd6ff8d in datapath 40e5ac91-4365-415e-86f3-b8d99d311f47 bound to our chassis#033[00m
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.293 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40e5ac91-4365-415e-86f3-b8d99d311f47#033[00m
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.312 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[00a8bbb2-c10b-433d-af55-7197773b2095]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.313 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap40e5ac91-41 in ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.316 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap40e5ac91-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.316 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[81c045e8-9e93-4199-9d80-bfff73e92fc3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.317 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e2364200-1cc7-4506-8fb9-da1c40230267]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:13 np0005466030 systemd-udevd[268038]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:37:13 np0005466030 systemd-machined[188247]: New machine qemu-48-instance-00000061.
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.338 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[91e5e837-446f-4189-86a7-275eb627da30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:13 np0005466030 NetworkManager[44960]: <info>  [1759408633.3414] device (tap8de4019e-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:37:13 np0005466030 NetworkManager[44960]: <info>  [1759408633.3427] device (tap8de4019e-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:37:13 np0005466030 nova_compute[230518]: 2025-10-02 12:37:13.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:13 np0005466030 systemd[1]: Started Virtual Machine qemu-48-instance-00000061.
Oct  2 08:37:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:37:13Z|00412|binding|INFO|Setting lport 8de4019e-8174-4b43-9510-73318fd6ff8d ovn-installed in OVS
Oct  2 08:37:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:37:13Z|00413|binding|INFO|Setting lport 8de4019e-8174-4b43-9510-73318fd6ff8d up in Southbound
Oct  2 08:37:13 np0005466030 nova_compute[230518]: 2025-10-02 12:37:13.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.378 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b0929faf-12c4-4441-b114-b05b765f2edb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.417 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0481578c-ad2d-4000-8e89-198ee98b7daf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:13 np0005466030 systemd-udevd[268042]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:37:13 np0005466030 NetworkManager[44960]: <info>  [1759408633.4246] manager: (tap40e5ac91-40): new Veth device (/org/freedesktop/NetworkManager/Devices/195)
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.423 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e7336c1e-d22d-4e86-82a7-3809610aea31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.464 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[3eadde05-5878-40c7-9993-8480084f627b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.468 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8c28e5d5-b76a-4276-ad59-1e5e366552df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:13 np0005466030 NetworkManager[44960]: <info>  [1759408633.5058] device (tap40e5ac91-40): carrier: link connected
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.516 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6f872d00-b099-4fd4-bd68-5cb58f85dc1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.541 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[67ae7c05-5442-4867-af35-147b302fa309]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40e5ac91-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:36:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649706, 'reachable_time': 37352, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268071, 'error': None, 'target': 'ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.557 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcad257-fdcc-4c68-b453-e35763d00141]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed7:3672'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 649706, 'tstamp': 649706}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268086, 'error': None, 'target': 'ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.586 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c285701a-864f-4dbc-a63c-9bc094f229fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40e5ac91-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:36:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649706, 'reachable_time': 37352, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268089, 'error': None, 'target': 'ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.636 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1dae05e0-c3e6-4347-a33d-f6727a5d9183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.712 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c32eba40-ff81-4408-8099-a4292f16d2c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.713 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40e5ac91-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.714 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.714 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40e5ac91-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:37:13 np0005466030 nova_compute[230518]: 2025-10-02 12:37:13.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:13 np0005466030 NetworkManager[44960]: <info>  [1759408633.7180] manager: (tap40e5ac91-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/196)
Oct  2 08:37:13 np0005466030 kernel: tap40e5ac91-40: entered promiscuous mode
Oct  2 08:37:13 np0005466030 nova_compute[230518]: 2025-10-02 12:37:13.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.722 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40e5ac91-40, col_values=(('external_ids', {'iface-id': 'eb0bb96d-21f2-4509-98d2-52f4693069e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:37:13 np0005466030 nova_compute[230518]: 2025-10-02 12:37:13.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:37:13Z|00414|binding|INFO|Releasing lport eb0bb96d-21f2-4509-98d2-52f4693069e8 from this chassis (sb_readonly=0)
Oct  2 08:37:13 np0005466030 nova_compute[230518]: 2025-10-02 12:37:13.739 2 DEBUG nova.compute.manager [req-68b6f114-99f9-43db-a7ef-72f3d35df4a8 req-44e44ae6-f7ea-44a4-b037-cea81109567a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Received event network-vif-plugged-8de4019e-8174-4b43-9510-73318fd6ff8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:37:13 np0005466030 nova_compute[230518]: 2025-10-02 12:37:13.739 2 DEBUG oslo_concurrency.lockutils [req-68b6f114-99f9-43db-a7ef-72f3d35df4a8 req-44e44ae6-f7ea-44a4-b037-cea81109567a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:37:13 np0005466030 nova_compute[230518]: 2025-10-02 12:37:13.740 2 DEBUG oslo_concurrency.lockutils [req-68b6f114-99f9-43db-a7ef-72f3d35df4a8 req-44e44ae6-f7ea-44a4-b037-cea81109567a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:37:13 np0005466030 nova_compute[230518]: 2025-10-02 12:37:13.740 2 DEBUG oslo_concurrency.lockutils [req-68b6f114-99f9-43db-a7ef-72f3d35df4a8 req-44e44ae6-f7ea-44a4-b037-cea81109567a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:37:13 np0005466030 nova_compute[230518]: 2025-10-02 12:37:13.741 2 DEBUG nova.compute.manager [req-68b6f114-99f9-43db-a7ef-72f3d35df4a8 req-44e44ae6-f7ea-44a4-b037-cea81109567a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Processing event network-vif-plugged-8de4019e-8174-4b43-9510-73318fd6ff8d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 08:37:13 np0005466030 nova_compute[230518]: 2025-10-02 12:37:13.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.755 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/40e5ac91-4365-415e-86f3-b8d99d311f47.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/40e5ac91-4365-415e-86f3-b8d99d311f47.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.756 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c44ffc63-c38e-420b-aad7-f1735979a7e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.757 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-40e5ac91-4365-415e-86f3-b8d99d311f47
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/40e5ac91-4365-415e-86f3-b8d99d311f47.pid.haproxy
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 40e5ac91-4365-415e-86f3-b8d99d311f47
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  2 08:37:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.757 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47', 'env', 'PROCESS_TAG=haproxy-40e5ac91-4365-415e-86f3-b8d99d311f47', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/40e5ac91-4365-415e-86f3-b8d99d311f47.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  2 08:37:14 np0005466030 podman[268147]: 2025-10-02 12:37:14.196331867 +0000 UTC m=+0.057681826 container create a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.199 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408634.1990738, 1319f89a-ec57-41aa-b53e-07f2280a0d87 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.200 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] VM Started (Lifecycle Event)
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.202 2 DEBUG nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.211 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.214 2 INFO nova.virt.libvirt.driver [-] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Instance spawned successfully.
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.215 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.241 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:37:14 np0005466030 systemd[1]: Started libpod-conmon-a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17.scope.
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.248 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.255 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.255 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.256 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.257 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.258 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.258 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:37:14 np0005466030 podman[268147]: 2025-10-02 12:37:14.167305424 +0000 UTC m=+0.028655413 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:37:14 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.283 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.284 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408634.1993606, 1319f89a-ec57-41aa-b53e-07f2280a0d87 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.284 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] VM Paused (Lifecycle Event)
Oct  2 08:37:14 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ae0f4a13ce66dd70ea553c8c5cf8b3214d2cc0beca828a9fd706491dc16fdc8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:37:14 np0005466030 podman[268147]: 2025-10-02 12:37:14.299839704 +0000 UTC m=+0.161189683 container init a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:37:14 np0005466030 podman[268147]: 2025-10-02 12:37:14.305292265 +0000 UTC m=+0.166642224 container start a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.309 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.318 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408634.2095737, 1319f89a-ec57-41aa-b53e-07f2280a0d87 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.318 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] VM Resumed (Lifecycle Event)
Oct  2 08:37:14 np0005466030 neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47[268163]: [NOTICE]   (268167) : New worker (268169) forked
Oct  2 08:37:14 np0005466030 neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47[268163]: [NOTICE]   (268167) : Loading success.
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.341 2 INFO nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Took 7.93 seconds to spawn the instance on the hypervisor.
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.342 2 DEBUG nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.344 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.349 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.383 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.426 2 INFO nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Took 9.53 seconds to build instance.
Oct  2 08:37:14 np0005466030 nova_compute[230518]: 2025-10-02 12:37:14.447 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:37:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:37:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:14.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:37:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:14.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:15 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct  2 08:37:15 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct  2 08:37:15 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct  2 08:37:15 np0005466030 nova_compute[230518]: 2025-10-02 12:37:15.908 2 DEBUG nova.compute.manager [req-c568c464-8030-4ac3-8606-9f2af8957d25 req-afa90196-68b0-4250-a34c-0b9d79410b51 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Received event network-vif-plugged-8de4019e-8174-4b43-9510-73318fd6ff8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:37:15 np0005466030 nova_compute[230518]: 2025-10-02 12:37:15.909 2 DEBUG oslo_concurrency.lockutils [req-c568c464-8030-4ac3-8606-9f2af8957d25 req-afa90196-68b0-4250-a34c-0b9d79410b51 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:37:15 np0005466030 nova_compute[230518]: 2025-10-02 12:37:15.909 2 DEBUG oslo_concurrency.lockutils [req-c568c464-8030-4ac3-8606-9f2af8957d25 req-afa90196-68b0-4250-a34c-0b9d79410b51 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:37:15 np0005466030 nova_compute[230518]: 2025-10-02 12:37:15.909 2 DEBUG oslo_concurrency.lockutils [req-c568c464-8030-4ac3-8606-9f2af8957d25 req-afa90196-68b0-4250-a34c-0b9d79410b51 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:37:15 np0005466030 nova_compute[230518]: 2025-10-02 12:37:15.910 2 DEBUG nova.compute.manager [req-c568c464-8030-4ac3-8606-9f2af8957d25 req-afa90196-68b0-4250-a34c-0b9d79410b51 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] No waiting events found dispatching network-vif-plugged-8de4019e-8174-4b43-9510-73318fd6ff8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:37:15 np0005466030 nova_compute[230518]: 2025-10-02 12:37:15.910 2 WARNING nova.compute.manager [req-c568c464-8030-4ac3-8606-9f2af8957d25 req-afa90196-68b0-4250-a34c-0b9d79410b51 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Received unexpected event network-vif-plugged-8de4019e-8174-4b43-9510-73318fd6ff8d for instance with vm_state active and task_state None.
Oct  2 08:37:15 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #82. Immutable memtables: 0.
Oct  2 08:37:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:15.956603) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:37:15 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 82
Oct  2 08:37:15 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408635956668, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1498, "num_deletes": 261, "total_data_size": 3132025, "memory_usage": 3165520, "flush_reason": "Manual Compaction"}
Oct  2 08:37:15 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #83: started
Oct  2 08:37:15 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408635978607, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 83, "file_size": 2055198, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42413, "largest_seqno": 43906, "table_properties": {"data_size": 2048960, "index_size": 3377, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14033, "raw_average_key_size": 20, "raw_value_size": 2036037, "raw_average_value_size": 2904, "num_data_blocks": 149, "num_entries": 701, "num_filter_entries": 701, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408521, "oldest_key_time": 1759408521, "file_creation_time": 1759408635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:37:15 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 22046 microseconds, and 5281 cpu microseconds.
Oct  2 08:37:15 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:37:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:15.978655) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #83: 2055198 bytes OK
Oct  2 08:37:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:15.978677) [db/memtable_list.cc:519] [default] Level-0 commit table #83 started
Oct  2 08:37:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:15.984559) [db/memtable_list.cc:722] [default] Level-0 commit table #83: memtable #1 done
Oct  2 08:37:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:15.984592) EVENT_LOG_v1 {"time_micros": 1759408635984584, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:37:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:15.984613) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:37:15 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 3124938, prev total WAL file size 3124938, number of live WAL files 2.
Oct  2 08:37:15 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000079.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:37:15 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:15.985690) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323535' seq:72057594037927935, type:22 .. '6C6F676D0031353130' seq:0, type:0; will stop at (end)
Oct  2 08:37:15 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:37:15 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [83(2007KB)], [81(9043KB)]
Oct  2 08:37:15 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408635985726, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [83], "files_L6": [81], "score": -1, "input_data_size": 11316232, "oldest_snapshot_seqno": -1}
Oct  2 08:37:16 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #84: 6819 keys, 11169866 bytes, temperature: kUnknown
Oct  2 08:37:16 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408636075888, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 84, "file_size": 11169866, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11123434, "index_size": 28272, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 175488, "raw_average_key_size": 25, "raw_value_size": 11000659, "raw_average_value_size": 1613, "num_data_blocks": 1128, "num_entries": 6819, "num_filter_entries": 6819, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759408635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 84, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:37:16 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:37:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:16.076365) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 11169866 bytes
Oct  2 08:37:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:16.078034) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.3 rd, 123.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 8.8 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(10.9) write-amplify(5.4) OK, records in: 7358, records dropped: 539 output_compression: NoCompression
Oct  2 08:37:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:16.078059) EVENT_LOG_v1 {"time_micros": 1759408636078047, "job": 50, "event": "compaction_finished", "compaction_time_micros": 90316, "compaction_time_cpu_micros": 32015, "output_level": 6, "num_output_files": 1, "total_output_size": 11169866, "num_input_records": 7358, "num_output_records": 6819, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:37:16 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:37:16 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408636078634, "job": 50, "event": "table_file_deletion", "file_number": 83}
Oct  2 08:37:16 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000081.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:37:16 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408636080574, "job": 50, "event": "table_file_deletion", "file_number": 81}
Oct  2 08:37:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:15.985623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:16.080631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:16.080635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:16.080637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:16.080638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:16.080639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:16 np0005466030 nova_compute[230518]: 2025-10-02 12:37:16.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:16.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:16.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:18 np0005466030 nova_compute[230518]: 2025-10-02 12:37:18.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:18.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:18.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.224 2 DEBUG oslo_concurrency.lockutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquiring lock "1319f89a-ec57-41aa-b53e-07f2280a0d87" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.224 2 DEBUG oslo_concurrency.lockutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.225 2 DEBUG oslo_concurrency.lockutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquiring lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.225 2 DEBUG oslo_concurrency.lockutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.226 2 DEBUG oslo_concurrency.lockutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.228 2 INFO nova.compute.manager [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Terminating instance#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.230 2 DEBUG nova.compute.manager [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:37:19 np0005466030 kernel: tap8de4019e-81 (unregistering): left promiscuous mode
Oct  2 08:37:19 np0005466030 NetworkManager[44960]: <info>  [1759408639.2791] device (tap8de4019e-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:37:19 np0005466030 ovn_controller[129257]: 2025-10-02T12:37:19Z|00415|binding|INFO|Releasing lport 8de4019e-8174-4b43-9510-73318fd6ff8d from this chassis (sb_readonly=0)
Oct  2 08:37:19 np0005466030 ovn_controller[129257]: 2025-10-02T12:37:19Z|00416|binding|INFO|Setting lport 8de4019e-8174-4b43-9510-73318fd6ff8d down in Southbound
Oct  2 08:37:19 np0005466030 ovn_controller[129257]: 2025-10-02T12:37:19Z|00417|binding|INFO|Removing iface tap8de4019e-81 ovn-installed in OVS
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.303 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:67:9e 10.100.0.8'], port_security=['fa:16:3e:ee:67:9e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1319f89a-ec57-41aa-b53e-07f2280a0d87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40e5ac91-4365-415e-86f3-b8d99d311f47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '15fd01e26e294206846c155a766b0ad2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '010fb747-de4b-49ec-8a2a-286d666ddcca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5628d560-7d90-4922-8442-872cb81c1d7b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8de4019e-8174-4b43-9510-73318fd6ff8d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:37:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.305 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8de4019e-8174-4b43-9510-73318fd6ff8d in datapath 40e5ac91-4365-415e-86f3-b8d99d311f47 unbound from our chassis#033[00m
Oct  2 08:37:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.307 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 40e5ac91-4365-415e-86f3-b8d99d311f47, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:37:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.308 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3dac38d8-4f56-41cb-80df-f290183c7b08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.308 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47 namespace which is not needed anymore#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:19 np0005466030 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000061.scope: Deactivated successfully.
Oct  2 08:37:19 np0005466030 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000061.scope: Consumed 5.864s CPU time.
Oct  2 08:37:19 np0005466030 systemd-machined[188247]: Machine qemu-48-instance-00000061 terminated.
Oct  2 08:37:19 np0005466030 neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47[268163]: [NOTICE]   (268167) : haproxy version is 2.8.14-c23fe91
Oct  2 08:37:19 np0005466030 neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47[268163]: [NOTICE]   (268167) : path to executable is /usr/sbin/haproxy
Oct  2 08:37:19 np0005466030 neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47[268163]: [WARNING]  (268167) : Exiting Master process...
Oct  2 08:37:19 np0005466030 neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47[268163]: [WARNING]  (268167) : Exiting Master process...
Oct  2 08:37:19 np0005466030 neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47[268163]: [ALERT]    (268167) : Current worker (268169) exited with code 143 (Terminated)
Oct  2 08:37:19 np0005466030 neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47[268163]: [WARNING]  (268167) : All workers exited. Exiting... (0)
Oct  2 08:37:19 np0005466030 systemd[1]: libpod-a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17.scope: Deactivated successfully.
Oct  2 08:37:19 np0005466030 podman[268202]: 2025-10-02 12:37:19.452898708 +0000 UTC m=+0.049296912 container died a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.472 2 INFO nova.virt.libvirt.driver [-] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Instance destroyed successfully.#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.472 2 DEBUG nova.objects.instance [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lazy-loading 'resources' on Instance uuid 1319f89a-ec57-41aa-b53e-07f2280a0d87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:19 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17-userdata-shm.mount: Deactivated successfully.
Oct  2 08:37:19 np0005466030 systemd[1]: var-lib-containers-storage-overlay-2ae0f4a13ce66dd70ea553c8c5cf8b3214d2cc0beca828a9fd706491dc16fdc8-merged.mount: Deactivated successfully.
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.489 2 DEBUG nova.virt.libvirt.vif [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:37:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1958706210',display_name='tempest-ServerMetadataNegativeTestJSON-server-1958706210',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1958706210',id=97,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:37:14Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='15fd01e26e294206846c155a766b0ad2',ramdisk_id='',reservation_id='r-aqm70j3y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-354053647',owner_user_name='tempest-ServerMetadataNegativeTestJSON-354053647-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:37:14Z,user_data=None,user_id='f1215de74baa4b7f8522ec44b7a4630b',uuid=1319f89a-ec57-41aa-b53e-07f2280a0d87,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.490 2 DEBUG nova.network.os_vif_util [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Converting VIF {"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.490 2 DEBUG nova.network.os_vif_util [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:9e,bridge_name='br-int',has_traffic_filtering=True,id=8de4019e-8174-4b43-9510-73318fd6ff8d,network=Network(40e5ac91-4365-415e-86f3-b8d99d311f47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de4019e-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.491 2 DEBUG os_vif [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:9e,bridge_name='br-int',has_traffic_filtering=True,id=8de4019e-8174-4b43-9510-73318fd6ff8d,network=Network(40e5ac91-4365-415e-86f3-b8d99d311f47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de4019e-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.493 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8de4019e-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.500 2 INFO os_vif [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:9e,bridge_name='br-int',has_traffic_filtering=True,id=8de4019e-8174-4b43-9510-73318fd6ff8d,network=Network(40e5ac91-4365-415e-86f3-b8d99d311f47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de4019e-81')#033[00m
Oct  2 08:37:19 np0005466030 podman[268202]: 2025-10-02 12:37:19.501319631 +0000 UTC m=+0.097717835 container cleanup a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:37:19 np0005466030 systemd[1]: libpod-conmon-a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17.scope: Deactivated successfully.
Oct  2 08:37:19 np0005466030 podman[268254]: 2025-10-02 12:37:19.59281065 +0000 UTC m=+0.056256991 container remove a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:37:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.601 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f1032b42-5452-481f-bb7d-8f4c2495ff97]: (4, ('Thu Oct  2 12:37:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47 (a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17)\na7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17\nThu Oct  2 12:37:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47 (a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17)\na7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.603 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4a36f5cc-a241-4f24-b39a-bdb356e3afad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.604 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40e5ac91-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:19 np0005466030 kernel: tap40e5ac91-40: left promiscuous mode
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.669 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3e945d78-3cd6-42a2-8d7b-6f12e6f779bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.703 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[186676ad-ba2a-4e6a-ab5d-2351a3747bfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.705 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[aad433e3-d534-438c-b465-db6012d81cb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.733 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c75f9189-36f5-49ff-85ec-dbef93021552]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649696, 'reachable_time': 25059, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268277, 'error': None, 'target': 'ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.735 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:37:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.736 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[030540ce-3af1-4ac8-85b5-d06292d135cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:19 np0005466030 systemd[1]: run-netns-ovnmeta\x2d40e5ac91\x2d4365\x2d415e\x2d86f3\x2db8d99d311f47.mount: Deactivated successfully.
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.980 2 DEBUG nova.compute.manager [req-cf61bc61-c007-4f26-88e6-b5958c17a202 req-b9030aa2-46a0-4f51-9753-c769b6ad060e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Received event network-vif-unplugged-8de4019e-8174-4b43-9510-73318fd6ff8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.981 2 DEBUG oslo_concurrency.lockutils [req-cf61bc61-c007-4f26-88e6-b5958c17a202 req-b9030aa2-46a0-4f51-9753-c769b6ad060e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.981 2 DEBUG oslo_concurrency.lockutils [req-cf61bc61-c007-4f26-88e6-b5958c17a202 req-b9030aa2-46a0-4f51-9753-c769b6ad060e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.981 2 DEBUG oslo_concurrency.lockutils [req-cf61bc61-c007-4f26-88e6-b5958c17a202 req-b9030aa2-46a0-4f51-9753-c769b6ad060e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.982 2 DEBUG nova.compute.manager [req-cf61bc61-c007-4f26-88e6-b5958c17a202 req-b9030aa2-46a0-4f51-9753-c769b6ad060e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] No waiting events found dispatching network-vif-unplugged-8de4019e-8174-4b43-9510-73318fd6ff8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.982 2 DEBUG nova.compute.manager [req-cf61bc61-c007-4f26-88e6-b5958c17a202 req-b9030aa2-46a0-4f51-9753-c769b6ad060e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Received event network-vif-unplugged-8de4019e-8174-4b43-9510-73318fd6ff8d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.990 2 INFO nova.virt.libvirt.driver [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Deleting instance files /var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87_del#033[00m
Oct  2 08:37:19 np0005466030 nova_compute[230518]: 2025-10-02 12:37:19.991 2 INFO nova.virt.libvirt.driver [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Deletion of /var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87_del complete#033[00m
Oct  2 08:37:20 np0005466030 nova_compute[230518]: 2025-10-02 12:37:20.061 2 INFO nova.compute.manager [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:37:20 np0005466030 nova_compute[230518]: 2025-10-02 12:37:20.061 2 DEBUG oslo.service.loopingcall [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:37:20 np0005466030 nova_compute[230518]: 2025-10-02 12:37:20.062 2 DEBUG nova.compute.manager [-] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:37:20 np0005466030 nova_compute[230518]: 2025-10-02 12:37:20.062 2 DEBUG nova.network.neutron [-] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:37:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:20.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:20.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:21 np0005466030 nova_compute[230518]: 2025-10-02 12:37:21.632 2 DEBUG nova.network.neutron [-] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:37:21 np0005466030 nova_compute[230518]: 2025-10-02 12:37:21.670 2 INFO nova.compute.manager [-] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Took 1.61 seconds to deallocate network for instance.#033[00m
Oct  2 08:37:21 np0005466030 nova_compute[230518]: 2025-10-02 12:37:21.737 2 DEBUG nova.compute.manager [req-59c4f70a-5f71-4f1c-989c-d032a4d4de25 req-8924b32f-1ba4-4fbc-8cca-842993e5f02c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Received event network-vif-deleted-8de4019e-8174-4b43-9510-73318fd6ff8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:21 np0005466030 nova_compute[230518]: 2025-10-02 12:37:21.741 2 DEBUG oslo_concurrency.lockutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:21 np0005466030 nova_compute[230518]: 2025-10-02 12:37:21.742 2 DEBUG oslo_concurrency.lockutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:21 np0005466030 nova_compute[230518]: 2025-10-02 12:37:21.824 2 DEBUG oslo_concurrency.processutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:22 np0005466030 nova_compute[230518]: 2025-10-02 12:37:22.138 2 DEBUG nova.compute.manager [req-3fd0e222-fde7-4840-9f53-d0c030d02989 req-016388e6-d1f6-4516-88b9-d372d67201ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Received event network-vif-plugged-8de4019e-8174-4b43-9510-73318fd6ff8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:22 np0005466030 nova_compute[230518]: 2025-10-02 12:37:22.138 2 DEBUG oslo_concurrency.lockutils [req-3fd0e222-fde7-4840-9f53-d0c030d02989 req-016388e6-d1f6-4516-88b9-d372d67201ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:22 np0005466030 nova_compute[230518]: 2025-10-02 12:37:22.138 2 DEBUG oslo_concurrency.lockutils [req-3fd0e222-fde7-4840-9f53-d0c030d02989 req-016388e6-d1f6-4516-88b9-d372d67201ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:22 np0005466030 nova_compute[230518]: 2025-10-02 12:37:22.139 2 DEBUG oslo_concurrency.lockutils [req-3fd0e222-fde7-4840-9f53-d0c030d02989 req-016388e6-d1f6-4516-88b9-d372d67201ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:22 np0005466030 nova_compute[230518]: 2025-10-02 12:37:22.139 2 DEBUG nova.compute.manager [req-3fd0e222-fde7-4840-9f53-d0c030d02989 req-016388e6-d1f6-4516-88b9-d372d67201ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] No waiting events found dispatching network-vif-plugged-8de4019e-8174-4b43-9510-73318fd6ff8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:37:22 np0005466030 nova_compute[230518]: 2025-10-02 12:37:22.139 2 WARNING nova.compute.manager [req-3fd0e222-fde7-4840-9f53-d0c030d02989 req-016388e6-d1f6-4516-88b9-d372d67201ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Received unexpected event network-vif-plugged-8de4019e-8174-4b43-9510-73318fd6ff8d for instance with vm_state deleted and task_state None.#033[00m
Oct  2 08:37:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:37:22 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2159854273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:37:22 np0005466030 nova_compute[230518]: 2025-10-02 12:37:22.313 2 DEBUG oslo_concurrency.processutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:22 np0005466030 nova_compute[230518]: 2025-10-02 12:37:22.320 2 DEBUG nova.compute.provider_tree [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:37:22 np0005466030 nova_compute[230518]: 2025-10-02 12:37:22.339 2 DEBUG nova.scheduler.client.report [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:37:22 np0005466030 nova_compute[230518]: 2025-10-02 12:37:22.372 2 DEBUG oslo_concurrency.lockutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:22 np0005466030 nova_compute[230518]: 2025-10-02 12:37:22.428 2 INFO nova.scheduler.client.report [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Deleted allocations for instance 1319f89a-ec57-41aa-b53e-07f2280a0d87#033[00m
Oct  2 08:37:22 np0005466030 nova_compute[230518]: 2025-10-02 12:37:22.555 2 DEBUG oslo_concurrency.lockutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:22.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:22.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:23 np0005466030 nova_compute[230518]: 2025-10-02 12:37:23.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:24 np0005466030 nova_compute[230518]: 2025-10-02 12:37:24.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:37:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:24.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:37:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:24.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:25 np0005466030 podman[268303]: 2025-10-02 12:37:25.839451042 +0000 UTC m=+0.069285651 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:37:25 np0005466030 podman[268302]: 2025-10-02 12:37:25.883246709 +0000 UTC m=+0.116567398 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Oct  2 08:37:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:25.933 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:25.933 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:25.933 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:26.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:26.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e252 e252: 3 total, 3 up, 3 in
Oct  2 08:37:28 np0005466030 nova_compute[230518]: 2025-10-02 12:37:28.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:28.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:28.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:29 np0005466030 nova_compute[230518]: 2025-10-02 12:37:29.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:30 np0005466030 nova_compute[230518]: 2025-10-02 12:37:30.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e253 e253: 3 total, 3 up, 3 in
Oct  2 08:37:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:30.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:31.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e254 e254: 3 total, 3 up, 3 in
Oct  2 08:37:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:32.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:37:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:33.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:37:33 np0005466030 nova_compute[230518]: 2025-10-02 12:37:33.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:34 np0005466030 nova_compute[230518]: 2025-10-02 12:37:34.468 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408639.4644618, 1319f89a-ec57-41aa-b53e-07f2280a0d87 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:37:34 np0005466030 nova_compute[230518]: 2025-10-02 12:37:34.469 2 INFO nova.compute.manager [-] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:37:34 np0005466030 nova_compute[230518]: 2025-10-02 12:37:34.494 2 DEBUG nova.compute.manager [None req-77f28c72-c2c7-4c12-bf36-1910976c5e75 - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:37:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:34 np0005466030 nova_compute[230518]: 2025-10-02 12:37:34.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:37:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:34.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:37:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:37:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:35.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #85. Immutable memtables: 0.
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.066041) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 85
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656066135, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 493, "num_deletes": 251, "total_data_size": 634921, "memory_usage": 645576, "flush_reason": "Manual Compaction"}
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #86: started
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656073449, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 86, "file_size": 418588, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43911, "largest_seqno": 44399, "table_properties": {"data_size": 415917, "index_size": 707, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6515, "raw_average_key_size": 19, "raw_value_size": 410545, "raw_average_value_size": 1200, "num_data_blocks": 31, "num_entries": 342, "num_filter_entries": 342, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408636, "oldest_key_time": 1759408636, "file_creation_time": 1759408656, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 7457 microseconds, and 2624 cpu microseconds.
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.073506) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #86: 418588 bytes OK
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.073550) [db/memtable_list.cc:519] [default] Level-0 commit table #86 started
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.076073) [db/memtable_list.cc:722] [default] Level-0 commit table #86: memtable #1 done
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.076133) EVENT_LOG_v1 {"time_micros": 1759408656076121, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.076161) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 631951, prev total WAL file size 631951, number of live WAL files 2.
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000082.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.076772) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [86(408KB)], [84(10MB)]
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656076803, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [86], "files_L6": [84], "score": -1, "input_data_size": 11588454, "oldest_snapshot_seqno": -1}
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #87: 6647 keys, 9628382 bytes, temperature: kUnknown
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656132111, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 87, "file_size": 9628382, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9584643, "index_size": 26027, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 172641, "raw_average_key_size": 25, "raw_value_size": 9466374, "raw_average_value_size": 1424, "num_data_blocks": 1026, "num_entries": 6647, "num_filter_entries": 6647, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759408656, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 87, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.132630) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 9628382 bytes
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.134364) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.0 rd, 173.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.7 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(50.7) write-amplify(23.0) OK, records in: 7161, records dropped: 514 output_compression: NoCompression
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.134412) EVENT_LOG_v1 {"time_micros": 1759408656134391, "job": 52, "event": "compaction_finished", "compaction_time_micros": 55442, "compaction_time_cpu_micros": 22692, "output_level": 6, "num_output_files": 1, "total_output_size": 9628382, "num_input_records": 7161, "num_output_records": 6647, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656134870, "job": 52, "event": "table_file_deletion", "file_number": 86}
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000084.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656139758, "job": 52, "event": "table_file_deletion", "file_number": 84}
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.076695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.139898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.139906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.139910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.139913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.139916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:37:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:36.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:37:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:37.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:37:37 np0005466030 podman[268349]: 2025-10-02 12:37:37.866528158 +0000 UTC m=+0.096157346 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 08:37:37 np0005466030 podman[268350]: 2025-10-02 12:37:37.873862389 +0000 UTC m=+0.097303352 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:37:38 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:38.066 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:37:38 np0005466030 nova_compute[230518]: 2025-10-02 12:37:38.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:38 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:38.067 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:37:38 np0005466030 nova_compute[230518]: 2025-10-02 12:37:38.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:38.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:39.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:39.069 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:39 np0005466030 nova_compute[230518]: 2025-10-02 12:37:39.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:40 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:37:40 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:37:40 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:37:40 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:37:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:40.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e255 e255: 3 total, 3 up, 3 in
Oct  2 08:37:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:41.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:37:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:37:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:37:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:42.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:43.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:43 np0005466030 nova_compute[230518]: 2025-10-02 12:37:43.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:44 np0005466030 nova_compute[230518]: 2025-10-02 12:37:44.360 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquiring lock "a1979f95-4814-4098-8baa-6c4497f20612" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:44 np0005466030 nova_compute[230518]: 2025-10-02 12:37:44.361 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:44 np0005466030 nova_compute[230518]: 2025-10-02 12:37:44.381 2 DEBUG nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:37:44 np0005466030 nova_compute[230518]: 2025-10-02 12:37:44.453 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:44 np0005466030 nova_compute[230518]: 2025-10-02 12:37:44.453 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:44 np0005466030 nova_compute[230518]: 2025-10-02 12:37:44.460 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:37:44 np0005466030 nova_compute[230518]: 2025-10-02 12:37:44.460 2 INFO nova.compute.claims [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:37:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:44 np0005466030 nova_compute[230518]: 2025-10-02 12:37:44.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:44 np0005466030 nova_compute[230518]: 2025-10-02 12:37:44.552 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:44.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:37:45 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1181985338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:37:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:37:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:45.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.023 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.031 2 DEBUG nova.compute.provider_tree [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.050 2 DEBUG nova.scheduler.client.report [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.090 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.091 2 DEBUG nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.139 2 DEBUG nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.139 2 DEBUG nova.network.neutron [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.158 2 INFO nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.181 2 DEBUG nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.304 2 DEBUG nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.306 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.306 2 INFO nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Creating image(s)#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.344 2 DEBUG nova.storage.rbd_utils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] rbd image a1979f95-4814-4098-8baa-6c4497f20612_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.377 2 DEBUG nova.storage.rbd_utils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] rbd image a1979f95-4814-4098-8baa-6c4497f20612_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.407 2 DEBUG nova.storage.rbd_utils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] rbd image a1979f95-4814-4098-8baa-6c4497f20612_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.412 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.513 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.514 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.515 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.516 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.545 2 DEBUG nova.storage.rbd_utils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] rbd image a1979f95-4814-4098-8baa-6c4497f20612_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.548 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a1979f95-4814-4098-8baa-6c4497f20612_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:45 np0005466030 nova_compute[230518]: 2025-10-02 12:37:45.586 2 DEBUG nova.policy [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e28e4c343f46426788534c9108c8a7a8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4179ec2dfcf6411faefd3d7d7e6356d0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:37:46 np0005466030 nova_compute[230518]: 2025-10-02 12:37:46.727 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a1979f95-4814-4098-8baa-6c4497f20612_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:46 np0005466030 nova_compute[230518]: 2025-10-02 12:37:46.777 2 DEBUG nova.network.neutron [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Successfully created port: 542026c4-106b-4023-8944-0947d2ba1fb9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:37:46 np0005466030 nova_compute[230518]: 2025-10-02 12:37:46.838 2 DEBUG nova.storage.rbd_utils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] resizing rbd image a1979f95-4814-4098-8baa-6c4497f20612_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:37:46 np0005466030 nova_compute[230518]: 2025-10-02 12:37:46.984 2 DEBUG nova.objects.instance [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lazy-loading 'migration_context' on Instance uuid a1979f95-4814-4098-8baa-6c4497f20612 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:37:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:46.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:37:47 np0005466030 nova_compute[230518]: 2025-10-02 12:37:47.009 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:37:47 np0005466030 nova_compute[230518]: 2025-10-02 12:37:47.010 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Ensure instance console log exists: /var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:37:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:47.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:47 np0005466030 nova_compute[230518]: 2025-10-02 12:37:47.034 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:47 np0005466030 nova_compute[230518]: 2025-10-02 12:37:47.035 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:47 np0005466030 nova_compute[230518]: 2025-10-02 12:37:47.036 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:47 np0005466030 nova_compute[230518]: 2025-10-02 12:37:47.940 2 DEBUG nova.network.neutron [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Successfully updated port: 542026c4-106b-4023-8944-0947d2ba1fb9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:37:47 np0005466030 nova_compute[230518]: 2025-10-02 12:37:47.959 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquiring lock "refresh_cache-a1979f95-4814-4098-8baa-6c4497f20612" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:37:47 np0005466030 nova_compute[230518]: 2025-10-02 12:37:47.960 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquired lock "refresh_cache-a1979f95-4814-4098-8baa-6c4497f20612" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:37:47 np0005466030 nova_compute[230518]: 2025-10-02 12:37:47.960 2 DEBUG nova.network.neutron [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:37:48 np0005466030 nova_compute[230518]: 2025-10-02 12:37:48.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:48 np0005466030 nova_compute[230518]: 2025-10-02 12:37:48.120 2 DEBUG nova.compute.manager [req-c04e70a5-ffd3-4b26-8c55-57e9ea32b848 req-1cf4ae43-76f8-44f4-ba5c-1008839d498d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Received event network-changed-542026c4-106b-4023-8944-0947d2ba1fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:48 np0005466030 nova_compute[230518]: 2025-10-02 12:37:48.120 2 DEBUG nova.compute.manager [req-c04e70a5-ffd3-4b26-8c55-57e9ea32b848 req-1cf4ae43-76f8-44f4-ba5c-1008839d498d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Refreshing instance network info cache due to event network-changed-542026c4-106b-4023-8944-0947d2ba1fb9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:37:48 np0005466030 nova_compute[230518]: 2025-10-02 12:37:48.120 2 DEBUG oslo_concurrency.lockutils [req-c04e70a5-ffd3-4b26-8c55-57e9ea32b848 req-1cf4ae43-76f8-44f4-ba5c-1008839d498d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a1979f95-4814-4098-8baa-6c4497f20612" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:37:48 np0005466030 nova_compute[230518]: 2025-10-02 12:37:48.222 2 DEBUG nova.network.neutron [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:37:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:48.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:49.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.073 2 DEBUG nova.network.neutron [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Updating instance_info_cache with network_info: [{"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.098 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Releasing lock "refresh_cache-a1979f95-4814-4098-8baa-6c4497f20612" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.098 2 DEBUG nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Instance network_info: |[{"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.099 2 DEBUG oslo_concurrency.lockutils [req-c04e70a5-ffd3-4b26-8c55-57e9ea32b848 req-1cf4ae43-76f8-44f4-ba5c-1008839d498d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a1979f95-4814-4098-8baa-6c4497f20612" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.099 2 DEBUG nova.network.neutron [req-c04e70a5-ffd3-4b26-8c55-57e9ea32b848 req-1cf4ae43-76f8-44f4-ba5c-1008839d498d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Refreshing network info cache for port 542026c4-106b-4023-8944-0947d2ba1fb9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.104 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Start _get_guest_xml network_info=[{"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.112 2 WARNING nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.123 2 DEBUG nova.virt.libvirt.host [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.124 2 DEBUG nova.virt.libvirt.host [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.132 2 DEBUG nova.virt.libvirt.host [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.133 2 DEBUG nova.virt.libvirt.host [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.134 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.134 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.135 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.135 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.135 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.136 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.136 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.136 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.136 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.136 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.137 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.137 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.140 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:37:49 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/811997453' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.616 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.659 2 DEBUG nova.storage.rbd_utils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] rbd image a1979f95-4814-4098-8baa-6c4497f20612_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:49 np0005466030 nova_compute[230518]: 2025-10-02 12:37:49.666 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:37:50 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3644381549' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.174 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.176 2 DEBUG nova.virt.libvirt.vif [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-2144240005',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servertagstestjson-server-2144240005',id=100,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4179ec2dfcf6411faefd3d7d7e6356d0',ramdisk_id='',reservation_id='r-nntdzlwu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-416799390',owner_user_name='tempest-ServerTagsTestJSON-416799390-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:45Z,user_data=None,user_id='e28e4c343f46426788534c9108c8a7a8',uuid=a1979f95-4814-4098-8baa-6c4497f20612,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.177 2 DEBUG nova.network.os_vif_util [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Converting VIF {"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.178 2 DEBUG nova.network.os_vif_util [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:32:6b,bridge_name='br-int',has_traffic_filtering=True,id=542026c4-106b-4023-8944-0947d2ba1fb9,network=Network(34a16246-d3b9-42cb-92f1-19ae3ccb345b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap542026c4-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.179 2 DEBUG nova.objects.instance [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lazy-loading 'pci_devices' on Instance uuid a1979f95-4814-4098-8baa-6c4497f20612 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.196 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:37:50 np0005466030 nova_compute[230518]:  <uuid>a1979f95-4814-4098-8baa-6c4497f20612</uuid>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:  <name>instance-00000064</name>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerTagsTestJSON-server-2144240005</nova:name>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:37:49</nova:creationTime>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:37:50 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:        <nova:user uuid="e28e4c343f46426788534c9108c8a7a8">tempest-ServerTagsTestJSON-416799390-project-member</nova:user>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:        <nova:project uuid="4179ec2dfcf6411faefd3d7d7e6356d0">tempest-ServerTagsTestJSON-416799390</nova:project>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:        <nova:port uuid="542026c4-106b-4023-8944-0947d2ba1fb9">
Oct  2 08:37:50 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <entry name="serial">a1979f95-4814-4098-8baa-6c4497f20612</entry>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <entry name="uuid">a1979f95-4814-4098-8baa-6c4497f20612</entry>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/a1979f95-4814-4098-8baa-6c4497f20612_disk">
Oct  2 08:37:50 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:37:50 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/a1979f95-4814-4098-8baa-6c4497f20612_disk.config">
Oct  2 08:37:50 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:37:50 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:1a:32:6b"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <target dev="tap542026c4-10"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612/console.log" append="off"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:37:50 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:37:50 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:37:50 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:37:50 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.197 2 DEBUG nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Preparing to wait for external event network-vif-plugged-542026c4-106b-4023-8944-0947d2ba1fb9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.198 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquiring lock "a1979f95-4814-4098-8baa-6c4497f20612-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.198 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.198 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.199 2 DEBUG nova.virt.libvirt.vif [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-2144240005',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servertagstestjson-server-2144240005',id=100,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4179ec2dfcf6411faefd3d7d7e6356d0',ramdisk_id='',reservation_id='r-nntdzlwu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-416799390',owner_user_name='tempest-ServerTagsTestJSON-416799390-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:45Z,user_data=None,user_id='e28e4c343f46426788534c9108c8a7a8',uuid=a1979f95-4814-4098-8baa-6c4497f20612,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.199 2 DEBUG nova.network.os_vif_util [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Converting VIF {"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.199 2 DEBUG nova.network.os_vif_util [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:32:6b,bridge_name='br-int',has_traffic_filtering=True,id=542026c4-106b-4023-8944-0947d2ba1fb9,network=Network(34a16246-d3b9-42cb-92f1-19ae3ccb345b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap542026c4-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.200 2 DEBUG os_vif [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:32:6b,bridge_name='br-int',has_traffic_filtering=True,id=542026c4-106b-4023-8944-0947d2ba1fb9,network=Network(34a16246-d3b9-42cb-92f1-19ae3ccb345b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap542026c4-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.201 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.201 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.203 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap542026c4-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.203 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap542026c4-10, col_values=(('external_ids', {'iface-id': '542026c4-106b-4023-8944-0947d2ba1fb9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:32:6b', 'vm-uuid': 'a1979f95-4814-4098-8baa-6c4497f20612'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:37:50 np0005466030 NetworkManager[44960]: <info>  [1759408670.2073] manager: (tap542026c4-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.213 2 INFO os_vif [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:32:6b,bridge_name='br-int',has_traffic_filtering=True,id=542026c4-106b-4023-8944-0947d2ba1fb9,network=Network(34a16246-d3b9-42cb-92f1-19ae3ccb345b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap542026c4-10')#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.267 2 DEBUG nova.network.neutron [req-c04e70a5-ffd3-4b26-8c55-57e9ea32b848 req-1cf4ae43-76f8-44f4-ba5c-1008839d498d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Updated VIF entry in instance network info cache for port 542026c4-106b-4023-8944-0947d2ba1fb9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.268 2 DEBUG nova.network.neutron [req-c04e70a5-ffd3-4b26-8c55-57e9ea32b848 req-1cf4ae43-76f8-44f4-ba5c-1008839d498d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Updating instance_info_cache with network_info: [{"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.280 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.281 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.281 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] No VIF found with MAC fa:16:3e:1a:32:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.282 2 INFO nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Using config drive#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.314 2 DEBUG nova.storage.rbd_utils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] rbd image a1979f95-4814-4098-8baa-6c4497f20612_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.322 2 DEBUG oslo_concurrency.lockutils [req-c04e70a5-ffd3-4b26-8c55-57e9ea32b848 req-1cf4ae43-76f8-44f4-ba5c-1008839d498d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a1979f95-4814-4098-8baa-6c4497f20612" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.597 2 INFO nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Creating config drive at /var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612/disk.config#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.602 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8goaylod execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.764 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8goaylod" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.806 2 DEBUG nova.storage.rbd_utils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] rbd image a1979f95-4814-4098-8baa-6c4497f20612_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:37:50 np0005466030 nova_compute[230518]: 2025-10-02 12:37:50.812 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612/disk.config a1979f95-4814-4098-8baa-6c4497f20612_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:37:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:51.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:37:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:51.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.089 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.089 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.089 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.090 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.090 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.134 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612/disk.config a1979f95-4814-4098-8baa-6c4497f20612_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.136 2 INFO nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Deleting local config drive /var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612/disk.config because it was imported into RBD.#033[00m
Oct  2 08:37:51 np0005466030 kernel: tap542026c4-10: entered promiscuous mode
Oct  2 08:37:51 np0005466030 NetworkManager[44960]: <info>  [1759408671.2346] manager: (tap542026c4-10): new Tun device (/org/freedesktop/NetworkManager/Devices/198)
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:51 np0005466030 ovn_controller[129257]: 2025-10-02T12:37:51Z|00418|binding|INFO|Claiming lport 542026c4-106b-4023-8944-0947d2ba1fb9 for this chassis.
Oct  2 08:37:51 np0005466030 ovn_controller[129257]: 2025-10-02T12:37:51Z|00419|binding|INFO|542026c4-106b-4023-8944-0947d2ba1fb9: Claiming fa:16:3e:1a:32:6b 10.100.0.14
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.284 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:32:6b 10.100.0.14'], port_security=['fa:16:3e:1a:32:6b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a1979f95-4814-4098-8baa-6c4497f20612', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34a16246-d3b9-42cb-92f1-19ae3ccb345b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4179ec2dfcf6411faefd3d7d7e6356d0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bba92fb6-f7b6-4607-89fe-9a7eb0e98f6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72da5d14-3a62-46d0-8ef9-a36e497c62b9, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=542026c4-106b-4023-8944-0947d2ba1fb9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.286 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 542026c4-106b-4023-8944-0947d2ba1fb9 in datapath 34a16246-d3b9-42cb-92f1-19ae3ccb345b bound to our chassis#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.288 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34a16246-d3b9-42cb-92f1-19ae3ccb345b#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.305 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a925734c-8b4f-4900-a282-c792751dea2a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.306 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap34a16246-d1 in ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.309 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap34a16246-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.309 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a2669279-e1f2-42cb-bad5-80db64552503]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.312 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2904b415-3aa5-4e5c-a850-3bf1a5b5044d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:51 np0005466030 systemd-udevd[269036]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:37:51 np0005466030 systemd-machined[188247]: New machine qemu-49-instance-00000064.
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.329 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3c5d58-41db-4fc0-b5ab-a78c1e3e0d76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:51 np0005466030 NetworkManager[44960]: <info>  [1759408671.3335] device (tap542026c4-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:37:51 np0005466030 NetworkManager[44960]: <info>  [1759408671.3346] device (tap542026c4-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:37:51 np0005466030 systemd[1]: Started Virtual Machine qemu-49-instance-00000064.
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:51 np0005466030 ovn_controller[129257]: 2025-10-02T12:37:51Z|00420|binding|INFO|Setting lport 542026c4-106b-4023-8944-0947d2ba1fb9 ovn-installed in OVS
Oct  2 08:37:51 np0005466030 ovn_controller[129257]: 2025-10-02T12:37:51Z|00421|binding|INFO|Setting lport 542026c4-106b-4023-8944-0947d2ba1fb9 up in Southbound
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.356 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[676d7b49-7557-4714-bcee-d8cef854d51f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:37:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.404 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[78e83d6a-a4f4-438f-84ae-94892dc1f2c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.415 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[096d8dcd-3d05-429e-bbfc-09d7cd83bcfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:51 np0005466030 NetworkManager[44960]: <info>  [1759408671.4178] manager: (tap34a16246-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/199)
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.463 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4a41862b-aa67-493b-bd00-b6371cd7ca0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.467 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f70ee5-a02e-48c3-bea2-b559059f66a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:51 np0005466030 NetworkManager[44960]: <info>  [1759408671.4943] device (tap34a16246-d0): carrier: link connected
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.500 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5b2b380c-0f4e-4e02-bb12-7b0e3f6166f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.519 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9400e5ff-4fdd-47b7-b4c9-1991a50d02b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34a16246-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:ec:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653505, 'reachable_time': 38834, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269069, 'error': None, 'target': 'ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.537 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[70123a10-a665-46b8-a75e-01622691071f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:ec21'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 653505, 'tstamp': 653505}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269070, 'error': None, 'target': 'ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.556 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb3684f-a6a7-408d-b27e-8d859cfdbfe5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34a16246-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:ec:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653505, 'reachable_time': 38834, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269071, 'error': None, 'target': 'ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.592 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[79e966f9-f71e-47d4-bf51-75dde1ea11bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.634 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.671 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ecd28d-16d4-450d-a256-34b382b47a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.674 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34a16246-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.674 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.675 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34a16246-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:51 np0005466030 kernel: tap34a16246-d0: entered promiscuous mode
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:51 np0005466030 NetworkManager[44960]: <info>  [1759408671.6791] manager: (tap34a16246-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.685 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34a16246-d0, col_values=(('external_ids', {'iface-id': '6dd3f30e-6e40-4db9-bff9-cbf55578f3e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:51 np0005466030 ovn_controller[129257]: 2025-10-02T12:37:51Z|00422|binding|INFO|Releasing lport 6dd3f30e-6e40-4db9-bff9-cbf55578f3e5 from this chassis (sb_readonly=0)
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.689 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/34a16246-d3b9-42cb-92f1-19ae3ccb345b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/34a16246-d3b9-42cb-92f1-19ae3ccb345b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.690 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[629de4ad-2d0d-4693-9eab-a326bfbce024]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.691 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-34a16246-d3b9-42cb-92f1-19ae3ccb345b
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/34a16246-d3b9-42cb-92f1-19ae3ccb345b.pid.haproxy
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 34a16246-d3b9-42cb-92f1-19ae3ccb345b
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:37:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.693 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b', 'env', 'PROCESS_TAG=haproxy-34a16246-d3b9-42cb-92f1-19ae3ccb345b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/34a16246-d3b9-42cb-92f1-19ae3ccb345b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.707 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.707 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.880 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.881 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4457MB free_disk=20.888534545898438GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.882 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.882 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.978 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance a1979f95-4814-4098-8baa-6c4497f20612 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.979 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  2 08:37:51 np0005466030 nova_compute[230518]: 2025-10-02 12:37:51.979 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  2 08:37:52 np0005466030 nova_compute[230518]: 2025-10-02 12:37:52.041 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:37:52 np0005466030 podman[269105]: 2025-10-02 12:37:52.085367635 +0000 UTC m=+0.029894011 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:37:52 np0005466030 podman[269105]: 2025-10-02 12:37:52.227411404 +0000 UTC m=+0.171937760 container create 88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:37:52 np0005466030 systemd[1]: Started libpod-conmon-88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4.scope.
Oct  2 08:37:52 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:37:52 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612b6bf1b23791fefeec94c60d80992f1e0c869f67aefa0e1f7a3f215a5e1cce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:37:52 np0005466030 podman[269105]: 2025-10-02 12:37:52.337354043 +0000 UTC m=+0.281880399 container init 88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  2 08:37:52 np0005466030 podman[269105]: 2025-10-02 12:37:52.344908111 +0000 UTC m=+0.289434467 container start 88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 08:37:52 np0005466030 neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b[269180]: [NOTICE]   (269186) : New worker (269188) forked
Oct  2 08:37:52 np0005466030 neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b[269180]: [NOTICE]   (269186) : Loading success.
Oct  2 08:37:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:37:52 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3572908026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:37:52 np0005466030 nova_compute[230518]: 2025-10-02 12:37:52.536 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:37:52 np0005466030 nova_compute[230518]: 2025-10-02 12:37:52.543 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:37:52 np0005466030 nova_compute[230518]: 2025-10-02 12:37:52.574 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:37:52 np0005466030 nova_compute[230518]: 2025-10-02 12:37:52.598 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 08:37:52 np0005466030 nova_compute[230518]: 2025-10-02 12:37:52.599 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:37:52 np0005466030 nova_compute[230518]: 2025-10-02 12:37:52.841 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408672.8409817, a1979f95-4814-4098-8baa-6c4497f20612 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:37:52 np0005466030 nova_compute[230518]: 2025-10-02 12:37:52.842 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] VM Started (Lifecycle Event)
Oct  2 08:37:52 np0005466030 nova_compute[230518]: 2025-10-02 12:37:52.866 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:37:52 np0005466030 nova_compute[230518]: 2025-10-02 12:37:52.870 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408672.8426058, a1979f95-4814-4098-8baa-6c4497f20612 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:37:52 np0005466030 nova_compute[230518]: 2025-10-02 12:37:52.870 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] VM Paused (Lifecycle Event)
Oct  2 08:37:52 np0005466030 nova_compute[230518]: 2025-10-02 12:37:52.890 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:37:52 np0005466030 nova_compute[230518]: 2025-10-02 12:37:52.895 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:37:52 np0005466030 nova_compute[230518]: 2025-10-02 12:37:52.913 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:37:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:37:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:53.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:37:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:37:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:53.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:37:53 np0005466030 nova_compute[230518]: 2025-10-02 12:37:53.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:53 np0005466030 nova_compute[230518]: 2025-10-02 12:37:53.594 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:37:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.582 2 DEBUG nova.compute.manager [req-a19d2a0d-da94-4497-b927-2909bd3ad8f7 req-34ccf94c-6581-4743-8963-0fed10d91934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Received event network-vif-plugged-542026c4-106b-4023-8944-0947d2ba1fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.583 2 DEBUG oslo_concurrency.lockutils [req-a19d2a0d-da94-4497-b927-2909bd3ad8f7 req-34ccf94c-6581-4743-8963-0fed10d91934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1979f95-4814-4098-8baa-6c4497f20612-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.583 2 DEBUG oslo_concurrency.lockutils [req-a19d2a0d-da94-4497-b927-2909bd3ad8f7 req-34ccf94c-6581-4743-8963-0fed10d91934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.583 2 DEBUG oslo_concurrency.lockutils [req-a19d2a0d-da94-4497-b927-2909bd3ad8f7 req-34ccf94c-6581-4743-8963-0fed10d91934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.583 2 DEBUG nova.compute.manager [req-a19d2a0d-da94-4497-b927-2909bd3ad8f7 req-34ccf94c-6581-4743-8963-0fed10d91934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Processing event network-vif-plugged-542026c4-106b-4023-8944-0947d2ba1fb9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.584 2 DEBUG nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.588 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408674.5881617, a1979f95-4814-4098-8baa-6c4497f20612 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.588 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] VM Resumed (Lifecycle Event)
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.590 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.593 2 INFO nova.virt.libvirt.driver [-] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Instance spawned successfully.
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.593 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.613 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.618 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.623 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.623 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.624 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.624 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.624 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.625 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.655 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.693 2 INFO nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Took 9.39 seconds to spawn the instance on the hypervisor.
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.694 2 DEBUG nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.842 2 INFO nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Took 10.41 seconds to build instance.
Oct  2 08:37:54 np0005466030 nova_compute[230518]: 2025-10-02 12:37:54.874 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:37:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:37:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:55.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:37:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:37:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:55.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:37:55 np0005466030 nova_compute[230518]: 2025-10-02 12:37:55.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:56 np0005466030 nova_compute[230518]: 2025-10-02 12:37:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:37:56 np0005466030 nova_compute[230518]: 2025-10-02 12:37:56.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:37:56 np0005466030 nova_compute[230518]: 2025-10-02 12:37:56.680 2 DEBUG nova.compute.manager [req-16aed235-c73d-4646-9f09-0e4bbc7f91c8 req-7620b91f-5182-49a6-a4a4-189f5cab0cad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Received event network-vif-plugged-542026c4-106b-4023-8944-0947d2ba1fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:37:56 np0005466030 nova_compute[230518]: 2025-10-02 12:37:56.681 2 DEBUG oslo_concurrency.lockutils [req-16aed235-c73d-4646-9f09-0e4bbc7f91c8 req-7620b91f-5182-49a6-a4a4-189f5cab0cad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1979f95-4814-4098-8baa-6c4497f20612-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:37:56 np0005466030 nova_compute[230518]: 2025-10-02 12:37:56.681 2 DEBUG oslo_concurrency.lockutils [req-16aed235-c73d-4646-9f09-0e4bbc7f91c8 req-7620b91f-5182-49a6-a4a4-189f5cab0cad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:37:56 np0005466030 nova_compute[230518]: 2025-10-02 12:37:56.681 2 DEBUG oslo_concurrency.lockutils [req-16aed235-c73d-4646-9f09-0e4bbc7f91c8 req-7620b91f-5182-49a6-a4a4-189f5cab0cad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:37:56 np0005466030 nova_compute[230518]: 2025-10-02 12:37:56.682 2 DEBUG nova.compute.manager [req-16aed235-c73d-4646-9f09-0e4bbc7f91c8 req-7620b91f-5182-49a6-a4a4-189f5cab0cad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] No waiting events found dispatching network-vif-plugged-542026c4-106b-4023-8944-0947d2ba1fb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:37:56 np0005466030 nova_compute[230518]: 2025-10-02 12:37:56.683 2 WARNING nova.compute.manager [req-16aed235-c73d-4646-9f09-0e4bbc7f91c8 req-7620b91f-5182-49a6-a4a4-189f5cab0cad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Received unexpected event network-vif-plugged-542026c4-106b-4023-8944-0947d2ba1fb9 for instance with vm_state active and task_state None.
Oct  2 08:37:56 np0005466030 podman[269200]: 2025-10-02 12:37:56.821380808 +0000 UTC m=+0.067560116 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:37:56 np0005466030 podman[269199]: 2025-10-02 12:37:56.8951619 +0000 UTC m=+0.139721957 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 08:37:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:57.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:57.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:58 np0005466030 nova_compute[230518]: 2025-10-02 12:37:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:37:58 np0005466030 nova_compute[230518]: 2025-10-02 12:37:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:37:58 np0005466030 nova_compute[230518]: 2025-10-02 12:37:58.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:37:58 np0005466030 nova_compute[230518]: 2025-10-02 12:37:58.745 2 DEBUG oslo_concurrency.lockutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquiring lock "a1979f95-4814-4098-8baa-6c4497f20612" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:37:58 np0005466030 nova_compute[230518]: 2025-10-02 12:37:58.746 2 DEBUG oslo_concurrency.lockutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:37:58 np0005466030 nova_compute[230518]: 2025-10-02 12:37:58.747 2 DEBUG oslo_concurrency.lockutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquiring lock "a1979f95-4814-4098-8baa-6c4497f20612-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:37:58 np0005466030 nova_compute[230518]: 2025-10-02 12:37:58.748 2 DEBUG oslo_concurrency.lockutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:37:58 np0005466030 nova_compute[230518]: 2025-10-02 12:37:58.748 2 DEBUG oslo_concurrency.lockutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:37:58 np0005466030 nova_compute[230518]: 2025-10-02 12:37:58.749 2 INFO nova.compute.manager [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Terminating instance#033[00m
Oct  2 08:37:58 np0005466030 nova_compute[230518]: 2025-10-02 12:37:58.751 2 DEBUG nova.compute.manager [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:37:58 np0005466030 kernel: tap542026c4-10 (unregistering): left promiscuous mode
Oct  2 08:37:58 np0005466030 NetworkManager[44960]: <info>  [1759408678.9193] device (tap542026c4-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:37:58 np0005466030 ovn_controller[129257]: 2025-10-02T12:37:58Z|00423|binding|INFO|Releasing lport 542026c4-106b-4023-8944-0947d2ba1fb9 from this chassis (sb_readonly=0)
Oct  2 08:37:58 np0005466030 nova_compute[230518]: 2025-10-02 12:37:58.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:58 np0005466030 ovn_controller[129257]: 2025-10-02T12:37:58Z|00424|binding|INFO|Setting lport 542026c4-106b-4023-8944-0947d2ba1fb9 down in Southbound
Oct  2 08:37:58 np0005466030 ovn_controller[129257]: 2025-10-02T12:37:58Z|00425|binding|INFO|Removing iface tap542026c4-10 ovn-installed in OVS
Oct  2 08:37:58 np0005466030 nova_compute[230518]: 2025-10-02 12:37:58.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:58.941 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:32:6b 10.100.0.14'], port_security=['fa:16:3e:1a:32:6b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a1979f95-4814-4098-8baa-6c4497f20612', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34a16246-d3b9-42cb-92f1-19ae3ccb345b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4179ec2dfcf6411faefd3d7d7e6356d0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bba92fb6-f7b6-4607-89fe-9a7eb0e98f6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72da5d14-3a62-46d0-8ef9-a36e497c62b9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=542026c4-106b-4023-8944-0947d2ba1fb9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:37:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:58.944 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 542026c4-106b-4023-8944-0947d2ba1fb9 in datapath 34a16246-d3b9-42cb-92f1-19ae3ccb345b unbound from our chassis#033[00m
Oct  2 08:37:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:58.947 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34a16246-d3b9-42cb-92f1-19ae3ccb345b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:37:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:58.949 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5556d2a3-004f-4fbc-92a2-f916c4f14551]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:58.950 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b namespace which is not needed anymore#033[00m
Oct  2 08:37:58 np0005466030 nova_compute[230518]: 2025-10-02 12:37:58.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:59 np0005466030 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000064.scope: Deactivated successfully.
Oct  2 08:37:59 np0005466030 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000064.scope: Consumed 5.754s CPU time.
Oct  2 08:37:59 np0005466030 systemd-machined[188247]: Machine qemu-49-instance-00000064 terminated.
Oct  2 08:37:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:59.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:37:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:37:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:59.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:37:59 np0005466030 neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b[269180]: [NOTICE]   (269186) : haproxy version is 2.8.14-c23fe91
Oct  2 08:37:59 np0005466030 neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b[269180]: [NOTICE]   (269186) : path to executable is /usr/sbin/haproxy
Oct  2 08:37:59 np0005466030 neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b[269180]: [WARNING]  (269186) : Exiting Master process...
Oct  2 08:37:59 np0005466030 neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b[269180]: [WARNING]  (269186) : Exiting Master process...
Oct  2 08:37:59 np0005466030 neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b[269180]: [ALERT]    (269186) : Current worker (269188) exited with code 143 (Terminated)
Oct  2 08:37:59 np0005466030 neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b[269180]: [WARNING]  (269186) : All workers exited. Exiting... (0)
Oct  2 08:37:59 np0005466030 systemd[1]: libpod-88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4.scope: Deactivated successfully.
Oct  2 08:37:59 np0005466030 podman[269265]: 2025-10-02 12:37:59.155062105 +0000 UTC m=+0.056383785 container died 88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 08:37:59 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4-userdata-shm.mount: Deactivated successfully.
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.197 2 INFO nova.virt.libvirt.driver [-] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Instance destroyed successfully.#033[00m
Oct  2 08:37:59 np0005466030 systemd[1]: var-lib-containers-storage-overlay-612b6bf1b23791fefeec94c60d80992f1e0c869f67aefa0e1f7a3f215a5e1cce-merged.mount: Deactivated successfully.
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.198 2 DEBUG nova.objects.instance [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lazy-loading 'resources' on Instance uuid a1979f95-4814-4098-8baa-6c4497f20612 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:37:59 np0005466030 podman[269265]: 2025-10-02 12:37:59.211572183 +0000 UTC m=+0.112893873 container cleanup 88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.229 2 DEBUG nova.virt.libvirt.vif [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:37:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-2144240005',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servertagstestjson-server-2144240005',id=100,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:37:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4179ec2dfcf6411faefd3d7d7e6356d0',ramdisk_id='',reservation_id='r-nntdzlwu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerTagsTestJSON-416799390',owner_user_name='tempest-ServerTagsTestJSON-416799390-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:37:54Z,user_data=None,user_id='e28e4c343f46426788534c9108c8a7a8',uuid=a1979f95-4814-4098-8baa-6c4497f20612,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.230 2 DEBUG nova.network.os_vif_util [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Converting VIF {"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.232 2 DEBUG nova.network.os_vif_util [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:32:6b,bridge_name='br-int',has_traffic_filtering=True,id=542026c4-106b-4023-8944-0947d2ba1fb9,network=Network(34a16246-d3b9-42cb-92f1-19ae3ccb345b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap542026c4-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.232 2 DEBUG os_vif [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:32:6b,bridge_name='br-int',has_traffic_filtering=True,id=542026c4-106b-4023-8944-0947d2ba1fb9,network=Network(34a16246-d3b9-42cb-92f1-19ae3ccb345b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap542026c4-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.235 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap542026c4-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:59 np0005466030 systemd[1]: libpod-conmon-88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4.scope: Deactivated successfully.
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.242 2 INFO os_vif [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:32:6b,bridge_name='br-int',has_traffic_filtering=True,id=542026c4-106b-4023-8944-0947d2ba1fb9,network=Network(34a16246-d3b9-42cb-92f1-19ae3ccb345b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap542026c4-10')#033[00m
Oct  2 08:37:59 np0005466030 podman[269304]: 2025-10-02 12:37:59.301569365 +0000 UTC m=+0.057164330 container remove 88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:37:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:59.310 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c87358-0693-4c6a-92cd-16681e89ee49]: (4, ('Thu Oct  2 12:37:59 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b (88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4)\n88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4\nThu Oct  2 12:37:59 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b (88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4)\n88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:59.312 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea0cbd6-8bbc-4b5a-8a5a-0e3796d9f479]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:59.314 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34a16246-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:59 np0005466030 kernel: tap34a16246-d0: left promiscuous mode
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:37:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:59.347 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ce513b-37f6-428e-8699-167fbd65c26b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:59.373 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[108b7376-c0b7-4e40-a956-7f821b84c981]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:59.375 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[064fe824-3f8e-4c97-972a-c817aa5706b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:59.394 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5df94513-9a6f-42fd-93d5-508215bb4f9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653495, 'reachable_time': 16772, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269340, 'error': None, 'target': 'ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.396 2 DEBUG nova.compute.manager [req-2fbeb0b2-a8b9-48fa-b465-c233eca176f7 req-8f3c3fcf-985b-485c-b1f1-1ba22d5ae52e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Received event network-vif-unplugged-542026c4-106b-4023-8944-0947d2ba1fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.398 2 DEBUG oslo_concurrency.lockutils [req-2fbeb0b2-a8b9-48fa-b465-c233eca176f7 req-8f3c3fcf-985b-485c-b1f1-1ba22d5ae52e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1979f95-4814-4098-8baa-6c4497f20612-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.398 2 DEBUG oslo_concurrency.lockutils [req-2fbeb0b2-a8b9-48fa-b465-c233eca176f7 req-8f3c3fcf-985b-485c-b1f1-1ba22d5ae52e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.399 2 DEBUG oslo_concurrency.lockutils [req-2fbeb0b2-a8b9-48fa-b465-c233eca176f7 req-8f3c3fcf-985b-485c-b1f1-1ba22d5ae52e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.400 2 DEBUG nova.compute.manager [req-2fbeb0b2-a8b9-48fa-b465-c233eca176f7 req-8f3c3fcf-985b-485c-b1f1-1ba22d5ae52e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] No waiting events found dispatching network-vif-unplugged-542026c4-106b-4023-8944-0947d2ba1fb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:37:59 np0005466030 nova_compute[230518]: 2025-10-02 12:37:59.400 2 DEBUG nova.compute.manager [req-2fbeb0b2-a8b9-48fa-b465-c233eca176f7 req-8f3c3fcf-985b-485c-b1f1-1ba22d5ae52e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Received event network-vif-unplugged-542026c4-106b-4023-8944-0947d2ba1fb9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  2 08:37:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:59.400 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct  2 08:37:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:37:59.401 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9b3f31-7b36-4865-86ca-ee164c2ec3e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:37:59 np0005466030 systemd[1]: run-netns-ovnmeta\x2d34a16246\x2dd3b9\x2d42cb\x2d92f1\x2d19ae3ccb345b.mount: Deactivated successfully.
Oct  2 08:37:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:00 np0005466030 nova_compute[230518]: 2025-10-02 12:38:00.373 2 INFO nova.virt.libvirt.driver [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Deleting instance files /var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612_del
Oct  2 08:38:00 np0005466030 nova_compute[230518]: 2025-10-02 12:38:00.375 2 INFO nova.virt.libvirt.driver [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Deletion of /var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612_del complete
Oct  2 08:38:00 np0005466030 nova_compute[230518]: 2025-10-02 12:38:00.434 2 INFO nova.compute.manager [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Took 1.68 seconds to destroy the instance on the hypervisor.
Oct  2 08:38:00 np0005466030 nova_compute[230518]: 2025-10-02 12:38:00.435 2 DEBUG oslo.service.loopingcall [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:38:00 np0005466030 nova_compute[230518]: 2025-10-02 12:38:00.435 2 DEBUG nova.compute.manager [-] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:38:00 np0005466030 nova_compute[230518]: 2025-10-02 12:38:00.436 2 DEBUG nova.network.neutron [-] [instance: a1979f95-4814-4098-8baa-6c4497f20612] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:38:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:01.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:01.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:01 np0005466030 nova_compute[230518]: 2025-10-02 12:38:01.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:38:01 np0005466030 nova_compute[230518]: 2025-10-02 12:38:01.488 2 DEBUG nova.compute.manager [req-bf10eae7-9d4c-474c-baea-e518879e19a4 req-a4e53f8d-1798-49ec-b94c-21f7d035c972 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Received event network-vif-plugged-542026c4-106b-4023-8944-0947d2ba1fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:38:01 np0005466030 nova_compute[230518]: 2025-10-02 12:38:01.489 2 DEBUG oslo_concurrency.lockutils [req-bf10eae7-9d4c-474c-baea-e518879e19a4 req-a4e53f8d-1798-49ec-b94c-21f7d035c972 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1979f95-4814-4098-8baa-6c4497f20612-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:38:01 np0005466030 nova_compute[230518]: 2025-10-02 12:38:01.489 2 DEBUG oslo_concurrency.lockutils [req-bf10eae7-9d4c-474c-baea-e518879e19a4 req-a4e53f8d-1798-49ec-b94c-21f7d035c972 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:38:01 np0005466030 nova_compute[230518]: 2025-10-02 12:38:01.490 2 DEBUG oslo_concurrency.lockutils [req-bf10eae7-9d4c-474c-baea-e518879e19a4 req-a4e53f8d-1798-49ec-b94c-21f7d035c972 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:38:01 np0005466030 nova_compute[230518]: 2025-10-02 12:38:01.490 2 DEBUG nova.compute.manager [req-bf10eae7-9d4c-474c-baea-e518879e19a4 req-a4e53f8d-1798-49ec-b94c-21f7d035c972 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] No waiting events found dispatching network-vif-plugged-542026c4-106b-4023-8944-0947d2ba1fb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:38:01 np0005466030 nova_compute[230518]: 2025-10-02 12:38:01.491 2 WARNING nova.compute.manager [req-bf10eae7-9d4c-474c-baea-e518879e19a4 req-a4e53f8d-1798-49ec-b94c-21f7d035c972 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Received unexpected event network-vif-plugged-542026c4-106b-4023-8944-0947d2ba1fb9 for instance with vm_state active and task_state deleting.
Oct  2 08:38:02 np0005466030 nova_compute[230518]: 2025-10-02 12:38:02.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:38:02 np0005466030 nova_compute[230518]: 2025-10-02 12:38:02.284 2 DEBUG nova.network.neutron [-] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:38:02 np0005466030 nova_compute[230518]: 2025-10-02 12:38:02.305 2 INFO nova.compute.manager [-] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Took 1.87 seconds to deallocate network for instance.
Oct  2 08:38:02 np0005466030 nova_compute[230518]: 2025-10-02 12:38:02.359 2 DEBUG oslo_concurrency.lockutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:38:02 np0005466030 nova_compute[230518]: 2025-10-02 12:38:02.360 2 DEBUG oslo_concurrency.lockutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:38:02 np0005466030 nova_compute[230518]: 2025-10-02 12:38:02.413 2 DEBUG oslo_concurrency.processutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:38:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:38:02 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2909892755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:38:02 np0005466030 nova_compute[230518]: 2025-10-02 12:38:02.923 2 DEBUG oslo_concurrency.processutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:38:02 np0005466030 nova_compute[230518]: 2025-10-02 12:38:02.931 2 DEBUG nova.compute.provider_tree [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:38:02 np0005466030 nova_compute[230518]: 2025-10-02 12:38:02.952 2 DEBUG nova.scheduler.client.report [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:38:02 np0005466030 nova_compute[230518]: 2025-10-02 12:38:02.988 2 DEBUG oslo_concurrency.lockutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:38:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:03.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:03 np0005466030 nova_compute[230518]: 2025-10-02 12:38:03.032 2 INFO nova.scheduler.client.report [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Deleted allocations for instance a1979f95-4814-4098-8baa-6c4497f20612
Oct  2 08:38:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:03.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:03 np0005466030 nova_compute[230518]: 2025-10-02 12:38:03.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:38:03 np0005466030 nova_compute[230518]: 2025-10-02 12:38:03.125 2 DEBUG oslo_concurrency.lockutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:38:03 np0005466030 nova_compute[230518]: 2025-10-02 12:38:03.723 2 DEBUG nova.compute.manager [req-50085794-0949-449b-9333-da4e745de1f6 req-b46e4fcd-0804-4eda-813b-a35176a8fe23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Received event network-vif-deleted-542026c4-106b-4023-8944-0947d2ba1fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:38:04 np0005466030 nova_compute[230518]: 2025-10-02 12:38:04.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:38:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:05.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:05.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:06 np0005466030 nova_compute[230518]: 2025-10-02 12:38:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:38:06 np0005466030 nova_compute[230518]: 2025-10-02 12:38:06.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:38:06 np0005466030 nova_compute[230518]: 2025-10-02 12:38:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 08:38:06 np0005466030 nova_compute[230518]: 2025-10-02 12:38:06.078 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  2 08:38:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 08:38:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:07.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 08:38:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:07.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:07 np0005466030 nova_compute[230518]: 2025-10-02 12:38:07.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:38:08 np0005466030 nova_compute[230518]: 2025-10-02 12:38:08.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:38:08 np0005466030 podman[269364]: 2025-10-02 12:38:08.809341218 +0000 UTC m=+0.065425889 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  2 08:38:08 np0005466030 podman[269365]: 2025-10-02 12:38:08.81546287 +0000 UTC m=+0.069698114 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:38:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:09.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:09.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:09 np0005466030 nova_compute[230518]: 2025-10-02 12:38:09.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:38:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e256 e256: 3 total, 3 up, 3 in
Oct  2 08:38:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:11.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:11.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e257 e257: 3 total, 3 up, 3 in
Oct  2 08:38:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e258 e258: 3 total, 3 up, 3 in
Oct  2 08:38:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:38:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:13.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:38:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:13.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:13 np0005466030 nova_compute[230518]: 2025-10-02 12:38:13.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:38:14 np0005466030 nova_compute[230518]: 2025-10-02 12:38:14.193 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408679.1927392, a1979f95-4814-4098-8baa-6c4497f20612 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:38:14 np0005466030 nova_compute[230518]: 2025-10-02 12:38:14.193 2 INFO nova.compute.manager [-] [instance: a1979f95-4814-4098-8baa-6c4497f20612] VM Stopped (Lifecycle Event)
Oct  2 08:38:14 np0005466030 nova_compute[230518]: 2025-10-02 12:38:14.222 2 DEBUG nova.compute.manager [None req-aa62d783-04a0-4bc6-ad7a-156a188e04f2 - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:38:14 np0005466030 nova_compute[230518]: 2025-10-02 12:38:14.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:38:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:38:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:15.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:38:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:15.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:17.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:17.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:18 np0005466030 nova_compute[230518]: 2025-10-02 12:38:18.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:38:18 np0005466030 nova_compute[230518]: 2025-10-02 12:38:18.355 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "7cd53cbf-91c2-4750-a4c2-551e50950035" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:38:18 np0005466030 nova_compute[230518]: 2025-10-02 12:38:18.355 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "7cd53cbf-91c2-4750-a4c2-551e50950035" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:38:18 np0005466030 nova_compute[230518]: 2025-10-02 12:38:18.370 2 DEBUG nova.compute.manager [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:38:18 np0005466030 nova_compute[230518]: 2025-10-02 12:38:18.438 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:38:18 np0005466030 nova_compute[230518]: 2025-10-02 12:38:18.438 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:38:18 np0005466030 nova_compute[230518]: 2025-10-02 12:38:18.445 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:38:18 np0005466030 nova_compute[230518]: 2025-10-02 12:38:18.446 2 INFO nova.compute.claims [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:38:18 np0005466030 nova_compute[230518]: 2025-10-02 12:38:18.538 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:38:18 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct  2 08:38:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:19.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:38:19 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/666265248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:38:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:19.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.082 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.090 2 DEBUG nova.compute.provider_tree [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.254 2 DEBUG nova.scheduler.client.report [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.297 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.298 2 DEBUG nova.compute.manager [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.423 2 DEBUG nova.compute.manager [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.449 2 INFO nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.485 2 DEBUG nova.compute.manager [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:38:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.602 2 DEBUG nova.compute.manager [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.604 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.604 2 INFO nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Creating image(s)#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.638 2 DEBUG nova.storage.rbd_utils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.675 2 DEBUG nova.storage.rbd_utils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.706 2 DEBUG nova.storage.rbd_utils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.711 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.786 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.787 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.788 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.788 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.813 2 DEBUG nova.storage.rbd_utils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:19 np0005466030 nova_compute[230518]: 2025-10-02 12:38:19.817 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 7cd53cbf-91c2-4750-a4c2-551e50950035_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:20 np0005466030 systemd[1]: Starting dnf makecache...
Oct  2 08:38:21 np0005466030 dnf[269515]: Metadata cache refreshed recently.
Oct  2 08:38:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:21.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:21 np0005466030 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  2 08:38:21 np0005466030 systemd[1]: Finished dnf makecache.
Oct  2 08:38:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:38:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:21.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:38:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 e259: 3 total, 3 up, 3 in
Oct  2 08:38:21 np0005466030 nova_compute[230518]: 2025-10-02 12:38:21.927 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 7cd53cbf-91c2-4750-a4c2-551e50950035_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.029 2 DEBUG nova.storage.rbd_utils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] resizing rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.683 2 DEBUG nova.objects.instance [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'migration_context' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.698 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.698 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Ensure instance console log exists: /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.698 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.699 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.699 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.700 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.706 2 WARNING nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.710 2 DEBUG nova.virt.libvirt.host [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.710 2 DEBUG nova.virt.libvirt.host [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.713 2 DEBUG nova.virt.libvirt.host [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.713 2 DEBUG nova.virt.libvirt.host [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.714 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.714 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.715 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.715 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.715 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.715 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.715 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.715 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.716 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.716 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.716 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.716 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:38:22 np0005466030 nova_compute[230518]: 2025-10-02 12:38:22.719 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:23.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:23.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:23 np0005466030 nova_compute[230518]: 2025-10-02 12:38:23.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:38:23 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2278695610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:38:23 np0005466030 nova_compute[230518]: 2025-10-02 12:38:23.231 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:23 np0005466030 nova_compute[230518]: 2025-10-02 12:38:23.261 2 DEBUG nova.storage.rbd_utils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:23 np0005466030 nova_compute[230518]: 2025-10-02 12:38:23.267 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:38:23 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1185421420' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:38:23 np0005466030 nova_compute[230518]: 2025-10-02 12:38:23.716 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:23 np0005466030 nova_compute[230518]: 2025-10-02 12:38:23.718 2 DEBUG nova.objects.instance [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:23 np0005466030 nova_compute[230518]: 2025-10-02 12:38:23.734 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:38:23 np0005466030 nova_compute[230518]:  <uuid>7cd53cbf-91c2-4750-a4c2-551e50950035</uuid>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:  <name>instance-00000067</name>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerShowV254Test-server-609721834</nova:name>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:38:22</nova:creationTime>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:38:23 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:        <nova:user uuid="0b4b918d10704ca5852d80098d253220">tempest-ServerShowV254Test-555313685-project-member</nova:user>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:        <nova:project uuid="99ed62753466455f8b5795e12d35034e">tempest-ServerShowV254Test-555313685</nova:project>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <nova:ports/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <entry name="serial">7cd53cbf-91c2-4750-a4c2-551e50950035</entry>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <entry name="uuid">7cd53cbf-91c2-4750-a4c2-551e50950035</entry>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/7cd53cbf-91c2-4750-a4c2-551e50950035_disk">
Oct  2 08:38:23 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:38:23 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config">
Oct  2 08:38:23 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:38:23 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/console.log" append="off"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:38:23 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:38:23 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:38:23 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:38:23 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:38:23 np0005466030 nova_compute[230518]: 2025-10-02 12:38:23.829 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:38:23 np0005466030 nova_compute[230518]: 2025-10-02 12:38:23.830 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:38:23 np0005466030 nova_compute[230518]: 2025-10-02 12:38:23.831 2 INFO nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Using config drive#033[00m
Oct  2 08:38:23 np0005466030 nova_compute[230518]: 2025-10-02 12:38:23.856 2 DEBUG nova.storage.rbd_utils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:24 np0005466030 nova_compute[230518]: 2025-10-02 12:38:24.072 2 INFO nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Creating config drive at /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config#033[00m
Oct  2 08:38:24 np0005466030 nova_compute[230518]: 2025-10-02 12:38:24.078 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmneraay0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:24 np0005466030 nova_compute[230518]: 2025-10-02 12:38:24.212 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmneraay0" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:24 np0005466030 nova_compute[230518]: 2025-10-02 12:38:24.247 2 DEBUG nova.storage.rbd_utils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:24 np0005466030 nova_compute[230518]: 2025-10-02 12:38:24.253 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:24 np0005466030 nova_compute[230518]: 2025-10-02 12:38:24.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:25.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:25.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:25 np0005466030 nova_compute[230518]: 2025-10-02 12:38:25.130 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.877s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:25 np0005466030 nova_compute[230518]: 2025-10-02 12:38:25.131 2 INFO nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Deleting local config drive /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config because it was imported into RBD.#033[00m
Oct  2 08:38:25 np0005466030 systemd-machined[188247]: New machine qemu-50-instance-00000067.
Oct  2 08:38:25 np0005466030 systemd[1]: Started Virtual Machine qemu-50-instance-00000067.
Oct  2 08:38:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:38:25.934 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:38:25.936 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:38:25.936 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.418 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408706.4179187, 7cd53cbf-91c2-4750-a4c2-551e50950035 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.418 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.422 2 DEBUG nova.compute.manager [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.422 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.425 2 INFO nova.virt.libvirt.driver [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance spawned successfully.#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.425 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.445 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.450 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.454 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.454 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.455 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.455 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.455 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.456 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.466 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.467 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408706.4213724, 7cd53cbf-91c2-4750-a4c2-551e50950035 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.467 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] VM Started (Lifecycle Event)#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.485 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.489 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.512 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.516 2 INFO nova.compute.manager [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Took 6.91 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.516 2 DEBUG nova.compute.manager [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.560 2 INFO nova.compute.manager [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Took 8.15 seconds to build instance.#033[00m
Oct  2 08:38:26 np0005466030 nova_compute[230518]: 2025-10-02 12:38:26.574 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "7cd53cbf-91c2-4750-a4c2-551e50950035" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:38:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:27.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:38:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:27.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:27 np0005466030 podman[269767]: 2025-10-02 12:38:27.848542732 +0000 UTC m=+0.081725652 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:38:27 np0005466030 podman[269766]: 2025-10-02 12:38:27.870327018 +0000 UTC m=+0.114514214 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 08:38:28 np0005466030 nova_compute[230518]: 2025-10-02 12:38:28.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:38:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:29.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:38:29 np0005466030 nova_compute[230518]: 2025-10-02 12:38:29.059 2 INFO nova.compute.manager [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Rebuilding instance#033[00m
Oct  2 08:38:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:29.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:29 np0005466030 nova_compute[230518]: 2025-10-02 12:38:29.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:38:29 np0005466030 nova_compute[230518]: 2025-10-02 12:38:29.371 2 DEBUG nova.objects.instance [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:38:29 np0005466030 nova_compute[230518]: 2025-10-02 12:38:29.392 2 DEBUG nova.compute.manager [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:38:29 np0005466030 nova_compute[230518]: 2025-10-02 12:38:29.441 2 DEBUG nova.objects.instance [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'pci_requests' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:38:29 np0005466030 nova_compute[230518]: 2025-10-02 12:38:29.455 2 DEBUG nova.objects.instance [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:38:29 np0005466030 nova_compute[230518]: 2025-10-02 12:38:29.470 2 DEBUG nova.objects.instance [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'resources' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:38:29 np0005466030 nova_compute[230518]: 2025-10-02 12:38:29.482 2 DEBUG nova.objects.instance [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'migration_context' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:38:29 np0005466030 nova_compute[230518]: 2025-10-02 12:38:29.498 2 DEBUG nova.objects.instance [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct  2 08:38:29 np0005466030 nova_compute[230518]: 2025-10-02 12:38:29.503 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct  2 08:38:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:31.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:31.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:33.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:38:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:33.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:38:33 np0005466030 nova_compute[230518]: 2025-10-02 12:38:33.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:38:34 np0005466030 nova_compute[230518]: 2025-10-02 12:38:34.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:38:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:35.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:38:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:35.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:38:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:37.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:37.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:38 np0005466030 nova_compute[230518]: 2025-10-02 12:38:38.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:38:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:39.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:39.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:39 np0005466030 nova_compute[230518]: 2025-10-02 12:38:39.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:38:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:39 np0005466030 nova_compute[230518]: 2025-10-02 12:38:39.558 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct  2 08:38:39 np0005466030 podman[269812]: 2025-10-02 12:38:39.846038647 +0000 UTC m=+0.078118979 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:38:39 np0005466030 podman[269813]: 2025-10-02 12:38:39.853181782 +0000 UTC m=+0.088052681 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2)
Oct  2 08:38:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:41.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:41.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:42 np0005466030 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000067.scope: Deactivated successfully.
Oct  2 08:38:42 np0005466030 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000067.scope: Consumed 14.175s CPU time.
Oct  2 08:38:42 np0005466030 systemd-machined[188247]: Machine qemu-50-instance-00000067 terminated.
Oct  2 08:38:42 np0005466030 nova_compute[230518]: 2025-10-02 12:38:42.572 2 INFO nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance shutdown successfully after 13 seconds.
Oct  2 08:38:42 np0005466030 nova_compute[230518]: 2025-10-02 12:38:42.579 2 INFO nova.virt.libvirt.driver [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance destroyed successfully.
Oct  2 08:38:42 np0005466030 nova_compute[230518]: 2025-10-02 12:38:42.584 2 INFO nova.virt.libvirt.driver [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance destroyed successfully.
Oct  2 08:38:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:43.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.080 2 INFO nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Deleting instance files /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035_del
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.081 2 INFO nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Deletion of /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035_del complete
Oct  2 08:38:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:43.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:38:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:38:43.278 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:38:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:38:43.279 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.291 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.291 2 INFO nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Creating image(s)
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.317 2 DEBUG nova.storage.rbd_utils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.342 2 DEBUG nova.storage.rbd_utils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.366 2 DEBUG nova.storage.rbd_utils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.369 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.436 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.437 2 DEBUG oslo_concurrency.lockutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.438 2 DEBUG oslo_concurrency.lockutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.438 2 DEBUG oslo_concurrency.lockutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.466 2 DEBUG nova.storage.rbd_utils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.470 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 7cd53cbf-91c2-4750-a4c2-551e50950035_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.782 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 7cd53cbf-91c2-4750-a4c2-551e50950035_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.849 2 DEBUG nova.storage.rbd_utils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] resizing rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.988 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.989 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Ensure instance console log exists: /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.990 2 DEBUG oslo_concurrency.lockutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.990 2 DEBUG oslo_concurrency.lockutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.991 2 DEBUG oslo_concurrency.lockutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.992 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:38:43 np0005466030 nova_compute[230518]: 2025-10-02 12:38:43.997 2 WARNING nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.005 2 DEBUG nova.virt.libvirt.host [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.006 2 DEBUG nova.virt.libvirt.host [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.013 2 DEBUG nova.virt.libvirt.host [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.014 2 DEBUG nova.virt.libvirt.host [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.015 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.015 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.016 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.016 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.016 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.017 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.018 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.019 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.019 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.019 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.020 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.020 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.020 2 DEBUG nova.objects.instance [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.045 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:38:44 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3017170306' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.505 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.539 2 DEBUG nova.storage.rbd_utils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.546 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:38:44 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1256335225' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.980 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:44 np0005466030 nova_compute[230518]: 2025-10-02 12:38:44.984 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:38:44 np0005466030 nova_compute[230518]:  <uuid>7cd53cbf-91c2-4750-a4c2-551e50950035</uuid>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:  <name>instance-00000067</name>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerShowV254Test-server-609721834</nova:name>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:38:43</nova:creationTime>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:38:44 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:        <nova:user uuid="0b4b918d10704ca5852d80098d253220">tempest-ServerShowV254Test-555313685-project-member</nova:user>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:        <nova:project uuid="99ed62753466455f8b5795e12d35034e">tempest-ServerShowV254Test-555313685</nova:project>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="52ef509e-0e22-464e-93c9-3ddcf574cd64"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <nova:ports/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <entry name="serial">7cd53cbf-91c2-4750-a4c2-551e50950035</entry>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <entry name="uuid">7cd53cbf-91c2-4750-a4c2-551e50950035</entry>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/7cd53cbf-91c2-4750-a4c2-551e50950035_disk">
Oct  2 08:38:44 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:38:44 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config">
Oct  2 08:38:44 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:38:44 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/console.log" append="off"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:38:44 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:38:44 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:38:44 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:38:44 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:38:45 np0005466030 nova_compute[230518]: 2025-10-02 12:38:45.067 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:38:45 np0005466030 nova_compute[230518]: 2025-10-02 12:38:45.068 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:38:45 np0005466030 nova_compute[230518]: 2025-10-02 12:38:45.069 2 INFO nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Using config drive#033[00m
Oct  2 08:38:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:38:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:45.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:38:45 np0005466030 nova_compute[230518]: 2025-10-02 12:38:45.120 2 DEBUG nova.storage.rbd_utils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:45 np0005466030 nova_compute[230518]: 2025-10-02 12:38:45.145 2 DEBUG nova.objects.instance [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:45 np0005466030 nova_compute[230518]: 2025-10-02 12:38:45.376 2 INFO nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Creating config drive at /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config#033[00m
Oct  2 08:38:45 np0005466030 nova_compute[230518]: 2025-10-02 12:38:45.381 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2n5ba1y7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:45 np0005466030 nova_compute[230518]: 2025-10-02 12:38:45.514 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2n5ba1y7" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:45 np0005466030 nova_compute[230518]: 2025-10-02 12:38:45.559 2 DEBUG nova.storage.rbd_utils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:38:45 np0005466030 nova_compute[230518]: 2025-10-02 12:38:45.563 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:45 np0005466030 nova_compute[230518]: 2025-10-02 12:38:45.949 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:45 np0005466030 nova_compute[230518]: 2025-10-02 12:38:45.950 2 INFO nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Deleting local config drive /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config because it was imported into RBD.#033[00m
Oct  2 08:38:46 np0005466030 systemd-machined[188247]: New machine qemu-51-instance-00000067.
Oct  2 08:38:46 np0005466030 systemd[1]: Started Virtual Machine qemu-51-instance-00000067.
Oct  2 08:38:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:47.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:38:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:47.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.187 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 7cd53cbf-91c2-4750-a4c2-551e50950035 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.188 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408727.1864934, 7cd53cbf-91c2-4750-a4c2-551e50950035 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.189 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.192 2 DEBUG nova.compute.manager [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.192 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.196 2 INFO nova.virt.libvirt.driver [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance spawned successfully.#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.196 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.211 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.214 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.257 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.258 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408727.1870239, 7cd53cbf-91c2-4750-a4c2-551e50950035 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.258 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] VM Started (Lifecycle Event)#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.263 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.263 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.264 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.264 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.265 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.265 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.293 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.296 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.332 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.350 2 DEBUG nova.compute.manager [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.412 2 DEBUG oslo_concurrency.lockutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.412 2 DEBUG oslo_concurrency.lockutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.413 2 DEBUG nova.objects.instance [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:38:47 np0005466030 nova_compute[230518]: 2025-10-02 12:38:47.482 2 DEBUG oslo_concurrency.lockutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:48 np0005466030 nova_compute[230518]: 2025-10-02 12:38:48.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:38:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:49.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:38:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:49.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:49 np0005466030 nova_compute[230518]: 2025-10-02 12:38:49.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:49 np0005466030 nova_compute[230518]: 2025-10-02 12:38:49.401 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "7cd53cbf-91c2-4750-a4c2-551e50950035" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:49 np0005466030 nova_compute[230518]: 2025-10-02 12:38:49.401 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "7cd53cbf-91c2-4750-a4c2-551e50950035" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:49 np0005466030 nova_compute[230518]: 2025-10-02 12:38:49.402 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "7cd53cbf-91c2-4750-a4c2-551e50950035-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:49 np0005466030 nova_compute[230518]: 2025-10-02 12:38:49.402 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "7cd53cbf-91c2-4750-a4c2-551e50950035-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:49 np0005466030 nova_compute[230518]: 2025-10-02 12:38:49.402 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "7cd53cbf-91c2-4750-a4c2-551e50950035-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:49 np0005466030 nova_compute[230518]: 2025-10-02 12:38:49.406 2 INFO nova.compute.manager [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Terminating instance#033[00m
Oct  2 08:38:49 np0005466030 nova_compute[230518]: 2025-10-02 12:38:49.407 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "refresh_cache-7cd53cbf-91c2-4750-a4c2-551e50950035" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:38:49 np0005466030 nova_compute[230518]: 2025-10-02 12:38:49.407 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquired lock "refresh_cache-7cd53cbf-91c2-4750-a4c2-551e50950035" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:38:49 np0005466030 nova_compute[230518]: 2025-10-02 12:38:49.407 2 DEBUG nova.network.neutron [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:38:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:49 np0005466030 nova_compute[230518]: 2025-10-02 12:38:49.693 2 DEBUG nova.network.neutron [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:38:50 np0005466030 nova_compute[230518]: 2025-10-02 12:38:50.143 2 DEBUG nova.network.neutron [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:38:50 np0005466030 nova_compute[230518]: 2025-10-02 12:38:50.159 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Releasing lock "refresh_cache-7cd53cbf-91c2-4750-a4c2-551e50950035" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:38:50 np0005466030 nova_compute[230518]: 2025-10-02 12:38:50.161 2 DEBUG nova.compute.manager [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:38:50 np0005466030 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000067.scope: Deactivated successfully.
Oct  2 08:38:50 np0005466030 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000067.scope: Consumed 3.962s CPU time.
Oct  2 08:38:50 np0005466030 systemd-machined[188247]: Machine qemu-51-instance-00000067 terminated.
Oct  2 08:38:50 np0005466030 nova_compute[230518]: 2025-10-02 12:38:50.792 2 INFO nova.virt.libvirt.driver [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance destroyed successfully.#033[00m
Oct  2 08:38:50 np0005466030 nova_compute[230518]: 2025-10-02 12:38:50.794 2 DEBUG nova.objects.instance [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'resources' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:38:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:51.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:51.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:53 np0005466030 nova_compute[230518]: 2025-10-02 12:38:53.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:53.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:53 np0005466030 nova_compute[230518]: 2025-10-02 12:38:53.092 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:53 np0005466030 nova_compute[230518]: 2025-10-02 12:38:53.092 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:53 np0005466030 nova_compute[230518]: 2025-10-02 12:38:53.092 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:53 np0005466030 nova_compute[230518]: 2025-10-02 12:38:53.093 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:38:53 np0005466030 nova_compute[230518]: 2025-10-02 12:38:53.093 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:53.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:53 np0005466030 nova_compute[230518]: 2025-10-02 12:38:53.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:38:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:38:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 08:38:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 08:38:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 08:38:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:38:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:38:53.282 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:38:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:38:53 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3361022258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:38:53 np0005466030 nova_compute[230518]: 2025-10-02 12:38:53.875 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.782s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:53 np0005466030 nova_compute[230518]: 2025-10-02 12:38:53.971 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:38:53 np0005466030 nova_compute[230518]: 2025-10-02 12:38:53.972 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:38:54 np0005466030 nova_compute[230518]: 2025-10-02 12:38:54.172 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:38:54 np0005466030 nova_compute[230518]: 2025-10-02 12:38:54.173 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4443MB free_disk=20.876331329345703GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:38:54 np0005466030 nova_compute[230518]: 2025-10-02 12:38:54.173 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:38:54 np0005466030 nova_compute[230518]: 2025-10-02 12:38:54.174 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:38:54 np0005466030 nova_compute[230518]: 2025-10-02 12:38:54.297 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7cd53cbf-91c2-4750-a4c2-551e50950035 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:38:54 np0005466030 nova_compute[230518]: 2025-10-02 12:38:54.298 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:38:54 np0005466030 nova_compute[230518]: 2025-10-02 12:38:54.298 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:38:54 np0005466030 nova_compute[230518]: 2025-10-02 12:38:54.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:54 np0005466030 nova_compute[230518]: 2025-10-02 12:38:54.407 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:38:54 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:38:54 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:38:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:55.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:55.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:38:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:38:56 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1239224870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:38:56 np0005466030 nova_compute[230518]: 2025-10-02 12:38:56.508 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:38:56 np0005466030 nova_compute[230518]: 2025-10-02 12:38:56.517 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:38:56 np0005466030 nova_compute[230518]: 2025-10-02 12:38:56.539 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:38:56 np0005466030 nova_compute[230518]: 2025-10-02 12:38:56.597 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:38:56 np0005466030 nova_compute[230518]: 2025-10-02 12:38:56.598 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:38:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:57.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:57.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:57 np0005466030 nova_compute[230518]: 2025-10-02 12:38:57.592 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:57 np0005466030 nova_compute[230518]: 2025-10-02 12:38:57.593 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:58 np0005466030 nova_compute[230518]: 2025-10-02 12:38:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:38:58 np0005466030 nova_compute[230518]: 2025-10-02 12:38:58.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:38:58 np0005466030 podman[270417]: 2025-10-02 12:38:58.867874425 +0000 UTC m=+0.098720687 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct  2 08:38:58 np0005466030 podman[270416]: 2025-10-02 12:38:58.92235551 +0000 UTC m=+0.152618563 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Oct  2 08:38:59 np0005466030 nova_compute[230518]: 2025-10-02 12:38:59.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:38:59 np0005466030 nova_compute[230518]: 2025-10-02 12:38:59.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:38:59 np0005466030 nova_compute[230518]: 2025-10-02 12:38:59.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:38:59 np0005466030 nova_compute[230518]: 2025-10-02 12:38:59.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 08:38:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:38:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:59.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:38:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:38:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:38:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:59.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:38:59 np0005466030 nova_compute[230518]: 2025-10-02 12:38:59.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:01.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:39:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:01.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:39:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:03 np0005466030 nova_compute[230518]: 2025-10-02 12:39:03.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:39:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:39:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:03.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:39:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:03.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:03 np0005466030 nova_compute[230518]: 2025-10-02 12:39:03.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:04 np0005466030 nova_compute[230518]: 2025-10-02 12:39:04.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:39:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:39:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:05.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:39:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:05.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:39:05 np0005466030 nova_compute[230518]: 2025-10-02 12:39:05.790 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408730.7884607, 7cd53cbf-91c2-4750-a4c2-551e50950035 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:39:05 np0005466030 nova_compute[230518]: 2025-10-02 12:39:05.790 2 INFO nova.compute.manager [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] VM Stopped (Lifecycle Event)
Oct  2 08:39:05 np0005466030 nova_compute[230518]: 2025-10-02 12:39:05.826 2 DEBUG nova.compute.manager [None req-364567b6-30f3-47d1-9435-0f78c2132d79 - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:39:05 np0005466030 nova_compute[230518]: 2025-10-02 12:39:05.830 2 DEBUG nova.compute.manager [None req-364567b6-30f3-47d1-9435-0f78c2132d79 - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:39:05 np0005466030 nova_compute[230518]: 2025-10-02 12:39:05.850 2 INFO nova.compute.manager [None req-364567b6-30f3-47d1-9435-0f78c2132d79 - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] During sync_power_state the instance has a pending task (deleting). Skip.
Oct  2 08:39:06 np0005466030 nova_compute[230518]: 2025-10-02 12:39:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:39:06 np0005466030 nova_compute[230518]: 2025-10-02 12:39:06.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 08:39:06 np0005466030 nova_compute[230518]: 2025-10-02 12:39:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 08:39:06 np0005466030 nova_compute[230518]: 2025-10-02 12:39:06.152 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct  2 08:39:06 np0005466030 nova_compute[230518]: 2025-10-02 12:39:06.152 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  2 08:39:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:39:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:07.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:39:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:07.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:08 np0005466030 nova_compute[230518]: 2025-10-02 12:39:08.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:08 np0005466030 nova_compute[230518]: 2025-10-02 12:39:08.319 2 INFO nova.virt.libvirt.driver [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Deleting instance files /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035_del
Oct  2 08:39:08 np0005466030 nova_compute[230518]: 2025-10-02 12:39:08.320 2 INFO nova.virt.libvirt.driver [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Deletion of /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035_del complete
Oct  2 08:39:08 np0005466030 nova_compute[230518]: 2025-10-02 12:39:08.386 2 INFO nova.compute.manager [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Took 18.22 seconds to destroy the instance on the hypervisor.
Oct  2 08:39:08 np0005466030 nova_compute[230518]: 2025-10-02 12:39:08.386 2 DEBUG oslo.service.loopingcall [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:39:08 np0005466030 nova_compute[230518]: 2025-10-02 12:39:08.387 2 DEBUG nova.compute.manager [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:39:08 np0005466030 nova_compute[230518]: 2025-10-02 12:39:08.387 2 DEBUG nova.network.neutron [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:39:08 np0005466030 nova_compute[230518]: 2025-10-02 12:39:08.661 2 DEBUG nova.network.neutron [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:39:08 np0005466030 nova_compute[230518]: 2025-10-02 12:39:08.687 2 DEBUG nova.network.neutron [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:39:08 np0005466030 nova_compute[230518]: 2025-10-02 12:39:08.718 2 INFO nova.compute.manager [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Took 0.33 seconds to deallocate network for instance.
Oct  2 08:39:08 np0005466030 nova_compute[230518]: 2025-10-02 12:39:08.776 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:39:08 np0005466030 nova_compute[230518]: 2025-10-02 12:39:08.777 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:39:08 np0005466030 nova_compute[230518]: 2025-10-02 12:39:08.848 2 DEBUG oslo_concurrency.processutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:39:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:09.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:09.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:39:09 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3862223501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:39:09 np0005466030 nova_compute[230518]: 2025-10-02 12:39:09.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:09 np0005466030 nova_compute[230518]: 2025-10-02 12:39:09.399 2 DEBUG oslo_concurrency.processutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:39:09 np0005466030 nova_compute[230518]: 2025-10-02 12:39:09.406 2 DEBUG nova.compute.provider_tree [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:39:09 np0005466030 nova_compute[230518]: 2025-10-02 12:39:09.431 2 DEBUG nova.scheduler.client.report [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:39:09 np0005466030 nova_compute[230518]: 2025-10-02 12:39:09.479 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:39:09 np0005466030 nova_compute[230518]: 2025-10-02 12:39:09.572 2 INFO nova.scheduler.client.report [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Deleted allocations for instance 7cd53cbf-91c2-4750-a4c2-551e50950035
Oct  2 08:39:09 np0005466030 nova_compute[230518]: 2025-10-02 12:39:09.706 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "7cd53cbf-91c2-4750-a4c2-551e50950035" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 20.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:39:10 np0005466030 podman[270529]: 2025-10-02 12:39:10.819028602 +0000 UTC m=+0.061529177 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:39:10 np0005466030 podman[270530]: 2025-10-02 12:39:10.84060381 +0000 UTC m=+0.075576609 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:39:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:39:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:11.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:39:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:11.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:39:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:13.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:39:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:13.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:13 np0005466030 nova_compute[230518]: 2025-10-02 12:39:13.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:14 np0005466030 nova_compute[230518]: 2025-10-02 12:39:14.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:15 np0005466030 nova_compute[230518]: 2025-10-02 12:39:15.066 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "7621a774-e0bc-4f4f-b900-c3608dd6835a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:39:15 np0005466030 nova_compute[230518]: 2025-10-02 12:39:15.066 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:39:15 np0005466030 nova_compute[230518]: 2025-10-02 12:39:15.098 2 DEBUG nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:39:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:15.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:15.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:15 np0005466030 nova_compute[230518]: 2025-10-02 12:39:15.204 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:39:15 np0005466030 nova_compute[230518]: 2025-10-02 12:39:15.205 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:39:15 np0005466030 nova_compute[230518]: 2025-10-02 12:39:15.214 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:39:15 np0005466030 nova_compute[230518]: 2025-10-02 12:39:15.214 2 INFO nova.compute.claims [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:39:15 np0005466030 nova_compute[230518]: 2025-10-02 12:39:15.441 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:39:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:39:15 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1379615158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:39:15 np0005466030 nova_compute[230518]: 2025-10-02 12:39:15.930 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:39:15 np0005466030 nova_compute[230518]: 2025-10-02 12:39:15.939 2 DEBUG nova.compute.provider_tree [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.027 2 DEBUG nova.scheduler.client.report [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.129 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.130 2 DEBUG nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:39:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.245 2 DEBUG nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.246 2 DEBUG nova.network.neutron [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.282 2 INFO nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.306 2 DEBUG nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.542 2 DEBUG nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.544 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.545 2 INFO nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Creating image(s)#033[00m
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.595 2 DEBUG nova.storage.rbd_utils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.629 2 DEBUG nova.storage.rbd_utils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.657 2 DEBUG nova.storage.rbd_utils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.661 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.755 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.758 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.759 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.759 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.798 2 DEBUG nova.storage.rbd_utils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.805 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:16 np0005466030 nova_compute[230518]: 2025-10-02 12:39:16.933 2 DEBUG nova.policy [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '17a0940c9daf48ac8cfa6c3e56d0e39c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '88141e38aa2347299e7ab249431ef68c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:39:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:17.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:17.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:18 np0005466030 nova_compute[230518]: 2025-10-02 12:39:18.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:18 np0005466030 nova_compute[230518]: 2025-10-02 12:39:18.233 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:18 np0005466030 nova_compute[230518]: 2025-10-02 12:39:18.320 2 DEBUG nova.storage.rbd_utils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] resizing rbd image 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:39:18 np0005466030 nova_compute[230518]: 2025-10-02 12:39:18.639 2 DEBUG nova.network.neutron [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Successfully created port: 8d9cc17a-7804-4743-925a-496d9fe78c73 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:39:18 np0005466030 nova_compute[230518]: 2025-10-02 12:39:18.776 2 DEBUG nova.objects.instance [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'migration_context' on Instance uuid 7621a774-e0bc-4f4f-b900-c3608dd6835a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:39:18 np0005466030 nova_compute[230518]: 2025-10-02 12:39:18.829 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:39:18 np0005466030 nova_compute[230518]: 2025-10-02 12:39:18.829 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Ensure instance console log exists: /var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:39:18 np0005466030 nova_compute[230518]: 2025-10-02 12:39:18.830 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:18 np0005466030 nova_compute[230518]: 2025-10-02 12:39:18.830 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:18 np0005466030 nova_compute[230518]: 2025-10-02 12:39:18.830 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:39:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:19.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:39:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:39:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:19.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:39:19 np0005466030 nova_compute[230518]: 2025-10-02 12:39:19.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e260 e260: 3 total, 3 up, 3 in
Oct  2 08:39:20 np0005466030 nova_compute[230518]: 2025-10-02 12:39:20.200 2 DEBUG nova.network.neutron [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Successfully updated port: 8d9cc17a-7804-4743-925a-496d9fe78c73 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:39:20 np0005466030 nova_compute[230518]: 2025-10-02 12:39:20.257 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:39:20 np0005466030 nova_compute[230518]: 2025-10-02 12:39:20.257 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquired lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:39:20 np0005466030 nova_compute[230518]: 2025-10-02 12:39:20.257 2 DEBUG nova.network.neutron [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:39:20 np0005466030 nova_compute[230518]: 2025-10-02 12:39:20.368 2 DEBUG nova.compute.manager [req-c67976c3-c640-4213-bae8-20a87d248f44 req-82cb7463-2cda-4151-b5d1-8689ee3ab178 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received event network-changed-8d9cc17a-7804-4743-925a-496d9fe78c73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:39:20 np0005466030 nova_compute[230518]: 2025-10-02 12:39:20.368 2 DEBUG nova.compute.manager [req-c67976c3-c640-4213-bae8-20a87d248f44 req-82cb7463-2cda-4151-b5d1-8689ee3ab178 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Refreshing instance network info cache due to event network-changed-8d9cc17a-7804-4743-925a-496d9fe78c73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:39:20 np0005466030 nova_compute[230518]: 2025-10-02 12:39:20.369 2 DEBUG oslo_concurrency.lockutils [req-c67976c3-c640-4213-bae8-20a87d248f44 req-82cb7463-2cda-4151-b5d1-8689ee3ab178 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:39:20 np0005466030 nova_compute[230518]: 2025-10-02 12:39:20.827 2 DEBUG nova.network.neutron [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:39:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:21.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:21.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.638 2 DEBUG nova.network.neutron [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updating instance_info_cache with network_info: [{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.756 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Releasing lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.756 2 DEBUG nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Instance network_info: |[{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.757 2 DEBUG oslo_concurrency.lockutils [req-c67976c3-c640-4213-bae8-20a87d248f44 req-82cb7463-2cda-4151-b5d1-8689ee3ab178 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.757 2 DEBUG nova.network.neutron [req-c67976c3-c640-4213-bae8-20a87d248f44 req-82cb7463-2cda-4151-b5d1-8689ee3ab178 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Refreshing network info cache for port 8d9cc17a-7804-4743-925a-496d9fe78c73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.761 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Start _get_guest_xml network_info=[{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.766 2 WARNING nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.771 2 DEBUG nova.virt.libvirt.host [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.772 2 DEBUG nova.virt.libvirt.host [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.776 2 DEBUG nova.virt.libvirt.host [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.777 2 DEBUG nova.virt.libvirt.host [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.778 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.778 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.778 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.779 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.779 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.779 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.779 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.780 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.780 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.780 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.781 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.781 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:39:22 np0005466030 nova_compute[230518]: 2025-10-02 12:39:22.785 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:39:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:23.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:23.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:39:23 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/280546518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:39:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e261 e261: 3 total, 3 up, 3 in
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.314 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.449 2 DEBUG nova.storage.rbd_utils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.454 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:39:23 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1127676684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.876 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.877 2 DEBUG nova.virt.libvirt.vif [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:39:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1525238782',display_name='tempest-ServerActionsTestOtherA-server-1525238782',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1525238782',id=105,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJfI3E6popMNkSBH55JIIn+lxst+AgI5WbB+1D21g23xZC45mHZNKzJ1YzOQWfrILexv9zpuq5SLJQ8J6YEjTv4RhaLBgROGziYLwwgHom1wen0CDri217As6wNRpnqZsg==',key_name='tempest-keypair-1292637923',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-uk3eghdh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:39:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=7621a774-e0bc-4f4f-b900-c3608dd6835a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.878 2 DEBUG nova.network.os_vif_util [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.879 2 DEBUG nova.network.os_vif_util [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:d9:d3,bridge_name='br-int',has_traffic_filtering=True,id=8d9cc17a-7804-4743-925a-496d9fe78c73,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d9cc17a-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.881 2 DEBUG nova.objects.instance [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'pci_devices' on Instance uuid 7621a774-e0bc-4f4f-b900-c3608dd6835a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.909 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:39:23 np0005466030 nova_compute[230518]:  <uuid>7621a774-e0bc-4f4f-b900-c3608dd6835a</uuid>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:  <name>instance-00000069</name>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerActionsTestOtherA-server-1525238782</nova:name>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:39:22</nova:creationTime>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:39:23 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:        <nova:user uuid="17a0940c9daf48ac8cfa6c3e56d0e39c">tempest-ServerActionsTestOtherA-1849713132-project-member</nova:user>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:        <nova:project uuid="88141e38aa2347299e7ab249431ef68c">tempest-ServerActionsTestOtherA-1849713132</nova:project>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:        <nova:port uuid="8d9cc17a-7804-4743-925a-496d9fe78c73">
Oct  2 08:39:23 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <entry name="serial">7621a774-e0bc-4f4f-b900-c3608dd6835a</entry>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <entry name="uuid">7621a774-e0bc-4f4f-b900-c3608dd6835a</entry>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/7621a774-e0bc-4f4f-b900-c3608dd6835a_disk">
Oct  2 08:39:23 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:39:23 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/7621a774-e0bc-4f4f-b900-c3608dd6835a_disk.config">
Oct  2 08:39:23 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:39:23 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:c4:d9:d3"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <target dev="tap8d9cc17a-78"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a/console.log" append="off"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:39:23 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:39:23 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:39:23 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:39:23 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.910 2 DEBUG nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Preparing to wait for external event network-vif-plugged-8d9cc17a-7804-4743-925a-496d9fe78c73 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.910 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.910 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.911 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.911 2 DEBUG nova.virt.libvirt.vif [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:39:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1525238782',display_name='tempest-ServerActionsTestOtherA-server-1525238782',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1525238782',id=105,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJfI3E6popMNkSBH55JIIn+lxst+AgI5WbB+1D21g23xZC45mHZNKzJ1YzOQWfrILexv9zpuq5SLJQ8J6YEjTv4RhaLBgROGziYLwwgHom1wen0CDri217As6wNRpnqZsg==',key_name='tempest-keypair-1292637923',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-uk3eghdh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:39:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=7621a774-e0bc-4f4f-b900-c3608dd6835a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.911 2 DEBUG nova.network.os_vif_util [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.912 2 DEBUG nova.network.os_vif_util [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:d9:d3,bridge_name='br-int',has_traffic_filtering=True,id=8d9cc17a-7804-4743-925a-496d9fe78c73,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d9cc17a-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.912 2 DEBUG os_vif [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:d9:d3,bridge_name='br-int',has_traffic_filtering=True,id=8d9cc17a-7804-4743-925a-496d9fe78c73,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d9cc17a-78') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.913 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.914 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.917 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d9cc17a-78, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d9cc17a-78, col_values=(('external_ids', {'iface-id': '8d9cc17a-7804-4743-925a-496d9fe78c73', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:d9:d3', 'vm-uuid': '7621a774-e0bc-4f4f-b900-c3608dd6835a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:23 np0005466030 NetworkManager[44960]: <info>  [1759408763.9202] manager: (tap8d9cc17a-78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:23 np0005466030 nova_compute[230518]: 2025-10-02 12:39:23.926 2 INFO os_vif [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:d9:d3,bridge_name='br-int',has_traffic_filtering=True,id=8d9cc17a-7804-4743-925a-496d9fe78c73,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d9cc17a-78')#033[00m
Oct  2 08:39:24 np0005466030 nova_compute[230518]: 2025-10-02 12:39:24.103 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:39:24 np0005466030 nova_compute[230518]: 2025-10-02 12:39:24.104 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:39:24 np0005466030 nova_compute[230518]: 2025-10-02 12:39:24.104 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No VIF found with MAC fa:16:3e:c4:d9:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:39:24 np0005466030 nova_compute[230518]: 2025-10-02 12:39:24.105 2 INFO nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Using config drive#033[00m
Oct  2 08:39:24 np0005466030 nova_compute[230518]: 2025-10-02 12:39:24.141 2 DEBUG nova.storage.rbd_utils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e262 e262: 3 total, 3 up, 3 in
Oct  2 08:39:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:25.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:25.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:25 np0005466030 nova_compute[230518]: 2025-10-02 12:39:25.449 2 INFO nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Creating config drive at /var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a/disk.config#033[00m
Oct  2 08:39:25 np0005466030 nova_compute[230518]: 2025-10-02 12:39:25.457 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpno3jh7up execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:25 np0005466030 nova_compute[230518]: 2025-10-02 12:39:25.592 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpno3jh7up" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:25 np0005466030 nova_compute[230518]: 2025-10-02 12:39:25.627 2 DEBUG nova.storage.rbd_utils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:25 np0005466030 nova_compute[230518]: 2025-10-02 12:39:25.631 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a/disk.config 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:25.935 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:25.936 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:25.936 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:26 np0005466030 nova_compute[230518]: 2025-10-02 12:39:26.132 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a/disk.config 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:26 np0005466030 nova_compute[230518]: 2025-10-02 12:39:26.133 2 INFO nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Deleting local config drive /var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a/disk.config because it was imported into RBD.#033[00m
Oct  2 08:39:26 np0005466030 kernel: tap8d9cc17a-78: entered promiscuous mode
Oct  2 08:39:26 np0005466030 ovn_controller[129257]: 2025-10-02T12:39:26Z|00426|binding|INFO|Claiming lport 8d9cc17a-7804-4743-925a-496d9fe78c73 for this chassis.
Oct  2 08:39:26 np0005466030 ovn_controller[129257]: 2025-10-02T12:39:26Z|00427|binding|INFO|8d9cc17a-7804-4743-925a-496d9fe78c73: Claiming fa:16:3e:c4:d9:d3 10.100.0.14
Oct  2 08:39:26 np0005466030 nova_compute[230518]: 2025-10-02 12:39:26.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:26 np0005466030 NetworkManager[44960]: <info>  [1759408766.1977] manager: (tap8d9cc17a-78): new Tun device (/org/freedesktop/NetworkManager/Devices/202)
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.202 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:d9:d3 10.100.0.14'], port_security=['fa:16:3e:c4:d9:d3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7621a774-e0bc-4f4f-b900-c3608dd6835a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'da6daf73-7b18-4ff6-8a16-e2a94d642e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8d9cc17a-7804-4743-925a-496d9fe78c73) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.203 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8d9cc17a-7804-4743-925a-496d9fe78c73 in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b bound to our chassis#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.205 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3643647-7cd9-4c43-8aaa-9b0f3160274b#033[00m
Oct  2 08:39:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:26 np0005466030 systemd-udevd[270891]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.219 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6af7ca0d-1a5b-4a05-a5a1-0579858ec11b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.220 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3643647-71 in ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.222 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3643647-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.222 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[660b7088-58ba-40b7-814a-761ff7af4beb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.223 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[58d90aa1-e80b-4b56-8af5-2e4af3ffe7dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:26 np0005466030 NetworkManager[44960]: <info>  [1759408766.2310] device (tap8d9cc17a-78): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:39:26 np0005466030 NetworkManager[44960]: <info>  [1759408766.2360] device (tap8d9cc17a-78): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.234 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[48128c41-f747-4d2d-885e-f0e16d0b6beb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:26 np0005466030 systemd-machined[188247]: New machine qemu-52-instance-00000069.
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.257 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8075c969-0153-4189-a0ca-b99b227d2700]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:26 np0005466030 systemd[1]: Started Virtual Machine qemu-52-instance-00000069.
Oct  2 08:39:26 np0005466030 nova_compute[230518]: 2025-10-02 12:39:26.268 2 DEBUG nova.network.neutron [req-c67976c3-c640-4213-bae8-20a87d248f44 req-82cb7463-2cda-4151-b5d1-8689ee3ab178 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updated VIF entry in instance network info cache for port 8d9cc17a-7804-4743-925a-496d9fe78c73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:39:26 np0005466030 nova_compute[230518]: 2025-10-02 12:39:26.269 2 DEBUG nova.network.neutron [req-c67976c3-c640-4213-bae8-20a87d248f44 req-82cb7463-2cda-4151-b5d1-8689ee3ab178 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updating instance_info_cache with network_info: [{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:39:26 np0005466030 nova_compute[230518]: 2025-10-02 12:39:26.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:26 np0005466030 ovn_controller[129257]: 2025-10-02T12:39:26Z|00428|binding|INFO|Setting lport 8d9cc17a-7804-4743-925a-496d9fe78c73 ovn-installed in OVS
Oct  2 08:39:26 np0005466030 ovn_controller[129257]: 2025-10-02T12:39:26Z|00429|binding|INFO|Setting lport 8d9cc17a-7804-4743-925a-496d9fe78c73 up in Southbound
Oct  2 08:39:26 np0005466030 nova_compute[230518]: 2025-10-02 12:39:26.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.293 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[06854f83-f69a-47e4-b8f3-ef4c3222676b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:26 np0005466030 systemd-udevd[270897]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:39:26 np0005466030 NetworkManager[44960]: <info>  [1759408766.2999] manager: (tapf3643647-70): new Veth device (/org/freedesktop/NetworkManager/Devices/203)
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.299 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[688aeeb2-581d-4d85-9c09-e09ea1010f9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:26 np0005466030 nova_compute[230518]: 2025-10-02 12:39:26.308 2 DEBUG oslo_concurrency.lockutils [req-c67976c3-c640-4213-bae8-20a87d248f44 req-82cb7463-2cda-4151-b5d1-8689ee3ab178 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.337 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[937f6336-a6ed-442e-838a-dfe3562248bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.340 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[aa37992d-2c5d-4e46-a836-f386a5f62b98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:26 np0005466030 NetworkManager[44960]: <info>  [1759408766.3640] device (tapf3643647-70): carrier: link connected
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.371 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8b00229b-4d46-4b9d-a92d-69db3a9738a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.388 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[526fc128-71c4-4974-944e-7c5d660eb725]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662992, 'reachable_time': 17721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270926, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.403 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[925e4fd6-91b7-4733-9cc0-333ce2d34dd0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:edfc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662992, 'tstamp': 662992}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270927, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.420 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1c3420af-cd21-4c82-aa3c-ac8d91f2124c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662992, 'reachable_time': 17721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270928, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.448 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5c283142-4144-4b9e-9134-ebfca1e17421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.496 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c89657fe-58e1-4e82-8ea1-a91a9eac4627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.497 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.498 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.498 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3643647-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:26 np0005466030 kernel: tapf3643647-70: entered promiscuous mode
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.501 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3643647-70, col_values=(('external_ids', {'iface-id': '7b6dc1a1-1a58-45bd-84bb-97328397bf1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:39:26 np0005466030 NetworkManager[44960]: <info>  [1759408766.5020] manager: (tapf3643647-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Oct  2 08:39:26 np0005466030 ovn_controller[129257]: 2025-10-02T12:39:26Z|00430|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct  2 08:39:26 np0005466030 nova_compute[230518]: 2025-10-02 12:39:26.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:26 np0005466030 nova_compute[230518]: 2025-10-02 12:39:26.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.520 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.521 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[30340a16-1bd0-438c-b271-5213519252d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.522 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.523 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'env', 'PROCESS_TAG=haproxy-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3643647-7cd9-4c43-8aaa-9b0f3160274b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:39:26 np0005466030 podman[270960]: 2025-10-02 12:39:26.88059777 +0000 UTC m=+0.033629799 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:39:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:39:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:27.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:39:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:27.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:27 np0005466030 podman[270960]: 2025-10-02 12:39:27.218799121 +0000 UTC m=+0.371831150 container create 3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 08:39:27 np0005466030 systemd[1]: Started libpod-conmon-3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d.scope.
Oct  2 08:39:27 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:39:27 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ac23ed22d2d8032f0cbc3084da7e2a93ca5dafb47c834f98bbf243d6c598e7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:39:27 np0005466030 nova_compute[230518]: 2025-10-02 12:39:27.692 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408767.6891396, 7621a774-e0bc-4f4f-b900-c3608dd6835a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:39:27 np0005466030 nova_compute[230518]: 2025-10-02 12:39:27.696 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] VM Started (Lifecycle Event)#033[00m
Oct  2 08:39:27 np0005466030 nova_compute[230518]: 2025-10-02 12:39:27.731 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:39:27 np0005466030 nova_compute[230518]: 2025-10-02 12:39:27.735 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408767.6907604, 7621a774-e0bc-4f4f-b900-c3608dd6835a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:39:27 np0005466030 nova_compute[230518]: 2025-10-02 12:39:27.736 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:39:27 np0005466030 nova_compute[230518]: 2025-10-02 12:39:27.762 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:39:27 np0005466030 nova_compute[230518]: 2025-10-02 12:39:27.765 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:39:27 np0005466030 podman[270960]: 2025-10-02 12:39:27.786313016 +0000 UTC m=+0.939345075 container init 3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 08:39:27 np0005466030 podman[270960]: 2025-10-02 12:39:27.793843883 +0000 UTC m=+0.946875922 container start 3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 08:39:27 np0005466030 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[271015]: [NOTICE]   (271019) : New worker (271021) forked
Oct  2 08:39:27 np0005466030 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[271015]: [NOTICE]   (271019) : Loading success.
Oct  2 08:39:27 np0005466030 nova_compute[230518]: 2025-10-02 12:39:27.832 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.471 2 DEBUG nova.compute.manager [req-4f608fe9-c082-4aa4-97b3-a3e329a6fc6a req-23297757-81e0-4262-a109-6ddb01b87c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received event network-vif-plugged-8d9cc17a-7804-4743-925a-496d9fe78c73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.471 2 DEBUG oslo_concurrency.lockutils [req-4f608fe9-c082-4aa4-97b3-a3e329a6fc6a req-23297757-81e0-4262-a109-6ddb01b87c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.471 2 DEBUG oslo_concurrency.lockutils [req-4f608fe9-c082-4aa4-97b3-a3e329a6fc6a req-23297757-81e0-4262-a109-6ddb01b87c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.472 2 DEBUG oslo_concurrency.lockutils [req-4f608fe9-c082-4aa4-97b3-a3e329a6fc6a req-23297757-81e0-4262-a109-6ddb01b87c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.472 2 DEBUG nova.compute.manager [req-4f608fe9-c082-4aa4-97b3-a3e329a6fc6a req-23297757-81e0-4262-a109-6ddb01b87c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Processing event network-vif-plugged-8d9cc17a-7804-4743-925a-496d9fe78c73 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.473 2 DEBUG nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.477 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408768.4773347, 7621a774-e0bc-4f4f-b900-c3608dd6835a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.478 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.479 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.485 2 INFO nova.virt.libvirt.driver [-] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Instance spawned successfully.#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.486 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.519 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.527 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.533 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.534 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.535 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.537 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.537 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.538 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.573 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.619 2 INFO nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Took 12.08 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.619 2 DEBUG nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.704 2 INFO nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Took 13.54 seconds to build instance.#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.751 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:28 np0005466030 nova_compute[230518]: 2025-10-02 12:39:28.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:29.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:29.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:29 np0005466030 podman[271031]: 2025-10-02 12:39:29.815078659 +0000 UTC m=+0.056538330 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct  2 08:39:29 np0005466030 podman[271030]: 2025-10-02 12:39:29.838404293 +0000 UTC m=+0.085804201 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:39:30 np0005466030 nova_compute[230518]: 2025-10-02 12:39:30.644 2 DEBUG nova.compute.manager [req-290775d9-a864-44ec-9911-f766e3ed4608 req-398bf1ed-8364-400e-8dc9-d3f0f0078a09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received event network-vif-plugged-8d9cc17a-7804-4743-925a-496d9fe78c73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:39:30 np0005466030 nova_compute[230518]: 2025-10-02 12:39:30.645 2 DEBUG oslo_concurrency.lockutils [req-290775d9-a864-44ec-9911-f766e3ed4608 req-398bf1ed-8364-400e-8dc9-d3f0f0078a09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:30 np0005466030 nova_compute[230518]: 2025-10-02 12:39:30.645 2 DEBUG oslo_concurrency.lockutils [req-290775d9-a864-44ec-9911-f766e3ed4608 req-398bf1ed-8364-400e-8dc9-d3f0f0078a09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:30 np0005466030 nova_compute[230518]: 2025-10-02 12:39:30.645 2 DEBUG oslo_concurrency.lockutils [req-290775d9-a864-44ec-9911-f766e3ed4608 req-398bf1ed-8364-400e-8dc9-d3f0f0078a09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:30 np0005466030 nova_compute[230518]: 2025-10-02 12:39:30.646 2 DEBUG nova.compute.manager [req-290775d9-a864-44ec-9911-f766e3ed4608 req-398bf1ed-8364-400e-8dc9-d3f0f0078a09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] No waiting events found dispatching network-vif-plugged-8d9cc17a-7804-4743-925a-496d9fe78c73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:39:30 np0005466030 nova_compute[230518]: 2025-10-02 12:39:30.646 2 WARNING nova.compute.manager [req-290775d9-a864-44ec-9911-f766e3ed4608 req-398bf1ed-8364-400e-8dc9-d3f0f0078a09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received unexpected event network-vif-plugged-8d9cc17a-7804-4743-925a-496d9fe78c73 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:39:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:31.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:31.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 e263: 3 total, 3 up, 3 in
Oct  2 08:39:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:33.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:33 np0005466030 nova_compute[230518]: 2025-10-02 12:39:33.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:33.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:33 np0005466030 nova_compute[230518]: 2025-10-02 12:39:33.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:33 np0005466030 NetworkManager[44960]: <info>  [1759408773.5966] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Oct  2 08:39:33 np0005466030 NetworkManager[44960]: <info>  [1759408773.5985] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Oct  2 08:39:33 np0005466030 nova_compute[230518]: 2025-10-02 12:39:33.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:33 np0005466030 ovn_controller[129257]: 2025-10-02T12:39:33Z|00431|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct  2 08:39:33 np0005466030 nova_compute[230518]: 2025-10-02 12:39:33.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:33 np0005466030 nova_compute[230518]: 2025-10-02 12:39:33.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:34 np0005466030 nova_compute[230518]: 2025-10-02 12:39:34.255 2 DEBUG nova.compute.manager [req-733ecb88-70cf-498a-9d49-57ce6605b58c req-365d1a68-e275-4a9c-911b-2480920c3385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received event network-changed-8d9cc17a-7804-4743-925a-496d9fe78c73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:39:34 np0005466030 nova_compute[230518]: 2025-10-02 12:39:34.256 2 DEBUG nova.compute.manager [req-733ecb88-70cf-498a-9d49-57ce6605b58c req-365d1a68-e275-4a9c-911b-2480920c3385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Refreshing instance network info cache due to event network-changed-8d9cc17a-7804-4743-925a-496d9fe78c73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:39:34 np0005466030 nova_compute[230518]: 2025-10-02 12:39:34.256 2 DEBUG oslo_concurrency.lockutils [req-733ecb88-70cf-498a-9d49-57ce6605b58c req-365d1a68-e275-4a9c-911b-2480920c3385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:39:34 np0005466030 nova_compute[230518]: 2025-10-02 12:39:34.257 2 DEBUG oslo_concurrency.lockutils [req-733ecb88-70cf-498a-9d49-57ce6605b58c req-365d1a68-e275-4a9c-911b-2480920c3385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:39:34 np0005466030 nova_compute[230518]: 2025-10-02 12:39:34.257 2 DEBUG nova.network.neutron [req-733ecb88-70cf-498a-9d49-57ce6605b58c req-365d1a68-e275-4a9c-911b-2480920c3385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Refreshing network info cache for port 8d9cc17a-7804-4743-925a-496d9fe78c73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:39:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:35.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:35.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:36 np0005466030 nova_compute[230518]: 2025-10-02 12:39:36.037 2 DEBUG nova.network.neutron [req-733ecb88-70cf-498a-9d49-57ce6605b58c req-365d1a68-e275-4a9c-911b-2480920c3385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updated VIF entry in instance network info cache for port 8d9cc17a-7804-4743-925a-496d9fe78c73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:39:36 np0005466030 nova_compute[230518]: 2025-10-02 12:39:36.038 2 DEBUG nova.network.neutron [req-733ecb88-70cf-498a-9d49-57ce6605b58c req-365d1a68-e275-4a9c-911b-2480920c3385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updating instance_info_cache with network_info: [{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:39:36 np0005466030 nova_compute[230518]: 2025-10-02 12:39:36.122 2 DEBUG oslo_concurrency.lockutils [req-733ecb88-70cf-498a-9d49-57ce6605b58c req-365d1a68-e275-4a9c-911b-2480920c3385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:39:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:37.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:37.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:37 np0005466030 nova_compute[230518]: 2025-10-02 12:39:37.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:38 np0005466030 nova_compute[230518]: 2025-10-02 12:39:38.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:39 np0005466030 nova_compute[230518]: 2025-10-02 12:39:39.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:39.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:39.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:41.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:41.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:41 np0005466030 podman[271077]: 2025-10-02 12:39:41.802078606 +0000 UTC m=+0.057434199 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3)
Oct  2 08:39:41 np0005466030 podman[271078]: 2025-10-02 12:39:41.820581607 +0000 UTC m=+0.074051340 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  2 08:39:42 np0005466030 ovn_controller[129257]: 2025-10-02T12:39:42Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c4:d9:d3 10.100.0.14
Oct  2 08:39:42 np0005466030 ovn_controller[129257]: 2025-10-02T12:39:42Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c4:d9:d3 10.100.0.14
Oct  2 08:39:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:39:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:43.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:39:43 np0005466030 nova_compute[230518]: 2025-10-02 12:39:43.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:43.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:44 np0005466030 nova_compute[230518]: 2025-10-02 12:39:44.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:45.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:39:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:45.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:39:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:47.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:47.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:47 np0005466030 nova_compute[230518]: 2025-10-02 12:39:47.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:48 np0005466030 nova_compute[230518]: 2025-10-02 12:39:48.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:49 np0005466030 nova_compute[230518]: 2025-10-02 12:39:49.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:49.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:49.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:51.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:51.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:53.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:53 np0005466030 nova_compute[230518]: 2025-10-02 12:39:53.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:39:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:53.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:39:54 np0005466030 nova_compute[230518]: 2025-10-02 12:39:54.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:39:54 np0005466030 nova_compute[230518]: 2025-10-02 12:39:54.062 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquiring lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:39:54 np0005466030 nova_compute[230518]: 2025-10-02 12:39:54.063 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:39:54 np0005466030 nova_compute[230518]: 2025-10-02 12:39:54.174 2 DEBUG nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:39:54 np0005466030 nova_compute[230518]: 2025-10-02 12:39:54.604 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:39:54 np0005466030 nova_compute[230518]: 2025-10-02 12:39:54.605 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:39:54 np0005466030 nova_compute[230518]: 2025-10-02 12:39:54.613 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:39:54 np0005466030 nova_compute[230518]: 2025-10-02 12:39:54.613 2 INFO nova.compute.claims [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:39:54 np0005466030 nova_compute[230518]: 2025-10-02 12:39:54.916 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:39:55 np0005466030 nova_compute[230518]: 2025-10-02 12:39:55.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:39:55 np0005466030 nova_compute[230518]: 2025-10-02 12:39:55.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:39:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:55.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:55.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:55 np0005466030 nova_compute[230518]: 2025-10-02 12:39:55.257 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:39:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:39:55 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/717183492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:39:55 np0005466030 nova_compute[230518]: 2025-10-02 12:39:55.366 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:39:55 np0005466030 nova_compute[230518]: 2025-10-02 12:39:55.372 2 DEBUG nova.compute.provider_tree [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:39:55 np0005466030 nova_compute[230518]: 2025-10-02 12:39:55.413 2 DEBUG nova.scheduler.client.report [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:39:55 np0005466030 nova_compute[230518]: 2025-10-02 12:39:55.488 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:39:55 np0005466030 nova_compute[230518]: 2025-10-02 12:39:55.489 2 DEBUG nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:39:55 np0005466030 nova_compute[230518]: 2025-10-02 12:39:55.492 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:39:55 np0005466030 nova_compute[230518]: 2025-10-02 12:39:55.492 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:39:55 np0005466030 nova_compute[230518]: 2025-10-02 12:39:55.492 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  2 08:39:55 np0005466030 nova_compute[230518]: 2025-10-02 12:39:55.493 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:39:55 np0005466030 nova_compute[230518]: 2025-10-02 12:39:55.745 2 DEBUG nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:39:55 np0005466030 nova_compute[230518]: 2025-10-02 12:39:55.746 2 DEBUG nova.network.neutron [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:39:55 np0005466030 nova_compute[230518]: 2025-10-02 12:39:55.901 2 INFO nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:39:55 np0005466030 nova_compute[230518]: 2025-10-02 12:39:55.989 2 DEBUG nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:39:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:39:56 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2398810375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.049 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.149 2 DEBUG nova.policy [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea3659a324824b3991f98e26e33c752f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '44e6ad861d934450b2090f40fab255f0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.202 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.203 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.228 2 DEBUG nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.230 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.231 2 INFO nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Creating image(s)#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.260 2 DEBUG nova.storage.rbd_utils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] rbd image 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.285 2 DEBUG nova.storage.rbd_utils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] rbd image 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.311 2 DEBUG nova.storage.rbd_utils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] rbd image 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.314 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.399 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.401 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.401 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.402 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.431 2 DEBUG nova.storage.rbd_utils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] rbd image 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.434 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.580 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.582 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4274MB free_disk=20.78521728515625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.583 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.583 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.849 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7621a774-e0bc-4f4f-b900-c3608dd6835a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.849 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 104be830-8fcd-47dd-a2b4-f92b66dcbc80 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.849 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:39:56 np0005466030 nova_compute[230518]: 2025-10-02 12:39:56.850 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:39:57 np0005466030 nova_compute[230518]: 2025-10-02 12:39:57.069 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:39:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:39:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:57.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:39:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:57.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:57 np0005466030 nova_compute[230518]: 2025-10-02 12:39:57.529 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:39:57 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4133083068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:39:57 np0005466030 nova_compute[230518]: 2025-10-02 12:39:57.634 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:39:57 np0005466030 nova_compute[230518]: 2025-10-02 12:39:57.641 2 DEBUG nova.storage.rbd_utils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] resizing rbd image 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:39:57 np0005466030 nova_compute[230518]: 2025-10-02 12:39:57.831 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:39:57 np0005466030 nova_compute[230518]: 2025-10-02 12:39:57.979 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:39:58 np0005466030 nova_compute[230518]: 2025-10-02 12:39:58.079 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:39:58 np0005466030 nova_compute[230518]: 2025-10-02 12:39:58.079 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.496s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:58 np0005466030 nova_compute[230518]: 2025-10-02 12:39:58.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:58 np0005466030 nova_compute[230518]: 2025-10-02 12:39:58.238 2 DEBUG nova.objects.instance [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lazy-loading 'migration_context' on Instance uuid 104be830-8fcd-47dd-a2b4-f92b66dcbc80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:39:58 np0005466030 nova_compute[230518]: 2025-10-02 12:39:58.324 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:39:58 np0005466030 nova_compute[230518]: 2025-10-02 12:39:58.324 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Ensure instance console log exists: /var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:39:58 np0005466030 nova_compute[230518]: 2025-10-02 12:39:58.325 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:39:58 np0005466030 nova_compute[230518]: 2025-10-02 12:39:58.325 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:39:58 np0005466030 nova_compute[230518]: 2025-10-02 12:39:58.325 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:39:58 np0005466030 nova_compute[230518]: 2025-10-02 12:39:58.369 2 DEBUG nova.network.neutron [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Successfully created port: 06cb1a9b-ada6-4485-bce1-9582e5b82b6f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:39:59 np0005466030 nova_compute[230518]: 2025-10-02 12:39:59.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:39:59 np0005466030 nova_compute[230518]: 2025-10-02 12:39:59.079 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:39:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:59.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:39:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:39:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:39:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:59.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:00 np0005466030 nova_compute[230518]: 2025-10-02 12:40:00.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:00 np0005466030 nova_compute[230518]: 2025-10-02 12:40:00.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:00 np0005466030 nova_compute[230518]: 2025-10-02 12:40:00.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:00 np0005466030 ceph-mon[80926]: overall HEALTH_OK
Oct  2 08:40:00 np0005466030 nova_compute[230518]: 2025-10-02 12:40:00.517 2 DEBUG nova.network.neutron [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Successfully updated port: 06cb1a9b-ada6-4485-bce1-9582e5b82b6f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:40:00 np0005466030 nova_compute[230518]: 2025-10-02 12:40:00.615 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquiring lock "refresh_cache-104be830-8fcd-47dd-a2b4-f92b66dcbc80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:40:00 np0005466030 nova_compute[230518]: 2025-10-02 12:40:00.616 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquired lock "refresh_cache-104be830-8fcd-47dd-a2b4-f92b66dcbc80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:40:00 np0005466030 nova_compute[230518]: 2025-10-02 12:40:00.616 2 DEBUG nova.network.neutron [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:40:00 np0005466030 nova_compute[230518]: 2025-10-02 12:40:00.741 2 DEBUG nova.compute.manager [req-7482d442-8f66-4140-9b7d-329aa83d6c75 req-c9d45933-ae27-4a32-b102-3c7069e086ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Received event network-changed-06cb1a9b-ada6-4485-bce1-9582e5b82b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:40:00 np0005466030 nova_compute[230518]: 2025-10-02 12:40:00.741 2 DEBUG nova.compute.manager [req-7482d442-8f66-4140-9b7d-329aa83d6c75 req-c9d45933-ae27-4a32-b102-3c7069e086ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Refreshing instance network info cache due to event network-changed-06cb1a9b-ada6-4485-bce1-9582e5b82b6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:40:00 np0005466030 nova_compute[230518]: 2025-10-02 12:40:00.742 2 DEBUG oslo_concurrency.lockutils [req-7482d442-8f66-4140-9b7d-329aa83d6c75 req-c9d45933-ae27-4a32-b102-3c7069e086ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-104be830-8fcd-47dd-a2b4-f92b66dcbc80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:40:00 np0005466030 podman[271353]: 2025-10-02 12:40:00.835843239 +0000 UTC m=+0.079510301 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:40:00 np0005466030 podman[271352]: 2025-10-02 12:40:00.868167597 +0000 UTC m=+0.114477143 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller)
Oct  2 08:40:01 np0005466030 nova_compute[230518]: 2025-10-02 12:40:01.050 2 DEBUG nova.network.neutron [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:40:01 np0005466030 nova_compute[230518]: 2025-10-02 12:40:01.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:01 np0005466030 nova_compute[230518]: 2025-10-02 12:40:01.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:40:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:01.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:01.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:02.374 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:40:02 np0005466030 nova_compute[230518]: 2025-10-02 12:40:02.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:02.375 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:40:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:03.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:03.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.362 2 DEBUG nova.network.neutron [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Updating instance_info_cache with network_info: [{"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.530 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Releasing lock "refresh_cache-104be830-8fcd-47dd-a2b4-f92b66dcbc80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.531 2 DEBUG nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Instance network_info: |[{"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.531 2 DEBUG oslo_concurrency.lockutils [req-7482d442-8f66-4140-9b7d-329aa83d6c75 req-c9d45933-ae27-4a32-b102-3c7069e086ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-104be830-8fcd-47dd-a2b4-f92b66dcbc80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.532 2 DEBUG nova.network.neutron [req-7482d442-8f66-4140-9b7d-329aa83d6c75 req-c9d45933-ae27-4a32-b102-3c7069e086ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Refreshing network info cache for port 06cb1a9b-ada6-4485-bce1-9582e5b82b6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.538 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Start _get_guest_xml network_info=[{"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.545 2 WARNING nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.550 2 DEBUG nova.virt.libvirt.host [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.551 2 DEBUG nova.virt.libvirt.host [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.555 2 DEBUG nova.virt.libvirt.host [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.556 2 DEBUG nova.virt.libvirt.host [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.557 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.557 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.557 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.557 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.558 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.558 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.558 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.558 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.558 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.558 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.558 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.559 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:40:03 np0005466030 nova_compute[230518]: 2025-10-02 12:40:03.561 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:40:04 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1075071985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.058 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.088 2 DEBUG nova.storage.rbd_utils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] rbd image 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.093 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:40:04 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/938358496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.501 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.503 2 DEBUG nova.virt.libvirt.vif [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:39:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1275138439',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1275138439',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1275138439',id=107,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44e6ad861d934450b2090f40fab255f0',ramdisk_id='',reservation_id='r-i33rn6v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-73058497
7',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-730584977-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:39:56Z,user_data=None,user_id='ea3659a324824b3991f98e26e33c752f',uuid=104be830-8fcd-47dd-a2b4-f92b66dcbc80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.503 2 DEBUG nova.network.os_vif_util [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Converting VIF {"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.504 2 DEBUG nova.network.os_vif_util [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:d7:1c,bridge_name='br-int',has_traffic_filtering=True,id=06cb1a9b-ada6-4485-bce1-9582e5b82b6f,network=Network(17002dea-5c5e-46a6-892d-d32d33c1f02d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cb1a9b-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.505 2 DEBUG nova.objects.instance [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 104be830-8fcd-47dd-a2b4-f92b66dcbc80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.565 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:40:04 np0005466030 nova_compute[230518]:  <uuid>104be830-8fcd-47dd-a2b4-f92b66dcbc80</uuid>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:  <name>instance-0000006b</name>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-1275138439</nova:name>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:40:03</nova:creationTime>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:40:04 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:        <nova:user uuid="ea3659a324824b3991f98e26e33c752f">tempest-ServersNegativeTestMultiTenantJSON-730584977-project-member</nova:user>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:        <nova:project uuid="44e6ad861d934450b2090f40fab255f0">tempest-ServersNegativeTestMultiTenantJSON-730584977</nova:project>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:        <nova:port uuid="06cb1a9b-ada6-4485-bce1-9582e5b82b6f">
Oct  2 08:40:04 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <entry name="serial">104be830-8fcd-47dd-a2b4-f92b66dcbc80</entry>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <entry name="uuid">104be830-8fcd-47dd-a2b4-f92b66dcbc80</entry>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk">
Oct  2 08:40:04 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:40:04 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk.config">
Oct  2 08:40:04 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:40:04 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:80:d7:1c"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <target dev="tap06cb1a9b-ad"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80/console.log" append="off"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:40:04 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:40:04 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:40:04 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:40:04 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.566 2 DEBUG nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Preparing to wait for external event network-vif-plugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.567 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquiring lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.567 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.567 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.568 2 DEBUG nova.virt.libvirt.vif [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:39:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1275138439',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1275138439',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1275138439',id=107,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44e6ad861d934450b2090f40fab255f0',ramdisk_id='',reservation_id='r-i33rn6v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-730584977',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-730584977-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:39:56Z,user_data=None,user_id='ea3659a324824b3991f98e26e33c752f',uuid=104be830-8fcd-47dd-a2b4-f92b66dcbc80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.568 2 DEBUG nova.network.os_vif_util [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Converting VIF {"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.569 2 DEBUG nova.network.os_vif_util [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:d7:1c,bridge_name='br-int',has_traffic_filtering=True,id=06cb1a9b-ada6-4485-bce1-9582e5b82b6f,network=Network(17002dea-5c5e-46a6-892d-d32d33c1f02d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cb1a9b-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.569 2 DEBUG os_vif [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:d7:1c,bridge_name='br-int',has_traffic_filtering=True,id=06cb1a9b-ada6-4485-bce1-9582e5b82b6f,network=Network(17002dea-5c5e-46a6-892d-d32d33c1f02d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cb1a9b-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.570 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.570 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.574 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06cb1a9b-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.575 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap06cb1a9b-ad, col_values=(('external_ids', {'iface-id': '06cb1a9b-ada6-4485-bce1-9582e5b82b6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:d7:1c', 'vm-uuid': '104be830-8fcd-47dd-a2b4-f92b66dcbc80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:04 np0005466030 NetworkManager[44960]: <info>  [1759408804.5777] manager: (tap06cb1a9b-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/207)
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.584 2 INFO os_vif [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:d7:1c,bridge_name='br-int',has_traffic_filtering=True,id=06cb1a9b-ada6-4485-bce1-9582e5b82b6f,network=Network(17002dea-5c5e-46a6-892d-d32d33c1f02d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cb1a9b-ad')#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.845 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.845 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.846 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] No VIF found with MAC fa:16:3e:80:d7:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.846 2 INFO nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Using config drive#033[00m
Oct  2 08:40:04 np0005466030 nova_compute[230518]: 2025-10-02 12:40:04.893 2 DEBUG nova.storage.rbd_utils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] rbd image 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:05 np0005466030 nova_compute[230518]: 2025-10-02 12:40:05.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:05.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:05.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:05 np0005466030 nova_compute[230518]: 2025-10-02 12:40:05.957 2 INFO nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Creating config drive at /var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80/disk.config#033[00m
Oct  2 08:40:05 np0005466030 nova_compute[230518]: 2025-10-02 12:40:05.961 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps8quwt69 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:06 np0005466030 nova_compute[230518]: 2025-10-02 12:40:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:06 np0005466030 nova_compute[230518]: 2025-10-02 12:40:06.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:40:06 np0005466030 nova_compute[230518]: 2025-10-02 12:40:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:40:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:40:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:40:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:40:06 np0005466030 nova_compute[230518]: 2025-10-02 12:40:06.095 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps8quwt69" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:06 np0005466030 nova_compute[230518]: 2025-10-02 12:40:06.244 2 DEBUG nova.storage.rbd_utils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] rbd image 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:40:06 np0005466030 nova_compute[230518]: 2025-10-02 12:40:06.247 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80/disk.config 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:06 np0005466030 nova_compute[230518]: 2025-10-02 12:40:06.274 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:40:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:06 np0005466030 nova_compute[230518]: 2025-10-02 12:40:06.902 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:40:06 np0005466030 nova_compute[230518]: 2025-10-02 12:40:06.902 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:40:06 np0005466030 nova_compute[230518]: 2025-10-02 12:40:06.903 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:40:06 np0005466030 nova_compute[230518]: 2025-10-02 12:40:06.903 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7621a774-e0bc-4f4f-b900-c3608dd6835a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:40:06 np0005466030 nova_compute[230518]: 2025-10-02 12:40:06.944 2 DEBUG nova.network.neutron [req-7482d442-8f66-4140-9b7d-329aa83d6c75 req-c9d45933-ae27-4a32-b102-3c7069e086ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Updated VIF entry in instance network info cache for port 06cb1a9b-ada6-4485-bce1-9582e5b82b6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:40:06 np0005466030 nova_compute[230518]: 2025-10-02 12:40:06.944 2 DEBUG nova.network.neutron [req-7482d442-8f66-4140-9b7d-329aa83d6c75 req-c9d45933-ae27-4a32-b102-3c7069e086ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Updating instance_info_cache with network_info: [{"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:40:07 np0005466030 nova_compute[230518]: 2025-10-02 12:40:07.108 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80/disk.config 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.861s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:07 np0005466030 nova_compute[230518]: 2025-10-02 12:40:07.108 2 INFO nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Deleting local config drive /var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80/disk.config because it was imported into RBD.#033[00m
Oct  2 08:40:07 np0005466030 nova_compute[230518]: 2025-10-02 12:40:07.140 2 DEBUG oslo_concurrency.lockutils [req-7482d442-8f66-4140-9b7d-329aa83d6c75 req-c9d45933-ae27-4a32-b102-3c7069e086ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-104be830-8fcd-47dd-a2b4-f92b66dcbc80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:40:07 np0005466030 kernel: tap06cb1a9b-ad: entered promiscuous mode
Oct  2 08:40:07 np0005466030 NetworkManager[44960]: <info>  [1759408807.1784] manager: (tap06cb1a9b-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Oct  2 08:40:07 np0005466030 ovn_controller[129257]: 2025-10-02T12:40:07Z|00432|binding|INFO|Claiming lport 06cb1a9b-ada6-4485-bce1-9582e5b82b6f for this chassis.
Oct  2 08:40:07 np0005466030 ovn_controller[129257]: 2025-10-02T12:40:07Z|00433|binding|INFO|06cb1a9b-ada6-4485-bce1-9582e5b82b6f: Claiming fa:16:3e:80:d7:1c 10.100.0.7
Oct  2 08:40:07 np0005466030 nova_compute[230518]: 2025-10-02 12:40:07.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:07 np0005466030 ovn_controller[129257]: 2025-10-02T12:40:07Z|00434|binding|INFO|Setting lport 06cb1a9b-ada6-4485-bce1-9582e5b82b6f ovn-installed in OVS
Oct  2 08:40:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:07.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:07 np0005466030 nova_compute[230518]: 2025-10-02 12:40:07.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:07 np0005466030 ovn_controller[129257]: 2025-10-02T12:40:07Z|00435|binding|INFO|Setting lport 06cb1a9b-ada6-4485-bce1-9582e5b82b6f up in Southbound
Oct  2 08:40:07 np0005466030 nova_compute[230518]: 2025-10-02 12:40:07.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.208 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:d7:1c 10.100.0.7'], port_security=['fa:16:3e:80:d7:1c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '104be830-8fcd-47dd-a2b4-f92b66dcbc80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17002dea-5c5e-46a6-892d-d32d33c1f02d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44e6ad861d934450b2090f40fab255f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '26178b5a-3362-47ee-8eb1-d9a8089548c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab327b80-2049-4821-a881-044c34a4c8df, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=06cb1a9b-ada6-4485-bce1-9582e5b82b6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.211 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 06cb1a9b-ada6-4485-bce1-9582e5b82b6f in datapath 17002dea-5c5e-46a6-892d-d32d33c1f02d bound to our chassis#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.215 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17002dea-5c5e-46a6-892d-d32d33c1f02d#033[00m
Oct  2 08:40:07 np0005466030 systemd-machined[188247]: New machine qemu-53-instance-0000006b.
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.230 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d948edaa-1458-448b-8ebf-5c8f9473b468]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.231 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap17002dea-51 in ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:40:07 np0005466030 systemd[1]: Started Virtual Machine qemu-53-instance-0000006b.
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.236 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap17002dea-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.236 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d39c58e6-5282-403f-afd1-61df01da6fac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.237 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b808c8-e0f0-4518-888b-09ddaad39c55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:07 np0005466030 systemd-udevd[271660]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:40:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:07.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:07 np0005466030 NetworkManager[44960]: <info>  [1759408807.2581] device (tap06cb1a9b-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:40:07 np0005466030 NetworkManager[44960]: <info>  [1759408807.2592] device (tap06cb1a9b-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.266 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[565b6bb6-431c-4f95-827b-eb17a50c1629]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.300 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bbcfd085-ee4b-41e5-80d6-0633e850915e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.330 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[bdabbcf9-a78d-4e08-b03e-8f363cbf9103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:07 np0005466030 NetworkManager[44960]: <info>  [1759408807.3389] manager: (tap17002dea-50): new Veth device (/org/freedesktop/NetworkManager/Devices/209)
Oct  2 08:40:07 np0005466030 systemd-udevd[271663]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.337 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f7dac6f7-661e-4f17-bc18-62e3e07a33f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.375 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[874c6c87-2742-496c-a544-08ce9988baa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.376 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.377 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ea6f7a32-356b-43d5-bfa4-0552f4eb16ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:07 np0005466030 NetworkManager[44960]: <info>  [1759408807.4038] device (tap17002dea-50): carrier: link connected
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.408 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b5570c9d-62df-4882-be05-928828a610de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.423 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a7bf0e-462f-4d34-8559-2b9fbd329e8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17002dea-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:14:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667096, 'reachable_time': 22697, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271692, 'error': None, 'target': 'ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.440 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[14da3ba9-2bee-4f1d-90f7-5c1cb0a14a6a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:14e3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667096, 'tstamp': 667096}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271693, 'error': None, 'target': 'ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.457 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2b683b-25a3-4c87-a134-dd2d48dc04d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17002dea-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:14:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667096, 'reachable_time': 22697, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271701, 'error': None, 'target': 'ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.481 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[213180b5-3eb6-4f35-a8b8-1d082e908826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.521 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1627ffd9-2101-4eca-9121-5a3d27c12847]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.522 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17002dea-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.522 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.522 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17002dea-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:07 np0005466030 NetworkManager[44960]: <info>  [1759408807.5250] manager: (tap17002dea-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Oct  2 08:40:07 np0005466030 nova_compute[230518]: 2025-10-02 12:40:07.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:07 np0005466030 kernel: tap17002dea-50: entered promiscuous mode
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.529 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17002dea-50, col_values=(('external_ids', {'iface-id': 'ecf8c253-9ae0-49e5-afdc-690e186df947'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:07 np0005466030 nova_compute[230518]: 2025-10-02 12:40:07.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:07 np0005466030 nova_compute[230518]: 2025-10-02 12:40:07.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:07 np0005466030 ovn_controller[129257]: 2025-10-02T12:40:07Z|00436|binding|INFO|Releasing lport ecf8c253-9ae0-49e5-afdc-690e186df947 from this chassis (sb_readonly=0)
Oct  2 08:40:07 np0005466030 nova_compute[230518]: 2025-10-02 12:40:07.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.545 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/17002dea-5c5e-46a6-892d-d32d33c1f02d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/17002dea-5c5e-46a6-892d-d32d33c1f02d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.545 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bee04259-91b9-42c4-8aaa-0779db113807]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.546 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-17002dea-5c5e-46a6-892d-d32d33c1f02d
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/17002dea-5c5e-46a6-892d-d32d33c1f02d.pid.haproxy
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 17002dea-5c5e-46a6-892d-d32d33c1f02d
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:40:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.546 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d', 'env', 'PROCESS_TAG=haproxy-17002dea-5c5e-46a6-892d-d32d33c1f02d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/17002dea-5c5e-46a6-892d-d32d33c1f02d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:40:07 np0005466030 podman[271768]: 2025-10-02 12:40:07.929211669 +0000 UTC m=+0.054200016 container create c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:40:07 np0005466030 systemd[1]: Started libpod-conmon-c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca.scope.
Oct  2 08:40:07 np0005466030 podman[271768]: 2025-10-02 12:40:07.899801313 +0000 UTC m=+0.024789700 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:40:08 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:40:08 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca61862b6333cfd3d3887fbf1324742c9b2c5bd3ebde3922936d76af4c5a58c8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:40:08 np0005466030 podman[271768]: 2025-10-02 12:40:08.015533555 +0000 UTC m=+0.140521922 container init c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 08:40:08 np0005466030 podman[271768]: 2025-10-02 12:40:08.020289705 +0000 UTC m=+0.145278062 container start c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:40:08 np0005466030 neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d[271784]: [NOTICE]   (271788) : New worker (271790) forked
Oct  2 08:40:08 np0005466030 neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d[271784]: [NOTICE]   (271788) : Loading success.
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.316 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408808.3157983, 104be830-8fcd-47dd-a2b4-f92b66dcbc80 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.317 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] VM Started (Lifecycle Event)#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.476 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.481 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408808.3166614, 104be830-8fcd-47dd-a2b4-f92b66dcbc80 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.481 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.519 2 DEBUG nova.compute.manager [req-ab9c0217-5f63-434b-8a0c-b832921b227b req-574409a9-e92a-4f7d-821a-64a8acecc65e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Received event network-vif-plugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.519 2 DEBUG oslo_concurrency.lockutils [req-ab9c0217-5f63-434b-8a0c-b832921b227b req-574409a9-e92a-4f7d-821a-64a8acecc65e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.520 2 DEBUG oslo_concurrency.lockutils [req-ab9c0217-5f63-434b-8a0c-b832921b227b req-574409a9-e92a-4f7d-821a-64a8acecc65e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.520 2 DEBUG oslo_concurrency.lockutils [req-ab9c0217-5f63-434b-8a0c-b832921b227b req-574409a9-e92a-4f7d-821a-64a8acecc65e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.520 2 DEBUG nova.compute.manager [req-ab9c0217-5f63-434b-8a0c-b832921b227b req-574409a9-e92a-4f7d-821a-64a8acecc65e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Processing event network-vif-plugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.521 2 DEBUG nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.526 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.528 2 INFO nova.virt.libvirt.driver [-] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Instance spawned successfully.#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.529 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.572 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.576 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408808.525901, 104be830-8fcd-47dd-a2b4-f92b66dcbc80 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.576 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.599 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.599 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.600 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.600 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.600 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.601 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.727 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.732 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:40:08 np0005466030 nova_compute[230518]: 2025-10-02 12:40:08.830 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:40:09 np0005466030 nova_compute[230518]: 2025-10-02 12:40:09.043 2 INFO nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Took 12.81 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:40:09 np0005466030 nova_compute[230518]: 2025-10-02 12:40:09.044 2 DEBUG nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:40:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:40:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:09.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:40:09 np0005466030 nova_compute[230518]: 2025-10-02 12:40:09.236 2 INFO nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Took 14.67 seconds to build instance.#033[00m
Oct  2 08:40:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:09.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:09 np0005466030 nova_compute[230518]: 2025-10-02 12:40:09.523 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:09 np0005466030 nova_compute[230518]: 2025-10-02 12:40:09.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:10 np0005466030 nova_compute[230518]: 2025-10-02 12:40:10.962 2 DEBUG nova.compute.manager [req-0e9b95f5-6eba-4647-9980-ec52d0e26a75 req-3c2ac0d1-a8e2-4d96-9414-bfd32ac1d57e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Received event network-vif-plugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:40:10 np0005466030 nova_compute[230518]: 2025-10-02 12:40:10.962 2 DEBUG oslo_concurrency.lockutils [req-0e9b95f5-6eba-4647-9980-ec52d0e26a75 req-3c2ac0d1-a8e2-4d96-9414-bfd32ac1d57e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:10 np0005466030 nova_compute[230518]: 2025-10-02 12:40:10.963 2 DEBUG oslo_concurrency.lockutils [req-0e9b95f5-6eba-4647-9980-ec52d0e26a75 req-3c2ac0d1-a8e2-4d96-9414-bfd32ac1d57e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:10 np0005466030 nova_compute[230518]: 2025-10-02 12:40:10.963 2 DEBUG oslo_concurrency.lockutils [req-0e9b95f5-6eba-4647-9980-ec52d0e26a75 req-3c2ac0d1-a8e2-4d96-9414-bfd32ac1d57e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:10 np0005466030 nova_compute[230518]: 2025-10-02 12:40:10.963 2 DEBUG nova.compute.manager [req-0e9b95f5-6eba-4647-9980-ec52d0e26a75 req-3c2ac0d1-a8e2-4d96-9414-bfd32ac1d57e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] No waiting events found dispatching network-vif-plugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:40:10 np0005466030 nova_compute[230518]: 2025-10-02 12:40:10.963 2 WARNING nova.compute.manager [req-0e9b95f5-6eba-4647-9980-ec52d0e26a75 req-3c2ac0d1-a8e2-4d96-9414-bfd32ac1d57e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Received unexpected event network-vif-plugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f for instance with vm_state active and task_state None.#033[00m
Oct  2 08:40:11 np0005466030 nova_compute[230518]: 2025-10-02 12:40:11.183 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updating instance_info_cache with network_info: [{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:40:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:11.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:11 np0005466030 nova_compute[230518]: 2025-10-02 12:40:11.231 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:40:11 np0005466030 nova_compute[230518]: 2025-10-02 12:40:11.232 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:40:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:11.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:12 np0005466030 podman[271800]: 2025-10-02 12:40:12.873198935 +0000 UTC m=+0.102970731 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:40:12 np0005466030 podman[271799]: 2025-10-02 12:40:12.871882564 +0000 UTC m=+0.100952168 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:40:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:13.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:13.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.434 2 DEBUG oslo_concurrency.lockutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquiring lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.436 2 DEBUG oslo_concurrency.lockutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.436 2 DEBUG oslo_concurrency.lockutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquiring lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.437 2 DEBUG oslo_concurrency.lockutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.437 2 DEBUG oslo_concurrency.lockutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.440 2 INFO nova.compute.manager [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Terminating instance#033[00m
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.442 2 DEBUG nova.compute.manager [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:40:13 np0005466030 kernel: tap06cb1a9b-ad (unregistering): left promiscuous mode
Oct  2 08:40:13 np0005466030 NetworkManager[44960]: <info>  [1759408813.5259] device (tap06cb1a9b-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:40:13Z|00437|binding|INFO|Releasing lport 06cb1a9b-ada6-4485-bce1-9582e5b82b6f from this chassis (sb_readonly=0)
Oct  2 08:40:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:40:13Z|00438|binding|INFO|Setting lport 06cb1a9b-ada6-4485-bce1-9582e5b82b6f down in Southbound
Oct  2 08:40:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:40:13Z|00439|binding|INFO|Removing iface tap06cb1a9b-ad ovn-installed in OVS
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:13 np0005466030 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Oct  2 08:40:13 np0005466030 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000006b.scope: Consumed 5.904s CPU time.
Oct  2 08:40:13 np0005466030 systemd-machined[188247]: Machine qemu-53-instance-0000006b terminated.
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.684 2 INFO nova.virt.libvirt.driver [-] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Instance destroyed successfully.#033[00m
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.684 2 DEBUG nova.objects.instance [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lazy-loading 'resources' on Instance uuid 104be830-8fcd-47dd-a2b4-f92b66dcbc80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:40:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:13.690 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:d7:1c 10.100.0.7'], port_security=['fa:16:3e:80:d7:1c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '104be830-8fcd-47dd-a2b4-f92b66dcbc80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17002dea-5c5e-46a6-892d-d32d33c1f02d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44e6ad861d934450b2090f40fab255f0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '26178b5a-3362-47ee-8eb1-d9a8089548c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab327b80-2049-4821-a881-044c34a4c8df, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=06cb1a9b-ada6-4485-bce1-9582e5b82b6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:40:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:13.692 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 06cb1a9b-ada6-4485-bce1-9582e5b82b6f in datapath 17002dea-5c5e-46a6-892d-d32d33c1f02d unbound from our chassis#033[00m
Oct  2 08:40:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:13.694 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 17002dea-5c5e-46a6-892d-d32d33c1f02d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:40:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:13.695 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea327e6-313f-49d2-8ae5-c4384c0b57ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:13.696 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d namespace which is not needed anymore#033[00m
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.837 2 DEBUG nova.virt.libvirt.vif [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:39:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1275138439',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1275138439',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1275138439',id=107,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:40:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='44e6ad861d934450b2090f40fab255f0',ramdisk_id='',reservation_id='r-i33rn6v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-730584977',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-730584977-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:40:09Z,user_data=None,user_id='ea3659a324824b3991f98e26e33c752f',uuid=104be830-8fcd-47dd-a2b4-f92b66dcbc80,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.837 2 DEBUG nova.network.os_vif_util [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Converting VIF {"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.838 2 DEBUG nova.network.os_vif_util [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:d7:1c,bridge_name='br-int',has_traffic_filtering=True,id=06cb1a9b-ada6-4485-bce1-9582e5b82b6f,network=Network(17002dea-5c5e-46a6-892d-d32d33c1f02d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cb1a9b-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.838 2 DEBUG os_vif [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:d7:1c,bridge_name='br-int',has_traffic_filtering=True,id=06cb1a9b-ada6-4485-bce1-9582e5b82b6f,network=Network(17002dea-5c5e-46a6-892d-d32d33c1f02d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cb1a9b-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.840 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06cb1a9b-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:40:13 np0005466030 nova_compute[230518]: 2025-10-02 12:40:13.846 2 INFO os_vif [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:d7:1c,bridge_name='br-int',has_traffic_filtering=True,id=06cb1a9b-ada6-4485-bce1-9582e5b82b6f,network=Network(17002dea-5c5e-46a6-892d-d32d33c1f02d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cb1a9b-ad')#033[00m
Oct  2 08:40:13 np0005466030 neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d[271784]: [NOTICE]   (271788) : haproxy version is 2.8.14-c23fe91
Oct  2 08:40:13 np0005466030 neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d[271784]: [NOTICE]   (271788) : path to executable is /usr/sbin/haproxy
Oct  2 08:40:13 np0005466030 neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d[271784]: [WARNING]  (271788) : Exiting Master process...
Oct  2 08:40:13 np0005466030 neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d[271784]: [ALERT]    (271788) : Current worker (271790) exited with code 143 (Terminated)
Oct  2 08:40:13 np0005466030 neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d[271784]: [WARNING]  (271788) : All workers exited. Exiting... (0)
Oct  2 08:40:13 np0005466030 systemd[1]: libpod-c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca.scope: Deactivated successfully.
Oct  2 08:40:13 np0005466030 conmon[271784]: conmon c4bc0140ee7828efdecc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca.scope/container/memory.events
Oct  2 08:40:13 np0005466030 podman[271874]: 2025-10-02 12:40:13.874011755 +0000 UTC m=+0.060740483 container died c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:40:13 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca-userdata-shm.mount: Deactivated successfully.
Oct  2 08:40:13 np0005466030 systemd[1]: var-lib-containers-storage-overlay-ca61862b6333cfd3d3887fbf1324742c9b2c5bd3ebde3922936d76af4c5a58c8-merged.mount: Deactivated successfully.
Oct  2 08:40:13 np0005466030 podman[271874]: 2025-10-02 12:40:13.935181208 +0000 UTC m=+0.121909926 container cleanup c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:40:13 np0005466030 systemd[1]: libpod-conmon-c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca.scope: Deactivated successfully.
Oct  2 08:40:14 np0005466030 podman[271921]: 2025-10-02 12:40:14.002790286 +0000 UTC m=+0.044751889 container remove c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:40:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:14.008 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b7bda872-50de-42b2-b670-3e15617561f3]: (4, ('Thu Oct  2 12:40:13 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d (c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca)\nc4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca\nThu Oct  2 12:40:13 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d (c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca)\nc4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:14.010 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fc11aa79-a367-48ef-b40d-d46a0eaeb5cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:14.011 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17002dea-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:40:14 np0005466030 nova_compute[230518]: 2025-10-02 12:40:14.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:14 np0005466030 kernel: tap17002dea-50: left promiscuous mode
Oct  2 08:40:14 np0005466030 nova_compute[230518]: 2025-10-02 12:40:14.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:14.028 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[63bb4682-8d1d-48dc-80bb-07ef53e0a62d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:14.060 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c90e6814-f4da-4182-8c92-e5665de28a6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:14.062 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[da2f6056-56ad-4c7a-b15e-47b97d7a7248]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:14.083 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9e69ec05-a08b-433d-8415-b0d00cfbfc97]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667088, 'reachable_time': 33985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271936, 'error': None, 'target': 'ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:14.086 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:40:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:14.087 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[691712bb-59d4-4d26-9150-17016ee84795]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:40:14 np0005466030 systemd[1]: run-netns-ovnmeta\x2d17002dea\x2d5c5e\x2d46a6\x2d892d\x2dd32d33c1f02d.mount: Deactivated successfully.
Oct  2 08:40:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:15.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:15.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:15 np0005466030 nova_compute[230518]: 2025-10-02 12:40:15.800 2 DEBUG nova.compute.manager [req-40f4b9a8-13bd-4c46-8a64-399d8d49ba85 req-007126bf-78a6-426d-946a-2593fc8bd7d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Received event network-vif-unplugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:40:15 np0005466030 nova_compute[230518]: 2025-10-02 12:40:15.801 2 DEBUG oslo_concurrency.lockutils [req-40f4b9a8-13bd-4c46-8a64-399d8d49ba85 req-007126bf-78a6-426d-946a-2593fc8bd7d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:15 np0005466030 nova_compute[230518]: 2025-10-02 12:40:15.801 2 DEBUG oslo_concurrency.lockutils [req-40f4b9a8-13bd-4c46-8a64-399d8d49ba85 req-007126bf-78a6-426d-946a-2593fc8bd7d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:15 np0005466030 nova_compute[230518]: 2025-10-02 12:40:15.802 2 DEBUG oslo_concurrency.lockutils [req-40f4b9a8-13bd-4c46-8a64-399d8d49ba85 req-007126bf-78a6-426d-946a-2593fc8bd7d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:15 np0005466030 nova_compute[230518]: 2025-10-02 12:40:15.802 2 DEBUG nova.compute.manager [req-40f4b9a8-13bd-4c46-8a64-399d8d49ba85 req-007126bf-78a6-426d-946a-2593fc8bd7d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] No waiting events found dispatching network-vif-unplugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:40:15 np0005466030 nova_compute[230518]: 2025-10-02 12:40:15.803 2 DEBUG nova.compute.manager [req-40f4b9a8-13bd-4c46-8a64-399d8d49ba85 req-007126bf-78a6-426d-946a-2593fc8bd7d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Received event network-vif-unplugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:40:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:17 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:40:17 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:40:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:17.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:40:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:17.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:40:17 np0005466030 nova_compute[230518]: 2025-10-02 12:40:17.589 2 INFO nova.virt.libvirt.driver [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Deleting instance files /var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80_del#033[00m
Oct  2 08:40:17 np0005466030 nova_compute[230518]: 2025-10-02 12:40:17.590 2 INFO nova.virt.libvirt.driver [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Deletion of /var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80_del complete#033[00m
Oct  2 08:40:17 np0005466030 nova_compute[230518]: 2025-10-02 12:40:17.726 2 INFO nova.compute.manager [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Took 4.28 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:40:17 np0005466030 nova_compute[230518]: 2025-10-02 12:40:17.727 2 DEBUG oslo.service.loopingcall [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:40:17 np0005466030 nova_compute[230518]: 2025-10-02 12:40:17.728 2 DEBUG nova.compute.manager [-] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:40:17 np0005466030 nova_compute[230518]: 2025-10-02 12:40:17.728 2 DEBUG nova.network.neutron [-] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:40:18 np0005466030 nova_compute[230518]: 2025-10-02 12:40:18.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:18 np0005466030 nova_compute[230518]: 2025-10-02 12:40:18.271 2 DEBUG nova.compute.manager [req-fe20158d-57cf-4791-b299-96f082ebf6ec req-eec7c545-38c1-425e-a524-a0380a490532 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Received event network-vif-plugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:40:18 np0005466030 nova_compute[230518]: 2025-10-02 12:40:18.272 2 DEBUG oslo_concurrency.lockutils [req-fe20158d-57cf-4791-b299-96f082ebf6ec req-eec7c545-38c1-425e-a524-a0380a490532 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:18 np0005466030 nova_compute[230518]: 2025-10-02 12:40:18.272 2 DEBUG oslo_concurrency.lockutils [req-fe20158d-57cf-4791-b299-96f082ebf6ec req-eec7c545-38c1-425e-a524-a0380a490532 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:18 np0005466030 nova_compute[230518]: 2025-10-02 12:40:18.273 2 DEBUG oslo_concurrency.lockutils [req-fe20158d-57cf-4791-b299-96f082ebf6ec req-eec7c545-38c1-425e-a524-a0380a490532 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:18 np0005466030 nova_compute[230518]: 2025-10-02 12:40:18.273 2 DEBUG nova.compute.manager [req-fe20158d-57cf-4791-b299-96f082ebf6ec req-eec7c545-38c1-425e-a524-a0380a490532 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] No waiting events found dispatching network-vif-plugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:40:18 np0005466030 nova_compute[230518]: 2025-10-02 12:40:18.273 2 WARNING nova.compute.manager [req-fe20158d-57cf-4791-b299-96f082ebf6ec req-eec7c545-38c1-425e-a524-a0380a490532 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Received unexpected event network-vif-plugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:40:18 np0005466030 nova_compute[230518]: 2025-10-02 12:40:18.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:19.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:19.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:19 np0005466030 nova_compute[230518]: 2025-10-02 12:40:19.950 2 DEBUG nova.network.neutron [-] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:40:20 np0005466030 nova_compute[230518]: 2025-10-02 12:40:20.028 2 INFO nova.compute.manager [-] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Took 2.30 seconds to deallocate network for instance.#033[00m
Oct  2 08:40:20 np0005466030 nova_compute[230518]: 2025-10-02 12:40:20.172 2 DEBUG oslo_concurrency.lockutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:20 np0005466030 nova_compute[230518]: 2025-10-02 12:40:20.172 2 DEBUG oslo_concurrency.lockutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:20 np0005466030 nova_compute[230518]: 2025-10-02 12:40:20.246 2 DEBUG oslo_concurrency.processutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:40:20 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1813702915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:40:20 np0005466030 nova_compute[230518]: 2025-10-02 12:40:20.681 2 DEBUG oslo_concurrency.processutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:20 np0005466030 nova_compute[230518]: 2025-10-02 12:40:20.690 2 DEBUG nova.compute.provider_tree [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:40:20 np0005466030 nova_compute[230518]: 2025-10-02 12:40:20.844 2 DEBUG nova.scheduler.client.report [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:40:20 np0005466030 nova_compute[230518]: 2025-10-02 12:40:20.931 2 DEBUG oslo_concurrency.lockutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:21 np0005466030 nova_compute[230518]: 2025-10-02 12:40:21.004 2 INFO nova.scheduler.client.report [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Deleted allocations for instance 104be830-8fcd-47dd-a2b4-f92b66dcbc80#033[00m
Oct  2 08:40:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:21.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:21 np0005466030 nova_compute[230518]: 2025-10-02 12:40:21.262 2 DEBUG nova.compute.manager [req-5d8e86c3-c3c3-4816-95d8-e518ffb20ca1 req-40464c59-57f3-43cb-90eb-cdf13734e981 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Received event network-vif-deleted-06cb1a9b-ada6-4485-bce1-9582e5b82b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:40:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:21.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:21 np0005466030 nova_compute[230518]: 2025-10-02 12:40:21.299 2 DEBUG oslo_concurrency.lockutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:40:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:23.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:40:23 np0005466030 nova_compute[230518]: 2025-10-02 12:40:23.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:23.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:23 np0005466030 nova_compute[230518]: 2025-10-02 12:40:23.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:40:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:25.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:40:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:40:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:25.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:40:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:25.937 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:25.937 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:40:25.938 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:40:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:27.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:40:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:27.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:28 np0005466030 nova_compute[230518]: 2025-10-02 12:40:28.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:28 np0005466030 nova_compute[230518]: 2025-10-02 12:40:28.682 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408813.6819274, 104be830-8fcd-47dd-a2b4-f92b66dcbc80 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:40:28 np0005466030 nova_compute[230518]: 2025-10-02 12:40:28.683 2 INFO nova.compute.manager [-] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:40:28 np0005466030 nova_compute[230518]: 2025-10-02 12:40:28.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:40:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:29.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:40:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:29.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:31.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:31.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:31 np0005466030 podman[272011]: 2025-10-02 12:40:31.851503202 +0000 UTC m=+0.083054573 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:40:31 np0005466030 podman[272010]: 2025-10-02 12:40:31.862313672 +0000 UTC m=+0.108871494 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 08:40:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:40:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:33.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:40:33 np0005466030 nova_compute[230518]: 2025-10-02 12:40:33.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:33.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:33 np0005466030 nova_compute[230518]: 2025-10-02 12:40:33.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:40:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:35.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:40:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:35.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:40:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:37.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:40:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:37.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:38 np0005466030 nova_compute[230518]: 2025-10-02 12:40:38.094 2 DEBUG nova.compute.manager [None req-2f41a135-e1d7-48be-92f7-4bd37b124aa8 - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:40:38 np0005466030 nova_compute[230518]: 2025-10-02 12:40:38.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:38 np0005466030 nova_compute[230518]: 2025-10-02 12:40:38.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:39.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:39.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:41.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:41.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:43 np0005466030 nova_compute[230518]: 2025-10-02 12:40:43.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:43.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:43.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:43 np0005466030 podman[272056]: 2025-10-02 12:40:43.795176157 +0000 UTC m=+0.053295857 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:40:43 np0005466030 podman[272057]: 2025-10-02 12:40:43.801432663 +0000 UTC m=+0.056043023 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:40:43 np0005466030 nova_compute[230518]: 2025-10-02 12:40:43.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:44 np0005466030 ovn_controller[129257]: 2025-10-02T12:40:44Z|00440|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct  2 08:40:44 np0005466030 nova_compute[230518]: 2025-10-02 12:40:44.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:45.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:45.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:47.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:47.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:48 np0005466030 nova_compute[230518]: 2025-10-02 12:40:48.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:48 np0005466030 nova_compute[230518]: 2025-10-02 12:40:48.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:49.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:40:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:49.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:40:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:51.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:51.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:53 np0005466030 nova_compute[230518]: 2025-10-02 12:40:53.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:53 np0005466030 nova_compute[230518]: 2025-10-02 12:40:53.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:53.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:53.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:53 np0005466030 nova_compute[230518]: 2025-10-02 12:40:53.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:55.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:55.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:56 np0005466030 nova_compute[230518]: 2025-10-02 12:40:56.189 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:40:57 np0005466030 nova_compute[230518]: 2025-10-02 12:40:57.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:57 np0005466030 nova_compute[230518]: 2025-10-02 12:40:57.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:40:57 np0005466030 nova_compute[230518]: 2025-10-02 12:40:57.289 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:57 np0005466030 nova_compute[230518]: 2025-10-02 12:40:57.290 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:57 np0005466030 nova_compute[230518]: 2025-10-02 12:40:57.290 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:40:57 np0005466030 nova_compute[230518]: 2025-10-02 12:40:57.290 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:40:57 np0005466030 nova_compute[230518]: 2025-10-02 12:40:57.291 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:40:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:57.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:40:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:40:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:57.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3880091232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:40:57 np0005466030 nova_compute[230518]: 2025-10-02 12:40:57.722 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #88. Immutable memtables: 0.
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:57.941922) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 88
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408857941969, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2421, "num_deletes": 254, "total_data_size": 5721580, "memory_usage": 5799424, "flush_reason": "Manual Compaction"}
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #89: started
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408857974636, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 89, "file_size": 3742892, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44404, "largest_seqno": 46820, "table_properties": {"data_size": 3733017, "index_size": 6241, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21101, "raw_average_key_size": 20, "raw_value_size": 3713052, "raw_average_value_size": 3658, "num_data_blocks": 271, "num_entries": 1015, "num_filter_entries": 1015, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408657, "oldest_key_time": 1759408657, "file_creation_time": 1759408857, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 32751 microseconds, and 12965 cpu microseconds.
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:57.974674) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #89: 3742892 bytes OK
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:57.974693) [db/memtable_list.cc:519] [default] Level-0 commit table #89 started
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:57.978994) [db/memtable_list.cc:722] [default] Level-0 commit table #89: memtable #1 done
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:57.979007) EVENT_LOG_v1 {"time_micros": 1759408857979003, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:57.979022) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 5710751, prev total WAL file size 5710751, number of live WAL files 2.
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000085.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:57.980222) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [89(3655KB)], [87(9402KB)]
Oct  2 08:40:57 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408857980253, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [89], "files_L6": [87], "score": -1, "input_data_size": 13371274, "oldest_snapshot_seqno": -1}
Oct  2 08:40:58 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #90: 7134 keys, 11443223 bytes, temperature: kUnknown
Oct  2 08:40:58 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408858075562, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 90, "file_size": 11443223, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11394681, "index_size": 29614, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17861, "raw_key_size": 183721, "raw_average_key_size": 25, "raw_value_size": 11266366, "raw_average_value_size": 1579, "num_data_blocks": 1173, "num_entries": 7134, "num_filter_entries": 7134, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759408857, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 90, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:40:58 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:40:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:58.075854) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 11443223 bytes
Oct  2 08:40:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:58.078616) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.1 rd, 119.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.2 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(6.6) write-amplify(3.1) OK, records in: 7662, records dropped: 528 output_compression: NoCompression
Oct  2 08:40:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:58.078635) EVENT_LOG_v1 {"time_micros": 1759408858078625, "job": 54, "event": "compaction_finished", "compaction_time_micros": 95471, "compaction_time_cpu_micros": 22431, "output_level": 6, "num_output_files": 1, "total_output_size": 11443223, "num_input_records": 7662, "num_output_records": 7134, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:40:58 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:40:58 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408858079749, "job": 54, "event": "table_file_deletion", "file_number": 89}
Oct  2 08:40:58 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000087.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:40:58 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408858081673, "job": 54, "event": "table_file_deletion", "file_number": 87}
Oct  2 08:40:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:57.980145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:40:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:58.081788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:40:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:58.081792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:40:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:58.081794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:40:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:58.081795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:40:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:58.081797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:40:58 np0005466030 nova_compute[230518]: 2025-10-02 12:40:58.111 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:40:58 np0005466030 nova_compute[230518]: 2025-10-02 12:40:58.111 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:40:58 np0005466030 nova_compute[230518]: 2025-10-02 12:40:58.246 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:40:58 np0005466030 nova_compute[230518]: 2025-10-02 12:40:58.247 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4271MB free_disk=20.742088317871094GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:40:58 np0005466030 nova_compute[230518]: 2025-10-02 12:40:58.247 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:40:58 np0005466030 nova_compute[230518]: 2025-10-02 12:40:58.247 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:40:58 np0005466030 nova_compute[230518]: 2025-10-02 12:40:58.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:58 np0005466030 nova_compute[230518]: 2025-10-02 12:40:58.710 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7621a774-e0bc-4f4f-b900-c3608dd6835a actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:40:58 np0005466030 nova_compute[230518]: 2025-10-02 12:40:58.711 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:40:58 np0005466030 nova_compute[230518]: 2025-10-02 12:40:58.711 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:40:58 np0005466030 nova_compute[230518]: 2025-10-02 12:40:58.815 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:40:58 np0005466030 nova_compute[230518]: 2025-10-02 12:40:58.884 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:40:58 np0005466030 nova_compute[230518]: 2025-10-02 12:40:58.885 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:40:58 np0005466030 nova_compute[230518]: 2025-10-02 12:40:58.903 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:40:58 np0005466030 nova_compute[230518]: 2025-10-02 12:40:58.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:40:58 np0005466030 nova_compute[230518]: 2025-10-02 12:40:58.922 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:40:58 np0005466030 nova_compute[230518]: 2025-10-02 12:40:58.971 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:40:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:59.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:40:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:40:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:59.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:40:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:40:59 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2487088342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:40:59 np0005466030 nova_compute[230518]: 2025-10-02 12:40:59.409 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:40:59 np0005466030 nova_compute[230518]: 2025-10-02 12:40:59.415 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:40:59 np0005466030 nova_compute[230518]: 2025-10-02 12:40:59.568 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:41:00 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:00Z|00441|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct  2 08:41:00 np0005466030 nova_compute[230518]: 2025-10-02 12:41:00.029 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:41:00 np0005466030 nova_compute[230518]: 2025-10-02 12:41:00.029 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:00 np0005466030 nova_compute[230518]: 2025-10-02 12:41:00.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:01 np0005466030 nova_compute[230518]: 2025-10-02 12:41:01.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:01 np0005466030 nova_compute[230518]: 2025-10-02 12:41:01.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:01 np0005466030 nova_compute[230518]: 2025-10-02 12:41:01.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:01 np0005466030 nova_compute[230518]: 2025-10-02 12:41:01.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:41:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:01.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:01.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:01.376 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:41:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:01.378 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:41:01 np0005466030 nova_compute[230518]: 2025-10-02 12:41:01.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:02 np0005466030 nova_compute[230518]: 2025-10-02 12:41:02.104 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:02 np0005466030 nova_compute[230518]: 2025-10-02 12:41:02.531 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:02 np0005466030 nova_compute[230518]: 2025-10-02 12:41:02.531 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:02 np0005466030 nova_compute[230518]: 2025-10-02 12:41:02.602 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:41:02 np0005466030 podman[272140]: 2025-10-02 12:41:02.79851309 +0000 UTC m=+0.051034156 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 08:41:02 np0005466030 podman[272139]: 2025-10-02 12:41:02.821334748 +0000 UTC m=+0.075444114 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 08:41:02 np0005466030 nova_compute[230518]: 2025-10-02 12:41:02.894 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:02 np0005466030 nova_compute[230518]: 2025-10-02 12:41:02.894 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:02 np0005466030 nova_compute[230518]: 2025-10-02 12:41:02.899 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:41:02 np0005466030 nova_compute[230518]: 2025-10-02 12:41:02.899 2 INFO nova.compute.claims [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:41:03 np0005466030 nova_compute[230518]: 2025-10-02 12:41:03.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:03 np0005466030 nova_compute[230518]: 2025-10-02 12:41:03.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:41:03 np0005466030 nova_compute[230518]: 2025-10-02 12:41:03.177 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:03 np0005466030 nova_compute[230518]: 2025-10-02 12:41:03.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:03.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:03.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:41:03 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3647303231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:41:03 np0005466030 nova_compute[230518]: 2025-10-02 12:41:03.681 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:03 np0005466030 nova_compute[230518]: 2025-10-02 12:41:03.686 2 DEBUG nova.compute.provider_tree [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:41:03 np0005466030 nova_compute[230518]: 2025-10-02 12:41:03.716 2 DEBUG nova.scheduler.client.report [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:41:03 np0005466030 nova_compute[230518]: 2025-10-02 12:41:03.761 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:03 np0005466030 nova_compute[230518]: 2025-10-02 12:41:03.762 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:41:03 np0005466030 nova_compute[230518]: 2025-10-02 12:41:03.874 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:41:03 np0005466030 nova_compute[230518]: 2025-10-02 12:41:03.874 2 DEBUG nova.network.neutron [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:41:03 np0005466030 nova_compute[230518]: 2025-10-02 12:41:03.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:03 np0005466030 nova_compute[230518]: 2025-10-02 12:41:03.915 2 INFO nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:41:03 np0005466030 nova_compute[230518]: 2025-10-02 12:41:03.938 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:41:04 np0005466030 nova_compute[230518]: 2025-10-02 12:41:04.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:04 np0005466030 nova_compute[230518]: 2025-10-02 12:41:04.067 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:41:04 np0005466030 nova_compute[230518]: 2025-10-02 12:41:04.069 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:41:04 np0005466030 nova_compute[230518]: 2025-10-02 12:41:04.069 2 INFO nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Creating image(s)#033[00m
Oct  2 08:41:04 np0005466030 nova_compute[230518]: 2025-10-02 12:41:04.105 2 DEBUG nova.storage.rbd_utils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:41:04 np0005466030 nova_compute[230518]: 2025-10-02 12:41:04.143 2 DEBUG nova.storage.rbd_utils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:41:04 np0005466030 nova_compute[230518]: 2025-10-02 12:41:04.178 2 DEBUG nova.storage.rbd_utils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:41:04 np0005466030 nova_compute[230518]: 2025-10-02 12:41:04.182 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:04 np0005466030 nova_compute[230518]: 2025-10-02 12:41:04.239 2 DEBUG nova.policy [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9ef7a5dbc3524ee8a7efcd0d3ae36787', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a82ed194b379425aa5e1f31b993eee81', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:41:04 np0005466030 nova_compute[230518]: 2025-10-02 12:41:04.277 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:04 np0005466030 nova_compute[230518]: 2025-10-02 12:41:04.278 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:04 np0005466030 nova_compute[230518]: 2025-10-02 12:41:04.279 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:04 np0005466030 nova_compute[230518]: 2025-10-02 12:41:04.279 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:04 np0005466030 nova_compute[230518]: 2025-10-02 12:41:04.314 2 DEBUG nova.storage.rbd_utils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:41:04 np0005466030 nova_compute[230518]: 2025-10-02 12:41:04.318 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:05.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:41:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:05.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:41:05 np0005466030 nova_compute[230518]: 2025-10-02 12:41:05.431 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:05 np0005466030 nova_compute[230518]: 2025-10-02 12:41:05.541 2 DEBUG nova.storage.rbd_utils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] resizing rbd image 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:41:05 np0005466030 nova_compute[230518]: 2025-10-02 12:41:05.616 2 DEBUG nova.network.neutron [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Successfully created port: 007233fd-556d-43ce-97fa-0f19306ba0aa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:41:05 np0005466030 nova_compute[230518]: 2025-10-02 12:41:05.762 2 DEBUG nova.objects.instance [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lazy-loading 'migration_context' on Instance uuid 4d8c4b3b-58c2-4d3d-863c-49b98333b84d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:41:05 np0005466030 nova_compute[230518]: 2025-10-02 12:41:05.782 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:41:05 np0005466030 nova_compute[230518]: 2025-10-02 12:41:05.783 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Ensure instance console log exists: /var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:41:05 np0005466030 nova_compute[230518]: 2025-10-02 12:41:05.783 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:05 np0005466030 nova_compute[230518]: 2025-10-02 12:41:05.784 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:05 np0005466030 nova_compute[230518]: 2025-10-02 12:41:05.784 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:07 np0005466030 nova_compute[230518]: 2025-10-02 12:41:07.258 2 DEBUG nova.network.neutron [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Successfully created port: 1688d119-1bc8-410a-a80e-8536a113e986 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:41:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:07.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:07.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:08 np0005466030 nova_compute[230518]: 2025-10-02 12:41:08.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:08 np0005466030 nova_compute[230518]: 2025-10-02 12:41:08.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:41:08 np0005466030 nova_compute[230518]: 2025-10-02 12:41:08.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:41:08 np0005466030 nova_compute[230518]: 2025-10-02 12:41:08.094 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:41:08 np0005466030 nova_compute[230518]: 2025-10-02 12:41:08.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:08 np0005466030 nova_compute[230518]: 2025-10-02 12:41:08.333 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:41:08 np0005466030 nova_compute[230518]: 2025-10-02 12:41:08.334 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:41:08 np0005466030 nova_compute[230518]: 2025-10-02 12:41:08.334 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:41:08 np0005466030 nova_compute[230518]: 2025-10-02 12:41:08.335 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7621a774-e0bc-4f4f-b900-c3608dd6835a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:41:08 np0005466030 nova_compute[230518]: 2025-10-02 12:41:08.648 2 DEBUG nova.network.neutron [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Successfully updated port: 007233fd-556d-43ce-97fa-0f19306ba0aa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:41:08 np0005466030 nova_compute[230518]: 2025-10-02 12:41:08.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:09 np0005466030 nova_compute[230518]: 2025-10-02 12:41:09.102 2 DEBUG nova.compute.manager [req-b1e0c660-98f8-42cf-bd59-c01e8f34de08 req-51dc8058-d315-4393-8a05-430065b0fa42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-changed-007233fd-556d-43ce-97fa-0f19306ba0aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:41:09 np0005466030 nova_compute[230518]: 2025-10-02 12:41:09.103 2 DEBUG nova.compute.manager [req-b1e0c660-98f8-42cf-bd59-c01e8f34de08 req-51dc8058-d315-4393-8a05-430065b0fa42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Refreshing instance network info cache due to event network-changed-007233fd-556d-43ce-97fa-0f19306ba0aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:41:09 np0005466030 nova_compute[230518]: 2025-10-02 12:41:09.104 2 DEBUG oslo_concurrency.lockutils [req-b1e0c660-98f8-42cf-bd59-c01e8f34de08 req-51dc8058-d315-4393-8a05-430065b0fa42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4d8c4b3b-58c2-4d3d-863c-49b98333b84d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:41:09 np0005466030 nova_compute[230518]: 2025-10-02 12:41:09.104 2 DEBUG oslo_concurrency.lockutils [req-b1e0c660-98f8-42cf-bd59-c01e8f34de08 req-51dc8058-d315-4393-8a05-430065b0fa42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4d8c4b3b-58c2-4d3d-863c-49b98333b84d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:41:09 np0005466030 nova_compute[230518]: 2025-10-02 12:41:09.105 2 DEBUG nova.network.neutron [req-b1e0c660-98f8-42cf-bd59-c01e8f34de08 req-51dc8058-d315-4393-8a05-430065b0fa42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Refreshing network info cache for port 007233fd-556d-43ce-97fa-0f19306ba0aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:41:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:09.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:09.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:09 np0005466030 nova_compute[230518]: 2025-10-02 12:41:09.405 2 DEBUG nova.network.neutron [req-b1e0c660-98f8-42cf-bd59-c01e8f34de08 req-51dc8058-d315-4393-8a05-430065b0fa42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:41:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e264 e264: 3 total, 3 up, 3 in
Oct  2 08:41:10 np0005466030 nova_compute[230518]: 2025-10-02 12:41:10.289 2 DEBUG nova.network.neutron [req-b1e0c660-98f8-42cf-bd59-c01e8f34de08 req-51dc8058-d315-4393-8a05-430065b0fa42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:41:10 np0005466030 nova_compute[230518]: 2025-10-02 12:41:10.312 2 DEBUG oslo_concurrency.lockutils [req-b1e0c660-98f8-42cf-bd59-c01e8f34de08 req-51dc8058-d315-4393-8a05-430065b0fa42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4d8c4b3b-58c2-4d3d-863c-49b98333b84d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:41:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:10.381 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:11 np0005466030 nova_compute[230518]: 2025-10-02 12:41:11.138 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updating instance_info_cache with network_info: [{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:41:11 np0005466030 nova_compute[230518]: 2025-10-02 12:41:11.169 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:41:11 np0005466030 nova_compute[230518]: 2025-10-02 12:41:11.170 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:41:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999983s ======
Oct  2 08:41:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:11.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999983s
Oct  2 08:41:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:11.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:12 np0005466030 nova_compute[230518]: 2025-10-02 12:41:12.248 2 DEBUG nova.network.neutron [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Successfully updated port: 1688d119-1bc8-410a-a80e-8536a113e986 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:41:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e265 e265: 3 total, 3 up, 3 in
Oct  2 08:41:12 np0005466030 nova_compute[230518]: 2025-10-02 12:41:12.469 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "refresh_cache-4d8c4b3b-58c2-4d3d-863c-49b98333b84d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:41:12 np0005466030 nova_compute[230518]: 2025-10-02 12:41:12.470 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquired lock "refresh_cache-4d8c4b3b-58c2-4d3d-863c-49b98333b84d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:41:12 np0005466030 nova_compute[230518]: 2025-10-02 12:41:12.470 2 DEBUG nova.network.neutron [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:41:12 np0005466030 nova_compute[230518]: 2025-10-02 12:41:12.967 2 DEBUG nova.compute.manager [req-01f8422e-394e-4bf1-8f8a-04dfba2ff1c4 req-26c8e608-c554-4b69-b701-60a97d714ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-changed-1688d119-1bc8-410a-a80e-8536a113e986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:41:12 np0005466030 nova_compute[230518]: 2025-10-02 12:41:12.968 2 DEBUG nova.compute.manager [req-01f8422e-394e-4bf1-8f8a-04dfba2ff1c4 req-26c8e608-c554-4b69-b701-60a97d714ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Refreshing instance network info cache due to event network-changed-1688d119-1bc8-410a-a80e-8536a113e986. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:41:12 np0005466030 nova_compute[230518]: 2025-10-02 12:41:12.968 2 DEBUG oslo_concurrency.lockutils [req-01f8422e-394e-4bf1-8f8a-04dfba2ff1c4 req-26c8e608-c554-4b69-b701-60a97d714ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4d8c4b3b-58c2-4d3d-863c-49b98333b84d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:41:13 np0005466030 nova_compute[230518]: 2025-10-02 12:41:13.142 2 DEBUG nova.network.neutron [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:41:13 np0005466030 nova_compute[230518]: 2025-10-02 12:41:13.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:13.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:13.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e266 e266: 3 total, 3 up, 3 in
Oct  2 08:41:13 np0005466030 nova_compute[230518]: 2025-10-02 12:41:13.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:14 np0005466030 podman[272372]: 2025-10-02 12:41:14.838712572 +0000 UTC m=+0.075116284 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:41:14 np0005466030 podman[272373]: 2025-10-02 12:41:14.866433992 +0000 UTC m=+0.090519837 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:41:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:15.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:15.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.113 2 DEBUG nova.network.neutron [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Updating instance_info_cache with network_info: [{"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:41:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:17.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:17.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.367 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Releasing lock "refresh_cache-4d8c4b3b-58c2-4d3d-863c-49b98333b84d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.367 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Instance network_info: |[{"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.368 2 DEBUG oslo_concurrency.lockutils [req-01f8422e-394e-4bf1-8f8a-04dfba2ff1c4 req-26c8e608-c554-4b69-b701-60a97d714ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4d8c4b3b-58c2-4d3d-863c-49b98333b84d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.368 2 DEBUG nova.network.neutron [req-01f8422e-394e-4bf1-8f8a-04dfba2ff1c4 req-26c8e608-c554-4b69-b701-60a97d714ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Refreshing network info cache for port 1688d119-1bc8-410a-a80e-8536a113e986 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.372 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Start _get_guest_xml network_info=[{"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.378 2 WARNING nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.383 2 DEBUG nova.virt.libvirt.host [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.385 2 DEBUG nova.virt.libvirt.host [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.390 2 DEBUG nova.virt.libvirt.host [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.390 2 DEBUG nova.virt.libvirt.host [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.392 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.393 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.393 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.394 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.394 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.395 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.395 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.396 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.396 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.397 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.397 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.398 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.402 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:41:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1150966713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.892 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.933 2 DEBUG nova.storage.rbd_utils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:41:17 np0005466030 nova_compute[230518]: 2025-10-02 12:41:17.944 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:18 np0005466030 podman[272645]: 2025-10-02 12:41:18.421776378 +0000 UTC m=+0.201276280 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.437 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.440 2 DEBUG nova.virt.libvirt.vif [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1784036592',display_name='tempest-ServersTestMultiNic-server-1784036592',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1784036592',id=110,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-ztoikflv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-
project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:03Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=4d8c4b3b-58c2-4d3d-863c-49b98333b84d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.440 2 DEBUG nova.network.os_vif_util [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.441 2 DEBUG nova.network.os_vif_util [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:1b:4f,bridge_name='br-int',has_traffic_filtering=True,id=007233fd-556d-43ce-97fa-0f19306ba0aa,network=Network(fb774493-1e03-4988-a332-4e7f3684ace8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007233fd-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.442 2 DEBUG nova.virt.libvirt.vif [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1784036592',display_name='tempest-ServersTestMultiNic-server-1784036592',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1784036592',id=110,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-ztoikflv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-
project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:03Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=4d8c4b3b-58c2-4d3d-863c-49b98333b84d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.442 2 DEBUG nova.network.os_vif_util [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.442 2 DEBUG nova.network.os_vif_util [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:fd:ef,bridge_name='br-int',has_traffic_filtering=True,id=1688d119-1bc8-410a-a80e-8536a113e986,network=Network(6536e1f6-8914-462b-bd28-3b66d21243dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1688d119-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.444 2 DEBUG nova.objects.instance [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4d8c4b3b-58c2-4d3d-863c-49b98333b84d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:41:18 np0005466030 podman[272645]: 2025-10-02 12:41:18.543770244 +0000 UTC m=+0.323270146 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.568 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:41:18 np0005466030 nova_compute[230518]:  <uuid>4d8c4b3b-58c2-4d3d-863c-49b98333b84d</uuid>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:  <name>instance-0000006e</name>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServersTestMultiNic-server-1784036592</nova:name>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:41:17</nova:creationTime>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:41:18 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:        <nova:user uuid="9ef7a5dbc3524ee8a7efcd0d3ae36787">tempest-ServersTestMultiNic-2055566246-project-member</nova:user>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:        <nova:project uuid="a82ed194b379425aa5e1f31b993eee81">tempest-ServersTestMultiNic-2055566246</nova:project>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:        <nova:port uuid="007233fd-556d-43ce-97fa-0f19306ba0aa">
Oct  2 08:41:18 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.118" ipVersion="4"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:        <nova:port uuid="1688d119-1bc8-410a-a80e-8536a113e986">
Oct  2 08:41:18 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.1.129" ipVersion="4"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <entry name="serial">4d8c4b3b-58c2-4d3d-863c-49b98333b84d</entry>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <entry name="uuid">4d8c4b3b-58c2-4d3d-863c-49b98333b84d</entry>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk">
Oct  2 08:41:18 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:41:18 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk.config">
Oct  2 08:41:18 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:41:18 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:15:1b:4f"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <target dev="tap007233fd-55"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:df:fd:ef"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <target dev="tap1688d119-1b"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d/console.log" append="off"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:41:18 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:41:18 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:41:18 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:41:18 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.569 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Preparing to wait for external event network-vif-plugged-007233fd-556d-43ce-97fa-0f19306ba0aa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.570 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.570 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.571 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.571 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Preparing to wait for external event network-vif-plugged-1688d119-1bc8-410a-a80e-8536a113e986 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.572 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.572 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.573 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.574 2 DEBUG nova.virt.libvirt.vif [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1784036592',display_name='tempest-ServersTestMultiNic-server-1784036592',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1784036592',id=110,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-ztoikflv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:03Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=4d8c4b3b-58c2-4d3d-863c-49b98333b84d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.575 2 DEBUG nova.network.os_vif_util [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.576 2 DEBUG nova.network.os_vif_util [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:1b:4f,bridge_name='br-int',has_traffic_filtering=True,id=007233fd-556d-43ce-97fa-0f19306ba0aa,network=Network(fb774493-1e03-4988-a332-4e7f3684ace8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007233fd-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.577 2 DEBUG os_vif [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:1b:4f,bridge_name='br-int',has_traffic_filtering=True,id=007233fd-556d-43ce-97fa-0f19306ba0aa,network=Network(fb774493-1e03-4988-a332-4e7f3684ace8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007233fd-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.578 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.579 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.584 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap007233fd-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap007233fd-55, col_values=(('external_ids', {'iface-id': '007233fd-556d-43ce-97fa-0f19306ba0aa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:1b:4f', 'vm-uuid': '4d8c4b3b-58c2-4d3d-863c-49b98333b84d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:18 np0005466030 NetworkManager[44960]: <info>  [1759408878.6346] manager: (tap007233fd-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.645 2 INFO os_vif [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:1b:4f,bridge_name='br-int',has_traffic_filtering=True,id=007233fd-556d-43ce-97fa-0f19306ba0aa,network=Network(fb774493-1e03-4988-a332-4e7f3684ace8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007233fd-55')#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.647 2 DEBUG nova.virt.libvirt.vif [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1784036592',display_name='tempest-ServersTestMultiNic-server-1784036592',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1784036592',id=110,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-ztoikflv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:03Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=4d8c4b3b-58c2-4d3d-863c-49b98333b84d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.647 2 DEBUG nova.network.os_vif_util [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.648 2 DEBUG nova.network.os_vif_util [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:fd:ef,bridge_name='br-int',has_traffic_filtering=True,id=1688d119-1bc8-410a-a80e-8536a113e986,network=Network(6536e1f6-8914-462b-bd28-3b66d21243dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1688d119-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.649 2 DEBUG os_vif [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:fd:ef,bridge_name='br-int',has_traffic_filtering=True,id=1688d119-1bc8-410a-a80e-8536a113e986,network=Network(6536e1f6-8914-462b-bd28-3b66d21243dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1688d119-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.650 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.650 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.655 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1688d119-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.656 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1688d119-1b, col_values=(('external_ids', {'iface-id': '1688d119-1bc8-410a-a80e-8536a113e986', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:fd:ef', 'vm-uuid': '4d8c4b3b-58c2-4d3d-863c-49b98333b84d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:18 np0005466030 NetworkManager[44960]: <info>  [1759408878.6590] manager: (tap1688d119-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:18 np0005466030 nova_compute[230518]: 2025-10-02 12:41:18.667 2 INFO os_vif [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:fd:ef,bridge_name='br-int',has_traffic_filtering=True,id=1688d119-1bc8-410a-a80e-8536a113e986,network=Network(6536e1f6-8914-462b-bd28-3b66d21243dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1688d119-1b')#033[00m
Oct  2 08:41:19 np0005466030 nova_compute[230518]: 2025-10-02 12:41:19.013 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:41:19 np0005466030 nova_compute[230518]: 2025-10-02 12:41:19.014 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:41:19 np0005466030 nova_compute[230518]: 2025-10-02 12:41:19.014 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] No VIF found with MAC fa:16:3e:15:1b:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:41:19 np0005466030 nova_compute[230518]: 2025-10-02 12:41:19.015 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] No VIF found with MAC fa:16:3e:df:fd:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:41:19 np0005466030 nova_compute[230518]: 2025-10-02 12:41:19.016 2 INFO nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Using config drive#033[00m
Oct  2 08:41:19 np0005466030 nova_compute[230518]: 2025-10-02 12:41:19.067 2 DEBUG nova.storage.rbd_utils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:41:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:19.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:41:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:19.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:41:19 np0005466030 nova_compute[230518]: 2025-10-02 12:41:19.550 2 INFO nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Creating config drive at /var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d/disk.config#033[00m
Oct  2 08:41:19 np0005466030 nova_compute[230518]: 2025-10-02 12:41:19.559 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf8fe8tpm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:19 np0005466030 nova_compute[230518]: 2025-10-02 12:41:19.699 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf8fe8tpm" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:19 np0005466030 nova_compute[230518]: 2025-10-02 12:41:19.736 2 DEBUG nova.storage.rbd_utils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:41:19 np0005466030 nova_compute[230518]: 2025-10-02 12:41:19.741 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d/disk.config 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:20 np0005466030 nova_compute[230518]: 2025-10-02 12:41:20.253 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d/disk.config 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:20 np0005466030 nova_compute[230518]: 2025-10-02 12:41:20.254 2 INFO nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Deleting local config drive /var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d/disk.config because it was imported into RBD.#033[00m
Oct  2 08:41:20 np0005466030 kernel: tap007233fd-55: entered promiscuous mode
Oct  2 08:41:20 np0005466030 NetworkManager[44960]: <info>  [1759408880.2985] manager: (tap007233fd-55): new Tun device (/org/freedesktop/NetworkManager/Devices/213)
Oct  2 08:41:20 np0005466030 nova_compute[230518]: 2025-10-02 12:41:20.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:20 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:20Z|00442|binding|INFO|Claiming lport 007233fd-556d-43ce-97fa-0f19306ba0aa for this chassis.
Oct  2 08:41:20 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:20Z|00443|binding|INFO|007233fd-556d-43ce-97fa-0f19306ba0aa: Claiming fa:16:3e:15:1b:4f 10.100.0.118
Oct  2 08:41:20 np0005466030 NetworkManager[44960]: <info>  [1759408880.3203] manager: (tap1688d119-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/214)
Oct  2 08:41:20 np0005466030 systemd-udevd[272978]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:41:20 np0005466030 systemd-udevd[272977]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:41:20 np0005466030 kernel: tap1688d119-1b: entered promiscuous mode
Oct  2 08:41:20 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:41:20 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:41:20 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:41:20 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:41:20 np0005466030 NetworkManager[44960]: <info>  [1759408880.3397] device (tap007233fd-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:41:20 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:20Z|00444|if_status|INFO|Not updating pb chassis for 1688d119-1bc8-410a-a80e-8536a113e986 now as sb is readonly
Oct  2 08:41:20 np0005466030 nova_compute[230518]: 2025-10-02 12:41:20.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:20 np0005466030 NetworkManager[44960]: <info>  [1759408880.3407] device (tap007233fd-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:41:20 np0005466030 nova_compute[230518]: 2025-10-02 12:41:20.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:20 np0005466030 nova_compute[230518]: 2025-10-02 12:41:20.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:20 np0005466030 NetworkManager[44960]: <info>  [1759408880.3473] device (tap1688d119-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:41:20 np0005466030 NetworkManager[44960]: <info>  [1759408880.3482] device (tap1688d119-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:41:20 np0005466030 systemd-machined[188247]: New machine qemu-54-instance-0000006e.
Oct  2 08:41:20 np0005466030 systemd[1]: Started Virtual Machine qemu-54-instance-0000006e.
Oct  2 08:41:20 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:20Z|00445|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct  2 08:41:20 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:20Z|00446|binding|INFO|Claiming lport 1688d119-1bc8-410a-a80e-8536a113e986 for this chassis.
Oct  2 08:41:20 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:20Z|00447|binding|INFO|1688d119-1bc8-410a-a80e-8536a113e986: Claiming fa:16:3e:df:fd:ef 10.100.1.129
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.427 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:1b:4f 10.100.0.118'], port_security=['fa:16:3e:15:1b:4f 10.100.0.118'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.118/24', 'neutron:device_id': '4d8c4b3b-58c2-4d3d-863c-49b98333b84d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb774493-1e03-4988-a332-4e7f3684ace8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a82ed194b379425aa5e1f31b993eee81', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a763430-a613-4e24-8301-a0068489d29b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30275ad8-63fb-492c-8ae6-6d69bb1e285c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=007233fd-556d-43ce-97fa-0f19306ba0aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.429 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 007233fd-556d-43ce-97fa-0f19306ba0aa in datapath fb774493-1e03-4988-a332-4e7f3684ace8 bound to our chassis#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.430 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fb774493-1e03-4988-a332-4e7f3684ace8#033[00m
Oct  2 08:41:20 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:20Z|00448|binding|INFO|Setting lport 007233fd-556d-43ce-97fa-0f19306ba0aa ovn-installed in OVS
Oct  2 08:41:20 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:20Z|00449|binding|INFO|Setting lport 007233fd-556d-43ce-97fa-0f19306ba0aa up in Southbound
Oct  2 08:41:20 np0005466030 nova_compute[230518]: 2025-10-02 12:41:20.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.443 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe07b53-83ec-48a3-932d-b34a9ca5a3ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.444 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfb774493-11 in ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.446 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfb774493-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.446 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4f778e-11d8-4ecb-8207-655128b6ceeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.446 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9122fe6d-ce9d-479f-b9a7-1f5d150b7ea7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.457 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[711449e3-30e7-4e96-bdf9-adf0786710b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:20 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:20Z|00450|binding|INFO|Setting lport 1688d119-1bc8-410a-a80e-8536a113e986 ovn-installed in OVS
Oct  2 08:41:20 np0005466030 nova_compute[230518]: 2025-10-02 12:41:20.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.485 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[15eb3f76-f488-4dbd-8be7-60b4e01f62c0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.514 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:fd:ef 10.100.1.129'], port_security=['fa:16:3e:df:fd:ef 10.100.1.129'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.129/24', 'neutron:device_id': '4d8c4b3b-58c2-4d3d-863c-49b98333b84d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6536e1f6-8914-462b-bd28-3b66d21243dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a82ed194b379425aa5e1f31b993eee81', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a763430-a613-4e24-8301-a0068489d29b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96f8db4a-8d74-4e13-9b5c-7bb0fe283c21, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=1688d119-1bc8-410a-a80e-8536a113e986) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:41:20 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:20Z|00451|binding|INFO|Setting lport 1688d119-1bc8-410a-a80e-8536a113e986 up in Southbound
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.519 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5c426186-8fe2-4b92-9cb7-f8752b5adaee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.525 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f34cdf-6263-44d5-8754-419a8a43432b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:20 np0005466030 NetworkManager[44960]: <info>  [1759408880.5262] manager: (tapfb774493-10): new Veth device (/org/freedesktop/NetworkManager/Devices/215)
Oct  2 08:41:20 np0005466030 systemd-udevd[272981]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.565 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[3a18d769-dc2a-45c8-a6f6-c50947652466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.568 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8b07cb-fba0-4628-ba07-1a39634a3528]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:20 np0005466030 NetworkManager[44960]: <info>  [1759408880.5952] device (tapfb774493-10): carrier: link connected
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.600 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[de689ae1-7034-422e-9838-55c53fcb1e27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.617 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[479b3e55-d475-43d8-8661-a65f23a65a07]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb774493-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:b8:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674415, 'reachable_time': 38654, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273014, 'error': None, 'target': 'ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.634 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0679d263-280d-481b-b48a-a7488863ba2f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0c:b8a8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 674415, 'tstamp': 674415}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273015, 'error': None, 'target': 'ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.653 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[280cdfcc-5959-4638-8b7b-01cfbf92b556]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb774493-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:b8:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674415, 'reachable_time': 38654, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273016, 'error': None, 'target': 'ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.691 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[af241a16-6f99-44c9-81ac-bbf0ba15f9c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.755 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a9a05975-9830-48ca-a92f-0784214d1f93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.757 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb774493-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.758 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.759 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb774493-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:20 np0005466030 nova_compute[230518]: 2025-10-02 12:41:20.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:20 np0005466030 kernel: tapfb774493-10: entered promiscuous mode
Oct  2 08:41:20 np0005466030 nova_compute[230518]: 2025-10-02 12:41:20.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:20 np0005466030 NetworkManager[44960]: <info>  [1759408880.7641] manager: (tapfb774493-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/216)
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.770 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfb774493-10, col_values=(('external_ids', {'iface-id': 'aa3d5e72-c8a0-4a99-8a41-fff6fec251ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:20 np0005466030 nova_compute[230518]: 2025-10-02 12:41:20.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:20 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:20Z|00452|binding|INFO|Releasing lport aa3d5e72-c8a0-4a99-8a41-fff6fec251ca from this chassis (sb_readonly=0)
Oct  2 08:41:20 np0005466030 nova_compute[230518]: 2025-10-02 12:41:20.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.774 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fb774493-1e03-4988-a332-4e7f3684ace8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fb774493-1e03-4988-a332-4e7f3684ace8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.776 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7fcf357f-a90e-40cf-b880-da29641b7dde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.777 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-fb774493-1e03-4988-a332-4e7f3684ace8
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/fb774493-1e03-4988-a332-4e7f3684ace8.pid.haproxy
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID fb774493-1e03-4988-a332-4e7f3684ace8
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:41:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.781 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8', 'env', 'PROCESS_TAG=haproxy-fb774493-1e03-4988-a332-4e7f3684ace8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fb774493-1e03-4988-a332-4e7f3684ace8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:41:20 np0005466030 nova_compute[230518]: 2025-10-02 12:41:20.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 e267: 3 total, 3 up, 3 in
Oct  2 08:41:21 np0005466030 podman[273091]: 2025-10-02 12:41:21.230619381 +0000 UTC m=+0.071201979 container create a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:41:21 np0005466030 systemd[1]: Started libpod-conmon-a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e.scope.
Oct  2 08:41:21 np0005466030 podman[273091]: 2025-10-02 12:41:21.193150493 +0000 UTC m=+0.033733111 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:41:21 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:41:21 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca4dd895614093da27e3e57abc7da7e11453af00534b25663f8904c6c64dc69/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:41:21 np0005466030 podman[273091]: 2025-10-02 12:41:21.333550108 +0000 UTC m=+0.174132796 container init a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:41:21 np0005466030 podman[273091]: 2025-10-02 12:41:21.340004141 +0000 UTC m=+0.180586779 container start a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 08:41:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:21.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:41:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:21.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:41:21 np0005466030 neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8[273106]: [NOTICE]   (273110) : New worker (273112) forked
Oct  2 08:41:21 np0005466030 neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8[273106]: [NOTICE]   (273110) : Loading success.
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.402 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 1688d119-1bc8-410a-a80e-8536a113e986 in datapath 6536e1f6-8914-462b-bd28-3b66d21243dc unbound from our chassis#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.405 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6536e1f6-8914-462b-bd28-3b66d21243dc#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.417 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c250c997-3dd7-44dd-abe6-35c12fa23460]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.417 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6536e1f6-81 in ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.420 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6536e1f6-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.420 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ed6c0d34-6ecf-4d06-aa08-e3020f619edb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.421 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a038d1-69d7-4063-ab66-a07ffad0cb6a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.436 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[dbdfed5e-6803-415d-8f9e-1ad147bbb88b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.450 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e3e271-e656-4948-8352-730042ee5d5b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:21 np0005466030 nova_compute[230518]: 2025-10-02 12:41:21.457 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408881.4572148, 4d8c4b3b-58c2-4d3d-863c-49b98333b84d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:41:21 np0005466030 nova_compute[230518]: 2025-10-02 12:41:21.458 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] VM Started (Lifecycle Event)#033[00m
Oct  2 08:41:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:41:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:41:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.491 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[276d09a1-9e92-462f-902a-fb104066a0b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:21 np0005466030 nova_compute[230518]: 2025-10-02 12:41:21.493 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:41:21 np0005466030 nova_compute[230518]: 2025-10-02 12:41:21.497 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408881.4599757, 4d8c4b3b-58c2-4d3d-863c-49b98333b84d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:41:21 np0005466030 nova_compute[230518]: 2025-10-02 12:41:21.497 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:41:21 np0005466030 systemd-udevd[273007]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:41:21 np0005466030 NetworkManager[44960]: <info>  [1759408881.5017] manager: (tap6536e1f6-80): new Veth device (/org/freedesktop/NetworkManager/Devices/217)
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.501 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee11a3c-4705-48c2-8371-9546c2d87182]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.539 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[68d08004-a6c8-4623-97a0-54553a7a22b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.542 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[268802ee-ae13-4389-8296-45e2d8b33445]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:21 np0005466030 nova_compute[230518]: 2025-10-02 12:41:21.543 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:41:21 np0005466030 nova_compute[230518]: 2025-10-02 12:41:21.546 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:41:21 np0005466030 NetworkManager[44960]: <info>  [1759408881.5774] device (tap6536e1f6-80): carrier: link connected
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.584 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[29e271fb-8b51-4bf0-9284-a969ddc196d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.603 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c8c6ef-5dd0-4aea-9a1e-dc146cbde46f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6536e1f6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:c4:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674513, 'reachable_time': 30107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273131, 'error': None, 'target': 'ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.616 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b6cf32d4-b56e-442c-8bee-0051a6d4947f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:c49b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 674513, 'tstamp': 674513}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273132, 'error': None, 'target': 'ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.630 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[31535181-534c-46b1-a29f-60528896419b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6536e1f6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:c4:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674513, 'reachable_time': 30107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273133, 'error': None, 'target': 'ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:21 np0005466030 nova_compute[230518]: 2025-10-02 12:41:21.651 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.660 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[08098629-e34c-4164-97d8-16543c8c4503]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.715 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a78f5343-8b58-40b1-80af-689731945987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.716 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6536e1f6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.716 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.717 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6536e1f6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:21 np0005466030 nova_compute[230518]: 2025-10-02 12:41:21.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:21 np0005466030 kernel: tap6536e1f6-80: entered promiscuous mode
Oct  2 08:41:21 np0005466030 NetworkManager[44960]: <info>  [1759408881.7198] manager: (tap6536e1f6-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Oct  2 08:41:21 np0005466030 nova_compute[230518]: 2025-10-02 12:41:21.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.725 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6536e1f6-80, col_values=(('external_ids', {'iface-id': 'da0b8468-e646-4bba-8522-c8959170677a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:21 np0005466030 nova_compute[230518]: 2025-10-02 12:41:21.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:21 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:21Z|00453|binding|INFO|Releasing lport da0b8468-e646-4bba-8522-c8959170677a from this chassis (sb_readonly=0)
Oct  2 08:41:21 np0005466030 nova_compute[230518]: 2025-10-02 12:41:21.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.731 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6536e1f6-8914-462b-bd28-3b66d21243dc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6536e1f6-8914-462b-bd28-3b66d21243dc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.732 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1ba010-b0ef-449e-8a52-442d4ad140f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.733 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-6536e1f6-8914-462b-bd28-3b66d21243dc
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/6536e1f6-8914-462b-bd28-3b66d21243dc.pid.haproxy
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 6536e1f6-8914-462b-bd28-3b66d21243dc
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:41:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.734 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc', 'env', 'PROCESS_TAG=haproxy-6536e1f6-8914-462b-bd28-3b66d21243dc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6536e1f6-8914-462b-bd28-3b66d21243dc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:41:21 np0005466030 nova_compute[230518]: 2025-10-02 12:41:21.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:22 np0005466030 nova_compute[230518]: 2025-10-02 12:41:22.082 2 DEBUG nova.network.neutron [req-01f8422e-394e-4bf1-8f8a-04dfba2ff1c4 req-26c8e608-c554-4b69-b701-60a97d714ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Updated VIF entry in instance network info cache for port 1688d119-1bc8-410a-a80e-8536a113e986. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:41:22 np0005466030 nova_compute[230518]: 2025-10-02 12:41:22.083 2 DEBUG nova.network.neutron [req-01f8422e-394e-4bf1-8f8a-04dfba2ff1c4 req-26c8e608-c554-4b69-b701-60a97d714ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Updating instance_info_cache with network_info: [{"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:41:22 np0005466030 podman[273164]: 2025-10-02 12:41:22.108356271 +0000 UTC m=+0.072416058 container create 709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 08:41:22 np0005466030 nova_compute[230518]: 2025-10-02 12:41:22.123 2 DEBUG oslo_concurrency.lockutils [req-01f8422e-394e-4bf1-8f8a-04dfba2ff1c4 req-26c8e608-c554-4b69-b701-60a97d714ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4d8c4b3b-58c2-4d3d-863c-49b98333b84d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:41:22 np0005466030 podman[273164]: 2025-10-02 12:41:22.066168925 +0000 UTC m=+0.030228772 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:41:22 np0005466030 systemd[1]: Started libpod-conmon-709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f.scope.
Oct  2 08:41:22 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:41:22 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6e86809a8af64e19198f2ff488e9d14b554026026aa3351bf2e01639a78b89/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:41:22 np0005466030 podman[273164]: 2025-10-02 12:41:22.194858841 +0000 UTC m=+0.158918628 container init 709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:41:22 np0005466030 podman[273164]: 2025-10-02 12:41:22.199650722 +0000 UTC m=+0.163710479 container start 709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:41:22 np0005466030 neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc[273179]: [NOTICE]   (273183) : New worker (273185) forked
Oct  2 08:41:22 np0005466030 neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc[273179]: [NOTICE]   (273183) : Loading success.
Oct  2 08:41:22 np0005466030 nova_compute[230518]: 2025-10-02 12:41:22.434 2 DEBUG nova.compute.manager [req-415786d0-5d42-458d-8ce0-ab4e6c6350ae req-358cd621-b1e2-443c-86ed-fa220b1e456d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-plugged-007233fd-556d-43ce-97fa-0f19306ba0aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:41:22 np0005466030 nova_compute[230518]: 2025-10-02 12:41:22.436 2 DEBUG oslo_concurrency.lockutils [req-415786d0-5d42-458d-8ce0-ab4e6c6350ae req-358cd621-b1e2-443c-86ed-fa220b1e456d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:22 np0005466030 nova_compute[230518]: 2025-10-02 12:41:22.436 2 DEBUG oslo_concurrency.lockutils [req-415786d0-5d42-458d-8ce0-ab4e6c6350ae req-358cd621-b1e2-443c-86ed-fa220b1e456d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:22 np0005466030 nova_compute[230518]: 2025-10-02 12:41:22.437 2 DEBUG oslo_concurrency.lockutils [req-415786d0-5d42-458d-8ce0-ab4e6c6350ae req-358cd621-b1e2-443c-86ed-fa220b1e456d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:22 np0005466030 nova_compute[230518]: 2025-10-02 12:41:22.438 2 DEBUG nova.compute.manager [req-415786d0-5d42-458d-8ce0-ab4e6c6350ae req-358cd621-b1e2-443c-86ed-fa220b1e456d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Processing event network-vif-plugged-007233fd-556d-43ce-97fa-0f19306ba0aa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:41:23 np0005466030 nova_compute[230518]: 2025-10-02 12:41:23.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:23.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:23.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:23 np0005466030 nova_compute[230518]: 2025-10-02 12:41:23.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:41:24 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/468430068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.614 2 DEBUG nova.compute.manager [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-plugged-007233fd-556d-43ce-97fa-0f19306ba0aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.615 2 DEBUG oslo_concurrency.lockutils [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.615 2 DEBUG oslo_concurrency.lockutils [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.615 2 DEBUG oslo_concurrency.lockutils [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.616 2 DEBUG nova.compute.manager [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] No event matching network-vif-plugged-007233fd-556d-43ce-97fa-0f19306ba0aa in dict_keys([('network-vif-plugged', '1688d119-1bc8-410a-a80e-8536a113e986')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.616 2 WARNING nova.compute.manager [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received unexpected event network-vif-plugged-007233fd-556d-43ce-97fa-0f19306ba0aa for instance with vm_state building and task_state spawning.
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.616 2 DEBUG nova.compute.manager [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-plugged-1688d119-1bc8-410a-a80e-8536a113e986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.616 2 DEBUG oslo_concurrency.lockutils [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.616 2 DEBUG oslo_concurrency.lockutils [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.616 2 DEBUG oslo_concurrency.lockutils [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.616 2 DEBUG nova.compute.manager [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Processing event network-vif-plugged-1688d119-1bc8-410a-a80e-8536a113e986 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.617 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.622 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408884.6226764, 4d8c4b3b-58c2-4d3d-863c-49b98333b84d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.623 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] VM Resumed (Lifecycle Event)
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.625 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.628 2 INFO nova.virt.libvirt.driver [-] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Instance spawned successfully.
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.629 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.709 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.713 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.714 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.714 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.715 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.715 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.715 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 08:41:24 np0005466030 nova_compute[230518]: 2025-10-02 12:41:24.727 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:41:25 np0005466030 nova_compute[230518]: 2025-10-02 12:41:25.011 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 08:41:25 np0005466030 nova_compute[230518]: 2025-10-02 12:41:25.204 2 INFO nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Took 21.14 seconds to spawn the instance on the hypervisor.
Oct  2 08:41:25 np0005466030 nova_compute[230518]: 2025-10-02 12:41:25.205 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:41:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:41:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:25.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:41:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:41:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:25.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:41:25 np0005466030 nova_compute[230518]: 2025-10-02 12:41:25.393 2 INFO nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Took 22.52 seconds to build instance.
Oct  2 08:41:25 np0005466030 nova_compute[230518]: 2025-10-02 12:41:25.481 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:41:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:25.938 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:41:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:25.939 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:41:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:25.940 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:41:26 np0005466030 nova_compute[230518]: 2025-10-02 12:41:26.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:27 np0005466030 nova_compute[230518]: 2025-10-02 12:41:27.070 2 DEBUG nova.compute.manager [req-1a4c096e-05f6-4339-b95e-a3973b5e46c2 req-535b4325-2416-4ed7-8317-e8be878d0e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-plugged-1688d119-1bc8-410a-a80e-8536a113e986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:41:27 np0005466030 nova_compute[230518]: 2025-10-02 12:41:27.071 2 DEBUG oslo_concurrency.lockutils [req-1a4c096e-05f6-4339-b95e-a3973b5e46c2 req-535b4325-2416-4ed7-8317-e8be878d0e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:41:27 np0005466030 nova_compute[230518]: 2025-10-02 12:41:27.072 2 DEBUG oslo_concurrency.lockutils [req-1a4c096e-05f6-4339-b95e-a3973b5e46c2 req-535b4325-2416-4ed7-8317-e8be878d0e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:41:27 np0005466030 nova_compute[230518]: 2025-10-02 12:41:27.072 2 DEBUG oslo_concurrency.lockutils [req-1a4c096e-05f6-4339-b95e-a3973b5e46c2 req-535b4325-2416-4ed7-8317-e8be878d0e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:41:27 np0005466030 nova_compute[230518]: 2025-10-02 12:41:27.072 2 DEBUG nova.compute.manager [req-1a4c096e-05f6-4339-b95e-a3973b5e46c2 req-535b4325-2416-4ed7-8317-e8be878d0e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] No waiting events found dispatching network-vif-plugged-1688d119-1bc8-410a-a80e-8536a113e986 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:41:27 np0005466030 nova_compute[230518]: 2025-10-02 12:41:27.073 2 WARNING nova.compute.manager [req-1a4c096e-05f6-4339-b95e-a3973b5e46c2 req-535b4325-2416-4ed7-8317-e8be878d0e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received unexpected event network-vif-plugged-1688d119-1bc8-410a-a80e-8536a113e986 for instance with vm_state active and task_state None.
Oct  2 08:41:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:27.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:27.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.272 2 DEBUG oslo_concurrency.lockutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.274 2 DEBUG oslo_concurrency.lockutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.274 2 DEBUG oslo_concurrency.lockutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.275 2 DEBUG oslo_concurrency.lockutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.276 2 DEBUG oslo_concurrency.lockutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.277 2 INFO nova.compute.manager [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Terminating instance
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.279 2 DEBUG nova.compute.manager [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:28 np0005466030 kernel: tap007233fd-55 (unregistering): left promiscuous mode
Oct  2 08:41:28 np0005466030 NetworkManager[44960]: <info>  [1759408888.3305] device (tap007233fd-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:41:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:28Z|00454|binding|INFO|Releasing lport 007233fd-556d-43ce-97fa-0f19306ba0aa from this chassis (sb_readonly=0)
Oct  2 08:41:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:28Z|00455|binding|INFO|Setting lport 007233fd-556d-43ce-97fa-0f19306ba0aa down in Southbound
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:28Z|00456|binding|INFO|Removing iface tap007233fd-55 ovn-installed in OVS
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:28 np0005466030 kernel: tap1688d119-1b (unregistering): left promiscuous mode
Oct  2 08:41:28 np0005466030 NetworkManager[44960]: <info>  [1759408888.3721] device (tap1688d119-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:28Z|00457|binding|INFO|Releasing lport 1688d119-1bc8-410a-a80e-8536a113e986 from this chassis (sb_readonly=1)
Oct  2 08:41:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:28Z|00458|binding|INFO|Removing iface tap1688d119-1b ovn-installed in OVS
Oct  2 08:41:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:28Z|00459|if_status|INFO|Dropped 1 log messages in last 1267 seconds (most recently, 1267 seconds ago) due to excessive rate
Oct  2 08:41:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:28Z|00460|if_status|INFO|Not setting lport 1688d119-1bc8-410a-a80e-8536a113e986 down as sb is readonly
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:28 np0005466030 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Oct  2 08:41:28 np0005466030 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000006e.scope: Consumed 4.745s CPU time.
Oct  2 08:41:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:28Z|00461|binding|INFO|Setting lport 1688d119-1bc8-410a-a80e-8536a113e986 down in Southbound
Oct  2 08:41:28 np0005466030 systemd-machined[188247]: Machine qemu-54-instance-0000006e terminated.
Oct  2 08:41:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.441 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:1b:4f 10.100.0.118'], port_security=['fa:16:3e:15:1b:4f 10.100.0.118'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.118/24', 'neutron:device_id': '4d8c4b3b-58c2-4d3d-863c-49b98333b84d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb774493-1e03-4988-a332-4e7f3684ace8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a82ed194b379425aa5e1f31b993eee81', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a763430-a613-4e24-8301-a0068489d29b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30275ad8-63fb-492c-8ae6-6d69bb1e285c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=007233fd-556d-43ce-97fa-0f19306ba0aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:41:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.443 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 007233fd-556d-43ce-97fa-0f19306ba0aa in datapath fb774493-1e03-4988-a332-4e7f3684ace8 unbound from our chassis
Oct  2 08:41:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.447 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fb774493-1e03-4988-a332-4e7f3684ace8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  2 08:41:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.448 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4f3b01c1-6447-4a16-b559-2b5fd6024cc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:41:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.449 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8 namespace which is not needed anymore
Oct  2 08:41:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.486 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:fd:ef 10.100.1.129'], port_security=['fa:16:3e:df:fd:ef 10.100.1.129'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.129/24', 'neutron:device_id': '4d8c4b3b-58c2-4d3d-863c-49b98333b84d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6536e1f6-8914-462b-bd28-3b66d21243dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a82ed194b379425aa5e1f31b993eee81', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a763430-a613-4e24-8301-a0068489d29b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96f8db4a-8d74-4e13-9b5c-7bb0fe283c21, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=1688d119-1bc8-410a-a80e-8536a113e986) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:41:28 np0005466030 NetworkManager[44960]: <info>  [1759408888.5203] manager: (tap1688d119-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.538 2 INFO nova.virt.libvirt.driver [-] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Instance destroyed successfully.
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.539 2 DEBUG nova.objects.instance [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lazy-loading 'resources' on Instance uuid 4d8c4b3b-58c2-4d3d-863c-49b98333b84d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:28 np0005466030 neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8[273106]: [NOTICE]   (273110) : haproxy version is 2.8.14-c23fe91
Oct  2 08:41:28 np0005466030 neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8[273106]: [NOTICE]   (273110) : path to executable is /usr/sbin/haproxy
Oct  2 08:41:28 np0005466030 neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8[273106]: [WARNING]  (273110) : Exiting Master process...
Oct  2 08:41:28 np0005466030 neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8[273106]: [ALERT]    (273110) : Current worker (273112) exited with code 143 (Terminated)
Oct  2 08:41:28 np0005466030 neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8[273106]: [WARNING]  (273110) : All workers exited. Exiting... (0)
Oct  2 08:41:28 np0005466030 systemd[1]: libpod-a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e.scope: Deactivated successfully.
Oct  2 08:41:28 np0005466030 podman[273235]: 2025-10-02 12:41:28.67282147 +0000 UTC m=+0.067439092 container died a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:41:28 np0005466030 systemd[1]: var-lib-containers-storage-overlay-9ca4dd895614093da27e3e57abc7da7e11453af00534b25663f8904c6c64dc69-merged.mount: Deactivated successfully.
Oct  2 08:41:28 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e-userdata-shm.mount: Deactivated successfully.
Oct  2 08:41:28 np0005466030 podman[273235]: 2025-10-02 12:41:28.734403196 +0000 UTC m=+0.129020818 container cleanup a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:41:28 np0005466030 systemd[1]: libpod-conmon-a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e.scope: Deactivated successfully.
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.748 2 DEBUG nova.virt.libvirt.vif [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1784036592',display_name='tempest-ServersTestMultiNic-server-1784036592',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1784036592',id=110,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-ztoikflv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:41:25Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=4d8c4b3b-58c2-4d3d-863c-49b98333b84d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.750 2 DEBUG nova.network.os_vif_util [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.751 2 DEBUG nova.network.os_vif_util [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:1b:4f,bridge_name='br-int',has_traffic_filtering=True,id=007233fd-556d-43ce-97fa-0f19306ba0aa,network=Network(fb774493-1e03-4988-a332-4e7f3684ace8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007233fd-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.752 2 DEBUG os_vif [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:1b:4f,bridge_name='br-int',has_traffic_filtering=True,id=007233fd-556d-43ce-97fa-0f19306ba0aa,network=Network(fb774493-1e03-4988-a332-4e7f3684ace8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007233fd-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.756 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap007233fd-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.769 2 INFO os_vif [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:1b:4f,bridge_name='br-int',has_traffic_filtering=True,id=007233fd-556d-43ce-97fa-0f19306ba0aa,network=Network(fb774493-1e03-4988-a332-4e7f3684ace8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007233fd-55')#033[00m
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.770 2 DEBUG nova.virt.libvirt.vif [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1784036592',display_name='tempest-ServersTestMultiNic-server-1784036592',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1784036592',id=110,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-ztoikflv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:41:25Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=4d8c4b3b-58c2-4d3d-863c-49b98333b84d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.771 2 DEBUG nova.network.os_vif_util [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.772 2 DEBUG nova.network.os_vif_util [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:fd:ef,bridge_name='br-int',has_traffic_filtering=True,id=1688d119-1bc8-410a-a80e-8536a113e986,network=Network(6536e1f6-8914-462b-bd28-3b66d21243dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1688d119-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.773 2 DEBUG os_vif [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:fd:ef,bridge_name='br-int',has_traffic_filtering=True,id=1688d119-1bc8-410a-a80e-8536a113e986,network=Network(6536e1f6-8914-462b-bd28-3b66d21243dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1688d119-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.775 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1688d119-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.782 2 INFO os_vif [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:fd:ef,bridge_name='br-int',has_traffic_filtering=True,id=1688d119-1bc8-410a-a80e-8536a113e986,network=Network(6536e1f6-8914-462b-bd28-3b66d21243dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1688d119-1b')#033[00m
Oct  2 08:41:28 np0005466030 podman[273264]: 2025-10-02 12:41:28.818837311 +0000 UTC m=+0.062245178 container remove a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:41:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.832 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[800705f3-4515-4951-9d8c-3173879dec86]: (4, ('Thu Oct  2 12:41:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8 (a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e)\na65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e\nThu Oct  2 12:41:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8 (a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e)\na65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.834 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[12c2da18-acf4-4f09-a047-95ea6cdc0e75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.836 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb774493-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:28 np0005466030 kernel: tapfb774493-10: left promiscuous mode
Oct  2 08:41:28 np0005466030 nova_compute[230518]: 2025-10-02 12:41:28.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.900 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[24b6f280-9b84-4907-a622-3d0731af5ecd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.933 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e849e774-814c-4187-958f-fdf3186412fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.934 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ec7d2a02-ee95-44d8-b9f2-bb3b8c425765]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.959 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6792df92-a878-44f6-a3cc-e4ff4afa4715]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674406, 'reachable_time': 39649, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273298, 'error': None, 'target': 'ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:28 np0005466030 systemd[1]: run-netns-ovnmeta\x2dfb774493\x2d1e03\x2d4988\x2da332\x2d4e7f3684ace8.mount: Deactivated successfully.
Oct  2 08:41:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.965 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:41:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.966 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4b523e-be9e-4008-a56b-f67d6ffd7d65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.967 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 1688d119-1bc8-410a-a80e-8536a113e986 in datapath 6536e1f6-8914-462b-bd28-3b66d21243dc unbound from our chassis#033[00m
Oct  2 08:41:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.969 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6536e1f6-8914-462b-bd28-3b66d21243dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:41:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.971 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[14abc2e3-0362-4f7b-be39-9faeda4be615]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.972 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc namespace which is not needed anymore#033[00m
Oct  2 08:41:29 np0005466030 neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc[273179]: [NOTICE]   (273183) : haproxy version is 2.8.14-c23fe91
Oct  2 08:41:29 np0005466030 neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc[273179]: [NOTICE]   (273183) : path to executable is /usr/sbin/haproxy
Oct  2 08:41:29 np0005466030 neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc[273179]: [WARNING]  (273183) : Exiting Master process...
Oct  2 08:41:29 np0005466030 neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc[273179]: [ALERT]    (273183) : Current worker (273185) exited with code 143 (Terminated)
Oct  2 08:41:29 np0005466030 neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc[273179]: [WARNING]  (273183) : All workers exited. Exiting... (0)
Oct  2 08:41:29 np0005466030 systemd[1]: libpod-709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f.scope: Deactivated successfully.
Oct  2 08:41:29 np0005466030 podman[273316]: 2025-10-02 12:41:29.147185226 +0000 UTC m=+0.046484773 container died 709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:41:29 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f-userdata-shm.mount: Deactivated successfully.
Oct  2 08:41:29 np0005466030 systemd[1]: var-lib-containers-storage-overlay-7e6e86809a8af64e19198f2ff488e9d14b554026026aa3351bf2e01639a78b89-merged.mount: Deactivated successfully.
Oct  2 08:41:29 np0005466030 podman[273316]: 2025-10-02 12:41:29.186876434 +0000 UTC m=+0.086175941 container cleanup 709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:41:29 np0005466030 systemd[1]: libpod-conmon-709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f.scope: Deactivated successfully.
Oct  2 08:41:29 np0005466030 nova_compute[230518]: 2025-10-02 12:41:29.219 2 DEBUG nova.compute.manager [req-bf61f7a5-00b2-42c9-ad52-2875eaf9304a req-2a372d3a-b569-41fb-8fb7-753e174de89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-unplugged-007233fd-556d-43ce-97fa-0f19306ba0aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:41:29 np0005466030 nova_compute[230518]: 2025-10-02 12:41:29.220 2 DEBUG oslo_concurrency.lockutils [req-bf61f7a5-00b2-42c9-ad52-2875eaf9304a req-2a372d3a-b569-41fb-8fb7-753e174de89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:29 np0005466030 nova_compute[230518]: 2025-10-02 12:41:29.220 2 DEBUG oslo_concurrency.lockutils [req-bf61f7a5-00b2-42c9-ad52-2875eaf9304a req-2a372d3a-b569-41fb-8fb7-753e174de89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:29 np0005466030 nova_compute[230518]: 2025-10-02 12:41:29.220 2 DEBUG oslo_concurrency.lockutils [req-bf61f7a5-00b2-42c9-ad52-2875eaf9304a req-2a372d3a-b569-41fb-8fb7-753e174de89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:29 np0005466030 nova_compute[230518]: 2025-10-02 12:41:29.221 2 DEBUG nova.compute.manager [req-bf61f7a5-00b2-42c9-ad52-2875eaf9304a req-2a372d3a-b569-41fb-8fb7-753e174de89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] No waiting events found dispatching network-vif-unplugged-007233fd-556d-43ce-97fa-0f19306ba0aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:41:29 np0005466030 nova_compute[230518]: 2025-10-02 12:41:29.221 2 DEBUG nova.compute.manager [req-bf61f7a5-00b2-42c9-ad52-2875eaf9304a req-2a372d3a-b569-41fb-8fb7-753e174de89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-unplugged-007233fd-556d-43ce-97fa-0f19306ba0aa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:41:29 np0005466030 podman[273347]: 2025-10-02 12:41:29.262064068 +0000 UTC m=+0.053387490 container remove 709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:41:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:29.269 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d7f4a8d8-ac93-4b0b-8fd0-46d068d506b5]: (4, ('Thu Oct  2 12:41:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc (709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f)\n709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f\nThu Oct  2 12:41:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc (709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f)\n709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:29.271 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[052ce9ad-eab5-4602-b01a-d33e4373ca44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:29.272 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6536e1f6-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:29 np0005466030 nova_compute[230518]: 2025-10-02 12:41:29.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:29 np0005466030 kernel: tap6536e1f6-80: left promiscuous mode
Oct  2 08:41:29 np0005466030 nova_compute[230518]: 2025-10-02 12:41:29.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:29.280 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[196b8773-6f64-42fe-857a-3aaed6cc7075]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:29 np0005466030 nova_compute[230518]: 2025-10-02 12:41:29.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:29.301 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[80354593-2c1d-4beb-826c-9e4d8a41638b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:29.303 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[65aa31a5-ce90-4090-9915-96e5c4f568c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:29.319 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[192d45af-7707-4559-a3ed-fc299270fdb7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674504, 'reachable_time': 43527, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273363, 'error': None, 'target': 'ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:29.322 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:41:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:29.322 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[09f35639-1362-4489-a673-6fcc9abe9c6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:41:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:29.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:41:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:29.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:29 np0005466030 systemd[1]: run-netns-ovnmeta\x2d6536e1f6\x2d8914\x2d462b\x2dbd28\x2d3b66d21243dc.mount: Deactivated successfully.
Oct  2 08:41:30 np0005466030 nova_compute[230518]: 2025-10-02 12:41:30.286 2 INFO nova.virt.libvirt.driver [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Deleting instance files /var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d_del#033[00m
Oct  2 08:41:30 np0005466030 nova_compute[230518]: 2025-10-02 12:41:30.287 2 INFO nova.virt.libvirt.driver [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Deletion of /var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d_del complete#033[00m
Oct  2 08:41:30 np0005466030 nova_compute[230518]: 2025-10-02 12:41:30.736 2 INFO nova.compute.manager [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Took 2.46 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:41:30 np0005466030 nova_compute[230518]: 2025-10-02 12:41:30.738 2 DEBUG oslo.service.loopingcall [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:41:30 np0005466030 nova_compute[230518]: 2025-10-02 12:41:30.738 2 DEBUG nova.compute.manager [-] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:41:30 np0005466030 nova_compute[230518]: 2025-10-02 12:41:30.738 2 DEBUG nova.network.neutron [-] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:41:30 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:41:30 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:41:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:31.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:31.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:31 np0005466030 nova_compute[230518]: 2025-10-02 12:41:31.392 2 DEBUG nova.compute.manager [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-plugged-007233fd-556d-43ce-97fa-0f19306ba0aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:41:31 np0005466030 nova_compute[230518]: 2025-10-02 12:41:31.393 2 DEBUG oslo_concurrency.lockutils [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:31 np0005466030 nova_compute[230518]: 2025-10-02 12:41:31.394 2 DEBUG oslo_concurrency.lockutils [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:31 np0005466030 nova_compute[230518]: 2025-10-02 12:41:31.394 2 DEBUG oslo_concurrency.lockutils [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:31 np0005466030 nova_compute[230518]: 2025-10-02 12:41:31.395 2 DEBUG nova.compute.manager [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] No waiting events found dispatching network-vif-plugged-007233fd-556d-43ce-97fa-0f19306ba0aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:41:31 np0005466030 nova_compute[230518]: 2025-10-02 12:41:31.395 2 WARNING nova.compute.manager [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received unexpected event network-vif-plugged-007233fd-556d-43ce-97fa-0f19306ba0aa for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:41:31 np0005466030 nova_compute[230518]: 2025-10-02 12:41:31.396 2 DEBUG nova.compute.manager [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-unplugged-1688d119-1bc8-410a-a80e-8536a113e986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:41:31 np0005466030 nova_compute[230518]: 2025-10-02 12:41:31.396 2 DEBUG oslo_concurrency.lockutils [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:31 np0005466030 nova_compute[230518]: 2025-10-02 12:41:31.397 2 DEBUG oslo_concurrency.lockutils [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:31 np0005466030 nova_compute[230518]: 2025-10-02 12:41:31.397 2 DEBUG oslo_concurrency.lockutils [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:31 np0005466030 nova_compute[230518]: 2025-10-02 12:41:31.397 2 DEBUG nova.compute.manager [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] No waiting events found dispatching network-vif-unplugged-1688d119-1bc8-410a-a80e-8536a113e986 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:41:31 np0005466030 nova_compute[230518]: 2025-10-02 12:41:31.398 2 DEBUG nova.compute.manager [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-unplugged-1688d119-1bc8-410a-a80e-8536a113e986 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:41:31 np0005466030 nova_compute[230518]: 2025-10-02 12:41:31.398 2 DEBUG nova.compute.manager [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-plugged-1688d119-1bc8-410a-a80e-8536a113e986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:41:31 np0005466030 nova_compute[230518]: 2025-10-02 12:41:31.399 2 DEBUG oslo_concurrency.lockutils [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:31 np0005466030 nova_compute[230518]: 2025-10-02 12:41:31.399 2 DEBUG oslo_concurrency.lockutils [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:31 np0005466030 nova_compute[230518]: 2025-10-02 12:41:31.400 2 DEBUG oslo_concurrency.lockutils [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:31 np0005466030 nova_compute[230518]: 2025-10-02 12:41:31.400 2 DEBUG nova.compute.manager [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] No waiting events found dispatching network-vif-plugged-1688d119-1bc8-410a-a80e-8536a113e986 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:41:31 np0005466030 nova_compute[230518]: 2025-10-02 12:41:31.401 2 WARNING nova.compute.manager [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received unexpected event network-vif-plugged-1688d119-1bc8-410a-a80e-8536a113e986 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:41:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:33 np0005466030 nova_compute[230518]: 2025-10-02 12:41:33.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:33 np0005466030 nova_compute[230518]: 2025-10-02 12:41:33.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:41:33 np0005466030 nova_compute[230518]: 2025-10-02 12:41:33.115 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:41:33 np0005466030 nova_compute[230518]: 2025-10-02 12:41:33.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:33.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:33.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:33 np0005466030 nova_compute[230518]: 2025-10-02 12:41:33.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:33 np0005466030 podman[273415]: 2025-10-02 12:41:33.813319891 +0000 UTC m=+0.062942620 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:41:33 np0005466030 podman[273414]: 2025-10-02 12:41:33.845788602 +0000 UTC m=+0.095125122 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:41:34 np0005466030 nova_compute[230518]: 2025-10-02 12:41:34.346 2 DEBUG nova.compute.manager [req-20b33173-b24b-4db7-8559-f6dfa3afdbc7 req-5f735ab3-85f4-42bc-960c-780fdcd64687 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-deleted-1688d119-1bc8-410a-a80e-8536a113e986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:41:34 np0005466030 nova_compute[230518]: 2025-10-02 12:41:34.346 2 INFO nova.compute.manager [req-20b33173-b24b-4db7-8559-f6dfa3afdbc7 req-5f735ab3-85f4-42bc-960c-780fdcd64687 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Neutron deleted interface 1688d119-1bc8-410a-a80e-8536a113e986; detaching it from the instance and deleting it from the info cache
Oct  2 08:41:34 np0005466030 nova_compute[230518]: 2025-10-02 12:41:34.347 2 DEBUG nova.network.neutron [req-20b33173-b24b-4db7-8559-f6dfa3afdbc7 req-5f735ab3-85f4-42bc-960c-780fdcd64687 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Updating instance_info_cache with network_info: [{"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:41:34 np0005466030 nova_compute[230518]: 2025-10-02 12:41:34.634 2 DEBUG nova.compute.manager [req-20b33173-b24b-4db7-8559-f6dfa3afdbc7 req-5f735ab3-85f4-42bc-960c-780fdcd64687 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Detach interface failed, port_id=1688d119-1bc8-410a-a80e-8536a113e986, reason: Instance 4d8c4b3b-58c2-4d3d-863c-49b98333b84d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct  2 08:41:34 np0005466030 nova_compute[230518]: 2025-10-02 12:41:34.760 2 DEBUG nova.network.neutron [-] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:41:34 np0005466030 nova_compute[230518]: 2025-10-02 12:41:34.805 2 INFO nova.compute.manager [-] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Took 4.07 seconds to deallocate network for instance.
Oct  2 08:41:34 np0005466030 nova_compute[230518]: 2025-10-02 12:41:34.865 2 DEBUG oslo_concurrency.lockutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:41:34 np0005466030 nova_compute[230518]: 2025-10-02 12:41:34.866 2 DEBUG oslo_concurrency.lockutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:41:34 np0005466030 nova_compute[230518]: 2025-10-02 12:41:34.941 2 DEBUG oslo_concurrency.processutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:41:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:41:35 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/913548972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:41:35 np0005466030 nova_compute[230518]: 2025-10-02 12:41:35.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:35 np0005466030 nova_compute[230518]: 2025-10-02 12:41:35.355 2 DEBUG oslo_concurrency.processutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:41:35 np0005466030 nova_compute[230518]: 2025-10-02 12:41:35.364 2 DEBUG nova.compute.provider_tree [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:41:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:35.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:35.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:35 np0005466030 nova_compute[230518]: 2025-10-02 12:41:35.388 2 DEBUG nova.scheduler.client.report [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:41:35 np0005466030 nova_compute[230518]: 2025-10-02 12:41:35.450 2 DEBUG oslo_concurrency.lockutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:41:35 np0005466030 nova_compute[230518]: 2025-10-02 12:41:35.483 2 INFO nova.scheduler.client.report [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Deleted allocations for instance 4d8c4b3b-58c2-4d3d-863c-49b98333b84d
Oct  2 08:41:35 np0005466030 nova_compute[230518]: 2025-10-02 12:41:35.603 2 DEBUG oslo_concurrency.lockutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.329s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:41:36 np0005466030 nova_compute[230518]: 2025-10-02 12:41:36.493 2 DEBUG nova.compute.manager [req-e7e8dd66-dedf-4f02-84b9-fa901d803602 req-4bf538d8-9ed5-45ac-82c9-4dade05af019 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-deleted-007233fd-556d-43ce-97fa-0f19306ba0aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:41:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:41:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:37.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:41:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:37.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:38 np0005466030 nova_compute[230518]: 2025-10-02 12:41:38.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:38 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:38Z|00462|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct  2 08:41:38 np0005466030 nova_compute[230518]: 2025-10-02 12:41:38.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:38 np0005466030 nova_compute[230518]: 2025-10-02 12:41:38.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:41:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:39.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:41:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:39.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:41.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:41.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:43 np0005466030 nova_compute[230518]: 2025-10-02 12:41:43.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:41:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:43.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:41:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:43.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:43 np0005466030 nova_compute[230518]: 2025-10-02 12:41:43.533 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408888.5321271, 4d8c4b3b-58c2-4d3d-863c-49b98333b84d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:41:43 np0005466030 nova_compute[230518]: 2025-10-02 12:41:43.533 2 INFO nova.compute.manager [-] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] VM Stopped (Lifecycle Event)
Oct  2 08:41:43 np0005466030 nova_compute[230518]: 2025-10-02 12:41:43.671 2 DEBUG nova.compute.manager [None req-ce453aec-8cef-48f3-aa30-59570bb4cc74 - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:41:43 np0005466030 nova_compute[230518]: 2025-10-02 12:41:43.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:44 np0005466030 nova_compute[230518]: 2025-10-02 12:41:44.057 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:41:44 np0005466030 nova_compute[230518]: 2025-10-02 12:41:44.058 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:41:44 np0005466030 nova_compute[230518]: 2025-10-02 12:41:44.332 2 DEBUG nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:41:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:45.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:45.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:45 np0005466030 nova_compute[230518]: 2025-10-02 12:41:45.487 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:41:45 np0005466030 nova_compute[230518]: 2025-10-02 12:41:45.488 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:41:45 np0005466030 nova_compute[230518]: 2025-10-02 12:41:45.498 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:41:45 np0005466030 nova_compute[230518]: 2025-10-02 12:41:45.499 2 INFO nova.compute.claims [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:41:45 np0005466030 podman[273482]: 2025-10-02 12:41:45.851141418 +0000 UTC m=+0.087944066 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct  2 08:41:45 np0005466030 podman[273483]: 2025-10-02 12:41:45.85342398 +0000 UTC m=+0.092517901 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:41:46 np0005466030 nova_compute[230518]: 2025-10-02 12:41:46.401 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:41:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:41:46 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1208385054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:41:46 np0005466030 nova_compute[230518]: 2025-10-02 12:41:46.859 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:41:46 np0005466030 nova_compute[230518]: 2025-10-02 12:41:46.865 2 DEBUG nova.compute.provider_tree [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:41:46 np0005466030 nova_compute[230518]: 2025-10-02 12:41:46.990 2 DEBUG nova.scheduler.client.report [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:41:47 np0005466030 nova_compute[230518]: 2025-10-02 12:41:47.222 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:41:47 np0005466030 nova_compute[230518]: 2025-10-02 12:41:47.223 2 DEBUG nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:41:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:47.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:41:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:47.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:41:47 np0005466030 nova_compute[230518]: 2025-10-02 12:41:47.472 2 DEBUG nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:41:47 np0005466030 nova_compute[230518]: 2025-10-02 12:41:47.472 2 DEBUG nova.network.neutron [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:41:47 np0005466030 nova_compute[230518]: 2025-10-02 12:41:47.595 2 INFO nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:41:47 np0005466030 nova_compute[230518]: 2025-10-02 12:41:47.708 2 DEBUG nova.policy [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3151966e941f4652ba984616bfa760c7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f7e2edef094b4ba5a56a5ec5ffce911e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:41:47 np0005466030 nova_compute[230518]: 2025-10-02 12:41:47.790 2 DEBUG nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:41:48 np0005466030 nova_compute[230518]: 2025-10-02 12:41:48.111 2 DEBUG nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:41:48 np0005466030 nova_compute[230518]: 2025-10-02 12:41:48.112 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:41:48 np0005466030 nova_compute[230518]: 2025-10-02 12:41:48.113 2 INFO nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Creating image(s)
Oct  2 08:41:48 np0005466030 nova_compute[230518]: 2025-10-02 12:41:48.156 2 DEBUG nova.storage.rbd_utils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:41:48 np0005466030 nova_compute[230518]: 2025-10-02 12:41:48.196 2 DEBUG nova.storage.rbd_utils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:41:48 np0005466030 nova_compute[230518]: 2025-10-02 12:41:48.236 2 DEBUG nova.storage.rbd_utils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:41:48 np0005466030 nova_compute[230518]: 2025-10-02 12:41:48.240 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:41:48 np0005466030 nova_compute[230518]: 2025-10-02 12:41:48.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:41:48 np0005466030 nova_compute[230518]: 2025-10-02 12:41:48.304 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:41:48 np0005466030 nova_compute[230518]: 2025-10-02 12:41:48.305 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:41:48 np0005466030 nova_compute[230518]: 2025-10-02 12:41:48.305 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:41:48 np0005466030 nova_compute[230518]: 2025-10-02 12:41:48.306 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:41:48 np0005466030 nova_compute[230518]: 2025-10-02 12:41:48.343 2 DEBUG nova.storage.rbd_utils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:41:48 np0005466030 nova_compute[230518]: 2025-10-02 12:41:48.349 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:41:48 np0005466030 nova_compute[230518]: 2025-10-02 12:41:48.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:48Z|00463|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct  2 08:41:49 np0005466030 nova_compute[230518]: 2025-10-02 12:41:49.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:49 np0005466030 nova_compute[230518]: 2025-10-02 12:41:49.130 2 DEBUG nova.network.neutron [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Successfully created port: 8eb9e971-5920-4103-9ba9-c0846182952d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:41:49 np0005466030 nova_compute[230518]: 2025-10-02 12:41:49.164 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.815s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:49 np0005466030 nova_compute[230518]: 2025-10-02 12:41:49.234 2 DEBUG nova.storage.rbd_utils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] resizing rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:41:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:49.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:49.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:49 np0005466030 nova_compute[230518]: 2025-10-02 12:41:49.426 2 DEBUG nova.objects.instance [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'migration_context' on Instance uuid 1f9101c6-f4d8-46c7-8884-386f9f08e6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:41:49 np0005466030 nova_compute[230518]: 2025-10-02 12:41:49.531 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:41:49 np0005466030 nova_compute[230518]: 2025-10-02 12:41:49.532 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Ensure instance console log exists: /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:41:49 np0005466030 nova_compute[230518]: 2025-10-02 12:41:49.532 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:49 np0005466030 nova_compute[230518]: 2025-10-02 12:41:49.532 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:49 np0005466030 nova_compute[230518]: 2025-10-02 12:41:49.533 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:50 np0005466030 nova_compute[230518]: 2025-10-02 12:41:50.350 2 DEBUG nova.network.neutron [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Successfully updated port: 8eb9e971-5920-4103-9ba9-c0846182952d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:41:50 np0005466030 nova_compute[230518]: 2025-10-02 12:41:50.373 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:41:50 np0005466030 nova_compute[230518]: 2025-10-02 12:41:50.374 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquired lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:41:50 np0005466030 nova_compute[230518]: 2025-10-02 12:41:50.374 2 DEBUG nova.network.neutron [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:41:50 np0005466030 nova_compute[230518]: 2025-10-02 12:41:50.629 2 DEBUG nova.network.neutron [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:41:50 np0005466030 nova_compute[230518]: 2025-10-02 12:41:50.831 2 DEBUG nova.compute.manager [req-616fd017-c027-4eb5-9ecc-bc34bb12a7b1 req-8d744f18-86b1-4ec2-866f-2fb262c526b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-changed-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:41:50 np0005466030 nova_compute[230518]: 2025-10-02 12:41:50.831 2 DEBUG nova.compute.manager [req-616fd017-c027-4eb5-9ecc-bc34bb12a7b1 req-8d744f18-86b1-4ec2-866f-2fb262c526b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Refreshing instance network info cache due to event network-changed-8eb9e971-5920-4103-9ba9-c0846182952d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:41:50 np0005466030 nova_compute[230518]: 2025-10-02 12:41:50.831 2 DEBUG oslo_concurrency.lockutils [req-616fd017-c027-4eb5-9ecc-bc34bb12a7b1 req-8d744f18-86b1-4ec2-866f-2fb262c526b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:41:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:51.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:41:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:51.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:41:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.794 2 DEBUG nova.network.neutron [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updating instance_info_cache with network_info: [{"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.823 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Releasing lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.824 2 DEBUG nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Instance network_info: |[{"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.824 2 DEBUG oslo_concurrency.lockutils [req-616fd017-c027-4eb5-9ecc-bc34bb12a7b1 req-8d744f18-86b1-4ec2-866f-2fb262c526b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.825 2 DEBUG nova.network.neutron [req-616fd017-c027-4eb5-9ecc-bc34bb12a7b1 req-8d744f18-86b1-4ec2-866f-2fb262c526b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Refreshing network info cache for port 8eb9e971-5920-4103-9ba9-c0846182952d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.828 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Start _get_guest_xml network_info=[{"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.832 2 WARNING nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.839 2 DEBUG nova.virt.libvirt.host [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.840 2 DEBUG nova.virt.libvirt.host [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.846 2 DEBUG nova.virt.libvirt.host [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.847 2 DEBUG nova.virt.libvirt.host [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.848 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.848 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.849 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.849 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.849 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.849 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.849 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.850 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.850 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.850 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.850 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.850 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:41:51 np0005466030 nova_compute[230518]: 2025-10-02 12:41:51.853 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:41:52 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/281551588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.289 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.312 2 DEBUG nova.storage.rbd_utils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.316 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:41:52 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4130622140' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.743 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.744 2 DEBUG nova.virt.libvirt.vif [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1408399936',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1408399936',id=112,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMPES/J98pyyGI+xC972/PJIY+D7X9SqOgh45Z1MPKQ6L1b0LXV7IORHQBCxCHGOsCWQssLDPZp4WJ8irI2AsYuAH5MVzTXEt9QIB2bOJQbGultCK6n77bAruhlsubzH7w==',key_name='tempest-keypair-372158786',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f7e2edef094b4ba5a56a5ec5ffce911e',ramdisk_id='',reservation_id='r-wr2knp7k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-1943710095',owner_user_name='tempest-AttachVolumeShelveTestJSON-1943710095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3151966e941f4652ba984616bfa760c7',uuid=1f9101c6-f4d8-46c7-8884-386f9f08e6fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.745 2 DEBUG nova.network.os_vif_util [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converting VIF {"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.746 2 DEBUG nova.network.os_vif_util [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
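The two records above show nova.network.os_vif_util translating the Neutron-supplied VIF dict into a VIFOpenVSwitch object. A minimal illustrative reduction of that field mapping, assuming only the key names visible in the logged dict (this is not Nova's actual implementation):

```python
# Sketch: extract the fields that end up on the logged VIFOpenVSwitch object
# from the VIF dict shown above. Illustrative only, not nova's real code path.

def vif_to_ovs_fields(vif: dict) -> dict:
    details = vif.get("details", {})
    return {
        "id": vif["id"],
        "address": vif["address"],
        "bridge_name": details.get("bridge_name"),
        # "port_filter": true in the dict becomes has_traffic_filtering=True
        "has_traffic_filtering": details.get("port_filter", False),
        # "devname" becomes the tap device name (vif_name)
        "vif_name": vif["devname"],
        "active": vif.get("active", False),
    }

# Values copied from the log record above.
vif = {
    "id": "8eb9e971-5920-4103-9ba9-c0846182952d",
    "address": "fa:16:3e:43:40:72",
    "devname": "tap8eb9e971-59",
    "active": False,
    "details": {"port_filter": True, "bridge_name": "br-int"},
}
fields = vif_to_ovs_fields(vif)
```

The resulting dict mirrors the `VIFOpenVSwitch(...)` attributes printed in the "Converted object" line.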
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.747 2 DEBUG nova.objects.instance [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f9101c6-f4d8-46c7-8884-386f9f08e6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.773 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:41:52 np0005466030 nova_compute[230518]:  <uuid>1f9101c6-f4d8-46c7-8884-386f9f08e6fb</uuid>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:  <name>instance-00000070</name>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <nova:name>tempest-AttachVolumeShelveTestJSON-server-1408399936</nova:name>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:41:51</nova:creationTime>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:41:52 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:        <nova:user uuid="3151966e941f4652ba984616bfa760c7">tempest-AttachVolumeShelveTestJSON-1943710095-project-member</nova:user>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:        <nova:project uuid="f7e2edef094b4ba5a56a5ec5ffce911e">tempest-AttachVolumeShelveTestJSON-1943710095</nova:project>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:        <nova:port uuid="8eb9e971-5920-4103-9ba9-c0846182952d">
Oct  2 08:41:52 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <entry name="serial">1f9101c6-f4d8-46c7-8884-386f9f08e6fb</entry>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <entry name="uuid">1f9101c6-f4d8-46c7-8884-386f9f08e6fb</entry>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk">
Oct  2 08:41:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:41:52 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk.config">
Oct  2 08:41:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:41:52 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:43:40:72"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <target dev="tap8eb9e971-59"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/console.log" append="off"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:41:52 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:41:52 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:41:52 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:41:52 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
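The domain XML dumped above can be inspected programmatically, for example to list each RBD-backed disk and its Ceph monitor hosts. A standard-library sketch against a reduced copy of the logged XML (the full document carries more elements than reproduced here):

```python
import xml.etree.ElementTree as ET

# Reduced copy of the <devices> section from the domain XML logged above.
xml = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk">
        <host name="192.168.122.100" port="6789"/>
        <host name="192.168.122.102" port="6789"/>
        <host name="192.168.122.101" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>
"""

root = ET.fromstring(xml)
disks = []
for disk in root.findall("./devices/disk"):
    src = disk.find("source")
    # Collect (rbd image name, list of monitor host addresses) per disk.
    hosts = [h.get("name") for h in src.findall("host")]
    disks.append((src.get("name"), hosts))
```

The same traversal works on the full XML, where it would also surface the `disk.config` cdrom device.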
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.774 2 DEBUG nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Preparing to wait for external event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.775 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.775 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.776 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.777 2 DEBUG nova.virt.libvirt.vif [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1408399936',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1408399936',id=112,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMPES/J98pyyGI+xC972/PJIY+D7X9SqOgh45Z1MPKQ6L1b0LXV7IORHQBCxCHGOsCWQssLDPZp4WJ8irI2AsYuAH5MVzTXEt9QIB2bOJQbGultCK6n77bAruhlsubzH7w==',key_name='tempest-keypair-372158786',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f7e2edef094b4ba5a56a5ec5ffce911e',ramdisk_id='',reservation_id='r-wr2knp7k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-1943710095',owner_user_name='tempest-AttachVolumeShelveTestJSON-1943710095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3151966e941f4652ba984616bfa760c7',uuid=1f9101c6-f4d8-46c7-8884-386f9f08e6fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.778 2 DEBUG nova.network.os_vif_util [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converting VIF {"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.779 2 DEBUG nova.network.os_vif_util [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.779 2 DEBUG os_vif [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.781 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.782 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.786 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8eb9e971-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.787 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8eb9e971-59, col_values=(('external_ids', {'iface-id': '8eb9e971-5920-4103-9ba9-c0846182952d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:40:72', 'vm-uuid': '1f9101c6-f4d8-46c7-8884-386f9f08e6fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:52 np0005466030 NetworkManager[44960]: <info>  [1759408912.7897] manager: (tap8eb9e971-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.795 2 INFO os_vif [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59')#033[00m
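The plug sequence above runs an OVSDB transaction of AddBridgeCommand, AddPortCommand, and DbSetCommand that attaches the tap device to br-int and stamps its Interface row with the Neutron port id. os-vif speaks the OVSDB protocol directly via ovsdbapp rather than shelling out, but the same effect can be sketched as a single ovs-vsctl argv (the helper function is invented for illustration; values are copied from the logged transaction):

```python
# Sketch: build the ovs-vsctl invocation roughly equivalent to the
# AddPortCommand + DbSetCommand transaction logged above. Illustrative only.

def ovs_plug_cmd(bridge, port, iface_id, mac, vm_uuid):
    # external_ids keys as they appear in the logged DbSetCommand.
    ext_ids = {
        "iface-id": iface_id,
        "iface-status": "active",
        "attached-mac": mac,
        "vm-uuid": vm_uuid,
    }
    cmd = ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
           "--", "set", "Interface", port]
    cmd += [f"external_ids:{k}={v}" for k, v in ext_ids.items()]
    return cmd

cmd = ovs_plug_cmd("br-int", "tap8eb9e971-59",
                   "8eb9e971-5920-4103-9ba9-c0846182952d",
                   "fa:16:3e:43:40:72",
                   "1f9101c6-f4d8-46c7-8884-386f9f08e6fb")
```

`--may-exist` corresponds to the `may_exist=True` flag in AddPortCommand, which is why the earlier AddBridgeCommand for the already-present br-int reported "Transaction caused no change".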
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.891 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.892 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.892 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] No VIF found with MAC fa:16:3e:43:40:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.893 2 INFO nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Using config drive#033[00m
Oct  2 08:41:52 np0005466030 nova_compute[230518]: 2025-10-02 12:41:52.926 2 DEBUG nova.storage.rbd_utils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:41:53 np0005466030 nova_compute[230518]: 2025-10-02 12:41:53.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:53 np0005466030 nova_compute[230518]: 2025-10-02 12:41:53.310 2 DEBUG nova.network.neutron [req-616fd017-c027-4eb5-9ecc-bc34bb12a7b1 req-8d744f18-86b1-4ec2-866f-2fb262c526b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updated VIF entry in instance network info cache for port 8eb9e971-5920-4103-9ba9-c0846182952d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:41:53 np0005466030 nova_compute[230518]: 2025-10-02 12:41:53.311 2 DEBUG nova.network.neutron [req-616fd017-c027-4eb5-9ecc-bc34bb12a7b1 req-8d744f18-86b1-4ec2-866f-2fb262c526b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updating instance_info_cache with network_info: [{"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:41:53 np0005466030 nova_compute[230518]: 2025-10-02 12:41:53.328 2 DEBUG oslo_concurrency.lockutils [req-616fd017-c027-4eb5-9ecc-bc34bb12a7b1 req-8d744f18-86b1-4ec2-866f-2fb262c526b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:41:53 np0005466030 nova_compute[230518]: 2025-10-02 12:41:53.351 2 INFO nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Creating config drive at /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/disk.config#033[00m
Oct  2 08:41:53 np0005466030 nova_compute[230518]: 2025-10-02 12:41:53.363 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfi9arr2f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:41:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:53.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:41:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:53.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:53 np0005466030 nova_compute[230518]: 2025-10-02 12:41:53.514 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfi9arr2f" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:53 np0005466030 nova_compute[230518]: 2025-10-02 12:41:53.561 2 DEBUG nova.storage.rbd_utils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:41:53 np0005466030 nova_compute[230518]: 2025-10-02 12:41:53.567 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/disk.config 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:55 np0005466030 nova_compute[230518]: 2025-10-02 12:41:55.024 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/disk.config 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:55 np0005466030 nova_compute[230518]: 2025-10-02 12:41:55.024 2 INFO nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Deleting local config drive /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/disk.config because it was imported into RBD.#033[00m
Oct  2 08:41:55 np0005466030 kernel: tap8eb9e971-59: entered promiscuous mode
Oct  2 08:41:55 np0005466030 NetworkManager[44960]: <info>  [1759408915.0772] manager: (tap8eb9e971-59): new Tun device (/org/freedesktop/NetworkManager/Devices/221)
Oct  2 08:41:55 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:55Z|00464|binding|INFO|Claiming lport 8eb9e971-5920-4103-9ba9-c0846182952d for this chassis.
Oct  2 08:41:55 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:55Z|00465|binding|INFO|8eb9e971-5920-4103-9ba9-c0846182952d: Claiming fa:16:3e:43:40:72 10.100.0.10
Oct  2 08:41:55 np0005466030 nova_compute[230518]: 2025-10-02 12:41:55.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.086 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:40:72 10.100.0.10'], port_security=['fa:16:3e:43:40:72 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1f9101c6-f4d8-46c7-8884-386f9f08e6fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-385a384c-5df0-4b04-b928-517a46df04f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f7e2edef094b4ba5a56a5ec5ffce911e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '041f6b5e-0e14-4ae5-9597-3a584e6f87e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5110437-1084-431d-86cb-6ad2d219bdc1, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8eb9e971-5920-4103-9ba9-c0846182952d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.089 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8eb9e971-5920-4103-9ba9-c0846182952d in datapath 385a384c-5df0-4b04-b928-517a46df04f4 bound to our chassis#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.093 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 385a384c-5df0-4b04-b928-517a46df04f4#033[00m
Oct  2 08:41:55 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:55Z|00466|binding|INFO|Setting lport 8eb9e971-5920-4103-9ba9-c0846182952d up in Southbound
Oct  2 08:41:55 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:55Z|00467|binding|INFO|Setting lport 8eb9e971-5920-4103-9ba9-c0846182952d ovn-installed in OVS
Oct  2 08:41:55 np0005466030 nova_compute[230518]: 2025-10-02 12:41:55.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.105 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[56260993-5067-498b-b413-b2a26cc02d54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.106 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap385a384c-51 in ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:41:55 np0005466030 nova_compute[230518]: 2025-10-02 12:41:55.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.108 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap385a384c-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.108 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bf99257a-226c-475c-b630-8806c3235eec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.110 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a759b20d-daa0-4c64-bc2f-1b9befb2ec6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:55 np0005466030 systemd-machined[188247]: New machine qemu-55-instance-00000070.
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.124 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee58962-c6a8-412e-87eb-43869423f8de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:55 np0005466030 systemd[1]: Started Virtual Machine qemu-55-instance-00000070.
Oct  2 08:41:55 np0005466030 systemd-udevd[273849]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.148 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a707992d-bed5-4dfb-90b6-2bd86759c55e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:55 np0005466030 NetworkManager[44960]: <info>  [1759408915.1531] device (tap8eb9e971-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:41:55 np0005466030 NetworkManager[44960]: <info>  [1759408915.1542] device (tap8eb9e971-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.177 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[1b61074f-db69-4e7e-9f52-ef2b520b1078]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:55 np0005466030 NetworkManager[44960]: <info>  [1759408915.1832] manager: (tap385a384c-50): new Veth device (/org/freedesktop/NetworkManager/Devices/222)
Oct  2 08:41:55 np0005466030 systemd-udevd[273854]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.184 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7312bba2-a12c-4df9-962e-240580b9f695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.234 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5f315c4b-15be-4675-97c0-79d1f8700ccb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.237 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ee71b434-9752-4a6d-9570-c72536a869fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:55 np0005466030 NetworkManager[44960]: <info>  [1759408915.2650] device (tap385a384c-50): carrier: link connected
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.274 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f3d080-aaa6-4a31-882e-9fbc42361eba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.293 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ead865a9-db99-4a20-8ee1-c361832a7e41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap385a384c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:d4:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677882, 'reachable_time': 32970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273879, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.308 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[beace7bd-caad-435f-b50d-bff5b6cf12f2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:d461'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677882, 'tstamp': 677882}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273880, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.326 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7d479b16-b35c-407c-ab59-45643434244c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap385a384c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:d4:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677882, 'reachable_time': 32970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273881, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.367 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9797885a-1506-4d7a-af3f-3efcfe701ed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:41:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:55.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:41:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:41:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:55 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:55.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.454 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[457b6dea-48b2-48f2-a424-f3a150b634a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.456 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap385a384c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.457 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.458 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap385a384c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:55 np0005466030 nova_compute[230518]: 2025-10-02 12:41:55.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:55 np0005466030 kernel: tap385a384c-50: entered promiscuous mode
Oct  2 08:41:55 np0005466030 NetworkManager[44960]: <info>  [1759408915.4614] manager: (tap385a384c-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/223)
Oct  2 08:41:55 np0005466030 nova_compute[230518]: 2025-10-02 12:41:55.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.464 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap385a384c-50, col_values=(('external_ids', {'iface-id': '12496c3c-f50d-4104-bfb7-81f1aa24617e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:41:55 np0005466030 nova_compute[230518]: 2025-10-02 12:41:55.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:55 np0005466030 ovn_controller[129257]: 2025-10-02T12:41:55Z|00468|binding|INFO|Releasing lport 12496c3c-f50d-4104-bfb7-81f1aa24617e from this chassis (sb_readonly=0)
Oct  2 08:41:55 np0005466030 nova_compute[230518]: 2025-10-02 12:41:55.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.489 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/385a384c-5df0-4b04-b928-517a46df04f4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/385a384c-5df0-4b04-b928-517a46df04f4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.490 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5e07c97b-df85-480c-9188-e9a3ee69de56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.491 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-385a384c-5df0-4b04-b928-517a46df04f4
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/385a384c-5df0-4b04-b928-517a46df04f4.pid.haproxy
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 385a384c-5df0-4b04-b928-517a46df04f4
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:41:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.492 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'env', 'PROCESS_TAG=haproxy-385a384c-5df0-4b04-b928-517a46df04f4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/385a384c-5df0-4b04-b928-517a46df04f4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:41:55 np0005466030 podman[273913]: 2025-10-02 12:41:55.9109901 +0000 UTC m=+0.061379231 container create 1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 08:41:55 np0005466030 systemd[1]: Started libpod-conmon-1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0.scope.
Oct  2 08:41:55 np0005466030 podman[273913]: 2025-10-02 12:41:55.87824678 +0000 UTC m=+0.028635931 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:41:56 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:41:56 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f06bb73ff4fd07214bf62b4ab474c808902a8097ac5ccfb1095b1198b7d0666/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:41:56 np0005466030 podman[273913]: 2025-10-02 12:41:56.162702485 +0000 UTC m=+0.313091666 container init 1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:41:56 np0005466030 podman[273913]: 2025-10-02 12:41:56.17111903 +0000 UTC m=+0.321508161 container start 1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 08:41:56 np0005466030 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[273945]: [NOTICE]   (273974) : New worker (273976) forked
Oct  2 08:41:56 np0005466030 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[273945]: [NOTICE]   (273974) : Loading success.
Oct  2 08:41:56 np0005466030 nova_compute[230518]: 2025-10-02 12:41:56.641 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408916.6401796, 1f9101c6-f4d8-46c7-8884-386f9f08e6fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:41:56 np0005466030 nova_compute[230518]: 2025-10-02 12:41:56.641 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] VM Started (Lifecycle Event)#033[00m
Oct  2 08:41:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:41:56 np0005466030 nova_compute[230518]: 2025-10-02 12:41:56.669 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:41:56 np0005466030 nova_compute[230518]: 2025-10-02 12:41:56.673 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408916.6404648, 1f9101c6-f4d8-46c7-8884-386f9f08e6fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:41:56 np0005466030 nova_compute[230518]: 2025-10-02 12:41:56.673 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:41:56 np0005466030 nova_compute[230518]: 2025-10-02 12:41:56.693 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:41:56 np0005466030 nova_compute[230518]: 2025-10-02 12:41:56.696 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:41:56 np0005466030 nova_compute[230518]: 2025-10-02 12:41:56.713 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.115 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:57.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:57.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.614 2 DEBUG nova.compute.manager [req-fb9770ab-1ed8-403f-b12b-86dcde3ce72b req-9a33f492-0768-4c3f-8c91-ce37c9bbad15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.615 2 DEBUG oslo_concurrency.lockutils [req-fb9770ab-1ed8-403f-b12b-86dcde3ce72b req-9a33f492-0768-4c3f-8c91-ce37c9bbad15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.615 2 DEBUG oslo_concurrency.lockutils [req-fb9770ab-1ed8-403f-b12b-86dcde3ce72b req-9a33f492-0768-4c3f-8c91-ce37c9bbad15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.616 2 DEBUG oslo_concurrency.lockutils [req-fb9770ab-1ed8-403f-b12b-86dcde3ce72b req-9a33f492-0768-4c3f-8c91-ce37c9bbad15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.616 2 DEBUG nova.compute.manager [req-fb9770ab-1ed8-403f-b12b-86dcde3ce72b req-9a33f492-0768-4c3f-8c91-ce37c9bbad15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Processing event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.617 2 DEBUG nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.621 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.623 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408917.6226275, 1f9101c6-f4d8-46c7-8884-386f9f08e6fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.623 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.628 2 INFO nova.virt.libvirt.driver [-] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Instance spawned successfully.#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.629 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.647 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.652 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.652 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.653 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.654 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.654 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.655 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.660 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.691 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.718 2 INFO nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Took 9.61 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.719 2 DEBUG nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.779 2 INFO nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Took 13.19 seconds to build instance.#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:57 np0005466030 nova_compute[230518]: 2025-10-02 12:41:57.816 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.083 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.083 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:41:58 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3482794670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.566 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.631 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.631 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.634 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.635 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.804 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.805 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4167MB free_disk=20.69430923461914GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.805 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.806 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.916 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7621a774-e0bc-4f4f-b900-c3608dd6835a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.917 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 1f9101c6-f4d8-46c7-8884-386f9f08e6fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.917 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.917 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:41:58 np0005466030 nova_compute[230518]: 2025-10-02 12:41:58.987 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:41:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:41:59 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1881695082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:41:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:41:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:41:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:59.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:41:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:41:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:41:59 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:59.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:41:59 np0005466030 nova_compute[230518]: 2025-10-02 12:41:59.430 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:41:59 np0005466030 nova_compute[230518]: 2025-10-02 12:41:59.436 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:41:59 np0005466030 nova_compute[230518]: 2025-10-02 12:41:59.458 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:41:59 np0005466030 nova_compute[230518]: 2025-10-02 12:41:59.506 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:41:59 np0005466030 nova_compute[230518]: 2025-10-02 12:41:59.507 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:59 np0005466030 nova_compute[230518]: 2025-10-02 12:41:59.713 2 DEBUG nova.compute.manager [req-a99b592c-a7d4-4151-8128-7f885bb4ff94 req-5373baa7-6a31-4028-a801-947ad0145c5b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:41:59 np0005466030 nova_compute[230518]: 2025-10-02 12:41:59.713 2 DEBUG oslo_concurrency.lockutils [req-a99b592c-a7d4-4151-8128-7f885bb4ff94 req-5373baa7-6a31-4028-a801-947ad0145c5b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:41:59 np0005466030 nova_compute[230518]: 2025-10-02 12:41:59.714 2 DEBUG oslo_concurrency.lockutils [req-a99b592c-a7d4-4151-8128-7f885bb4ff94 req-5373baa7-6a31-4028-a801-947ad0145c5b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:41:59 np0005466030 nova_compute[230518]: 2025-10-02 12:41:59.714 2 DEBUG oslo_concurrency.lockutils [req-a99b592c-a7d4-4151-8128-7f885bb4ff94 req-5373baa7-6a31-4028-a801-947ad0145c5b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:41:59 np0005466030 nova_compute[230518]: 2025-10-02 12:41:59.714 2 DEBUG nova.compute.manager [req-a99b592c-a7d4-4151-8128-7f885bb4ff94 req-5373baa7-6a31-4028-a801-947ad0145c5b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] No waiting events found dispatching network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:41:59 np0005466030 nova_compute[230518]: 2025-10-02 12:41:59.715 2 WARNING nova.compute.manager [req-a99b592c-a7d4-4151-8128-7f885bb4ff94 req-5373baa7-6a31-4028-a801-947ad0145c5b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received unexpected event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d for instance with vm_state active and task_state None.#033[00m
Oct  2 08:42:00 np0005466030 nova_compute[230518]: 2025-10-02 12:42:00.345 2 DEBUG nova.compute.manager [req-d0a65987-424e-45e6-944e-e8ce8999096f req-bf64ae4d-adb0-41dd-8f0b-0bed84f593c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-changed-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:00 np0005466030 nova_compute[230518]: 2025-10-02 12:42:00.346 2 DEBUG nova.compute.manager [req-d0a65987-424e-45e6-944e-e8ce8999096f req-bf64ae4d-adb0-41dd-8f0b-0bed84f593c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Refreshing instance network info cache due to event network-changed-8eb9e971-5920-4103-9ba9-c0846182952d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:42:00 np0005466030 nova_compute[230518]: 2025-10-02 12:42:00.346 2 DEBUG oslo_concurrency.lockutils [req-d0a65987-424e-45e6-944e-e8ce8999096f req-bf64ae4d-adb0-41dd-8f0b-0bed84f593c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:42:00 np0005466030 nova_compute[230518]: 2025-10-02 12:42:00.346 2 DEBUG oslo_concurrency.lockutils [req-d0a65987-424e-45e6-944e-e8ce8999096f req-bf64ae4d-adb0-41dd-8f0b-0bed84f593c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:42:00 np0005466030 nova_compute[230518]: 2025-10-02 12:42:00.346 2 DEBUG nova.network.neutron [req-d0a65987-424e-45e6-944e-e8ce8999096f req-bf64ae4d-adb0-41dd-8f0b-0bed84f593c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Refreshing network info cache for port 8eb9e971-5920-4103-9ba9-c0846182952d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:42:00 np0005466030 nova_compute[230518]: 2025-10-02 12:42:00.503 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:01 np0005466030 nova_compute[230518]: 2025-10-02 12:42:01.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:42:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:01.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:42:01 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:01.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:42:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:01 np0005466030 nova_compute[230518]: 2025-10-02 12:42:01.870 2 DEBUG nova.network.neutron [req-d0a65987-424e-45e6-944e-e8ce8999096f req-bf64ae4d-adb0-41dd-8f0b-0bed84f593c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updated VIF entry in instance network info cache for port 8eb9e971-5920-4103-9ba9-c0846182952d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:42:01 np0005466030 nova_compute[230518]: 2025-10-02 12:42:01.870 2 DEBUG nova.network.neutron [req-d0a65987-424e-45e6-944e-e8ce8999096f req-bf64ae4d-adb0-41dd-8f0b-0bed84f593c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updating instance_info_cache with network_info: [{"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:01 np0005466030 nova_compute[230518]: 2025-10-02 12:42:01.893 2 DEBUG oslo_concurrency.lockutils [req-d0a65987-424e-45e6-944e-e8ce8999096f req-bf64ae4d-adb0-41dd-8f0b-0bed84f593c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:42:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:01.901 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:42:01 np0005466030 nova_compute[230518]: 2025-10-02 12:42:01.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:01.903 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:42:02 np0005466030 nova_compute[230518]: 2025-10-02 12:42:02.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:03 np0005466030 nova_compute[230518]: 2025-10-02 12:42:03.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:03 np0005466030 nova_compute[230518]: 2025-10-02 12:42:03.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:03 np0005466030 nova_compute[230518]: 2025-10-02 12:42:03.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:03.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:03.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:04 np0005466030 nova_compute[230518]: 2025-10-02 12:42:04.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:04 np0005466030 podman[274032]: 2025-10-02 12:42:04.852033899 +0000 UTC m=+0.089001129 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:42:04 np0005466030 podman[274031]: 2025-10-02 12:42:04.896822897 +0000 UTC m=+0.134691246 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:42:05 np0005466030 nova_compute[230518]: 2025-10-02 12:42:05.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:05 np0005466030 nova_compute[230518]: 2025-10-02 12:42:05.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:42:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:05.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:05.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:05 np0005466030 nova_compute[230518]: 2025-10-02 12:42:05.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:42:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:07.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:42:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:42:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:42:07 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:07.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:42:07 np0005466030 nova_compute[230518]: 2025-10-02 12:42:07.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:08 np0005466030 nova_compute[230518]: 2025-10-02 12:42:08.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:08 np0005466030 nova_compute[230518]: 2025-10-02 12:42:08.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:42:08 np0005466030 nova_compute[230518]: 2025-10-02 12:42:08.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:42:08 np0005466030 nova_compute[230518]: 2025-10-02 12:42:08.306 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:42:08 np0005466030 nova_compute[230518]: 2025-10-02 12:42:08.306 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:42:08 np0005466030 nova_compute[230518]: 2025-10-02 12:42:08.307 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:42:08 np0005466030 nova_compute[230518]: 2025-10-02 12:42:08.307 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7621a774-e0bc-4f4f-b900-c3608dd6835a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:42:08 np0005466030 nova_compute[230518]: 2025-10-02 12:42:08.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:08.905 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:09.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:09.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:10 np0005466030 nova_compute[230518]: 2025-10-02 12:42:10.315 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "2d140186-e66c-4d55-b8df-5bb4214206d7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:10 np0005466030 nova_compute[230518]: 2025-10-02 12:42:10.316 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:10 np0005466030 nova_compute[230518]: 2025-10-02 12:42:10.346 2 DEBUG nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:42:10 np0005466030 nova_compute[230518]: 2025-10-02 12:42:10.383 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updating instance_info_cache with network_info: [{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:10 np0005466030 nova_compute[230518]: 2025-10-02 12:42:10.400 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:42:10 np0005466030 nova_compute[230518]: 2025-10-02 12:42:10.401 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:42:10 np0005466030 nova_compute[230518]: 2025-10-02 12:42:10.410 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:10 np0005466030 nova_compute[230518]: 2025-10-02 12:42:10.410 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:10 np0005466030 nova_compute[230518]: 2025-10-02 12:42:10.415 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:42:10 np0005466030 nova_compute[230518]: 2025-10-02 12:42:10.415 2 INFO nova.compute.claims [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:42:10 np0005466030 nova_compute[230518]: 2025-10-02 12:42:10.586 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:42:11 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2821617500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.101 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.109 2 DEBUG nova.compute.provider_tree [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.126 2 DEBUG nova.scheduler.client.report [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.146 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.147 2 DEBUG nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.185 2 DEBUG nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.186 2 DEBUG nova.network.neutron [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.210 2 INFO nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.243 2 DEBUG nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.321 2 DEBUG nova.policy [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ae7bcf1e6a3b4132a7068b0f863ca79c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.369 2 DEBUG nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.370 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.371 2 INFO nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Creating image(s)#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.402 2 DEBUG nova.storage.rbd_utils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2d140186-e66c-4d55-b8df-5bb4214206d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:42:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:42:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:11.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:11 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:11.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.441 2 DEBUG nova.storage.rbd_utils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2d140186-e66c-4d55-b8df-5bb4214206d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.469 2 DEBUG nova.storage.rbd_utils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2d140186-e66c-4d55-b8df-5bb4214206d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.473 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.536 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.537 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.538 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.538 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.564 2 DEBUG nova.storage.rbd_utils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2d140186-e66c-4d55-b8df-5bb4214206d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:42:11 np0005466030 nova_compute[230518]: 2025-10-02 12:42:11.567 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2d140186-e66c-4d55-b8df-5bb4214206d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:12 np0005466030 nova_compute[230518]: 2025-10-02 12:42:12.341 2 DEBUG nova.network.neutron [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Successfully created port: 76dd1a78-6a32-43c7-8633-51573580bfc9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:42:12 np0005466030 nova_compute[230518]: 2025-10-02 12:42:12.395 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:12 np0005466030 nova_compute[230518]: 2025-10-02 12:42:12.579 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2d140186-e66c-4d55-b8df-5bb4214206d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:12 np0005466030 nova_compute[230518]: 2025-10-02 12:42:12.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:12 np0005466030 nova_compute[230518]: 2025-10-02 12:42:12.866 2 DEBUG nova.storage.rbd_utils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] resizing rbd image 2d140186-e66c-4d55-b8df-5bb4214206d7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:42:13 np0005466030 nova_compute[230518]: 2025-10-02 12:42:13.205 2 DEBUG nova.objects.instance [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'migration_context' on Instance uuid 2d140186-e66c-4d55-b8df-5bb4214206d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:42:13 np0005466030 nova_compute[230518]: 2025-10-02 12:42:13.221 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:42:13 np0005466030 nova_compute[230518]: 2025-10-02 12:42:13.222 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Ensure instance console log exists: /var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:42:13 np0005466030 nova_compute[230518]: 2025-10-02 12:42:13.222 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:13 np0005466030 nova_compute[230518]: 2025-10-02 12:42:13.223 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:13 np0005466030 nova_compute[230518]: 2025-10-02 12:42:13.223 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:13 np0005466030 nova_compute[230518]: 2025-10-02 12:42:13.312 2 DEBUG nova.network.neutron [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Successfully updated port: 76dd1a78-6a32-43c7-8633-51573580bfc9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:42:13 np0005466030 nova_compute[230518]: 2025-10-02 12:42:13.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:13 np0005466030 nova_compute[230518]: 2025-10-02 12:42:13.340 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "refresh_cache-2d140186-e66c-4d55-b8df-5bb4214206d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:42:13 np0005466030 nova_compute[230518]: 2025-10-02 12:42:13.340 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquired lock "refresh_cache-2d140186-e66c-4d55-b8df-5bb4214206d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:42:13 np0005466030 nova_compute[230518]: 2025-10-02 12:42:13.340 2 DEBUG nova.network.neutron [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:42:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:42:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:13.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:13 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:13.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:13 np0005466030 nova_compute[230518]: 2025-10-02 12:42:13.444 2 DEBUG nova.compute.manager [req-c7d2fb4e-22b6-4ed5-80d8-a74cabf64aba req-68e8346c-aa09-4491-875b-71bbf03255ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Received event network-changed-76dd1a78-6a32-43c7-8633-51573580bfc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:13 np0005466030 nova_compute[230518]: 2025-10-02 12:42:13.444 2 DEBUG nova.compute.manager [req-c7d2fb4e-22b6-4ed5-80d8-a74cabf64aba req-68e8346c-aa09-4491-875b-71bbf03255ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Refreshing instance network info cache due to event network-changed-76dd1a78-6a32-43c7-8633-51573580bfc9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:42:13 np0005466030 nova_compute[230518]: 2025-10-02 12:42:13.445 2 DEBUG oslo_concurrency.lockutils [req-c7d2fb4e-22b6-4ed5-80d8-a74cabf64aba req-68e8346c-aa09-4491-875b-71bbf03255ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2d140186-e66c-4d55-b8df-5bb4214206d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:42:13 np0005466030 nova_compute[230518]: 2025-10-02 12:42:13.483 2 DEBUG nova.network.neutron [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:42:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:42:13Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:43:40:72 10.100.0.10
Oct  2 08:42:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:42:13Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:40:72 10.100.0.10
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.396 2 DEBUG nova.network.neutron [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Updating instance_info_cache with network_info: [{"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.432 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Releasing lock "refresh_cache-2d140186-e66c-4d55-b8df-5bb4214206d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.433 2 DEBUG nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Instance network_info: |[{"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.434 2 DEBUG oslo_concurrency.lockutils [req-c7d2fb4e-22b6-4ed5-80d8-a74cabf64aba req-68e8346c-aa09-4491-875b-71bbf03255ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2d140186-e66c-4d55-b8df-5bb4214206d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.434 2 DEBUG nova.network.neutron [req-c7d2fb4e-22b6-4ed5-80d8-a74cabf64aba req-68e8346c-aa09-4491-875b-71bbf03255ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Refreshing network info cache for port 76dd1a78-6a32-43c7-8633-51573580bfc9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.440 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Start _get_guest_xml network_info=[{"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.446 2 WARNING nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.453 2 DEBUG nova.virt.libvirt.host [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.454 2 DEBUG nova.virt.libvirt.host [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.457 2 DEBUG nova.virt.libvirt.host [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.458 2 DEBUG nova.virt.libvirt.host [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.460 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.461 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.462 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.462 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.463 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.463 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.464 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.464 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.465 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.466 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.467 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.467 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.473 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:42:14 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1654915291' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.919 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.951 2 DEBUG nova.storage.rbd_utils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2d140186-e66c-4d55-b8df-5bb4214206d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:42:14 np0005466030 nova_compute[230518]: 2025-10-02 12:42:14.956 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:42:15 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4124274956' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.433 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.435 2 DEBUG nova.virt.libvirt.vif [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:42:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-225030278',display_name='tempest-DeleteServersTestJSON-server-225030278',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-225030278',id=113,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-ao62a00r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:42:11Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=2d140186-e66c-4d55-b8df-5bb4214206d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.435 2 DEBUG nova.network.os_vif_util [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:42:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.437 2 DEBUG nova.network.os_vif_util [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:d1:89,bridge_name='br-int',has_traffic_filtering=True,id=76dd1a78-6a32-43c7-8633-51573580bfc9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76dd1a78-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:42:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:15.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:42:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:15 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:15.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.438 2 DEBUG nova.objects.instance [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d140186-e66c-4d55-b8df-5bb4214206d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.468 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:42:15 np0005466030 nova_compute[230518]:  <uuid>2d140186-e66c-4d55-b8df-5bb4214206d7</uuid>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:  <name>instance-00000071</name>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <nova:name>tempest-DeleteServersTestJSON-server-225030278</nova:name>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:42:14</nova:creationTime>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:42:15 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:        <nova:user uuid="ae7bcf1e6a3b4132a7068b0f863ca79c">tempest-DeleteServersTestJSON-1740298646-project-member</nova:user>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:        <nova:project uuid="58b2fa4ee0cd4b97be1b303c203be14f">tempest-DeleteServersTestJSON-1740298646</nova:project>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:        <nova:port uuid="76dd1a78-6a32-43c7-8633-51573580bfc9">
Oct  2 08:42:15 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <entry name="serial">2d140186-e66c-4d55-b8df-5bb4214206d7</entry>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <entry name="uuid">2d140186-e66c-4d55-b8df-5bb4214206d7</entry>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/2d140186-e66c-4d55-b8df-5bb4214206d7_disk">
Oct  2 08:42:15 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:42:15 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/2d140186-e66c-4d55-b8df-5bb4214206d7_disk.config">
Oct  2 08:42:15 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:42:15 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:53:d1:89"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <target dev="tap76dd1a78-6a"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7/console.log" append="off"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:42:15 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:42:15 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:42:15 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:42:15 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.469 2 DEBUG nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Preparing to wait for external event network-vif-plugged-76dd1a78-6a32-43c7-8633-51573580bfc9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.469 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.469 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.469 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.470 2 DEBUG nova.virt.libvirt.vif [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:42:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-225030278',display_name='tempest-DeleteServersTestJSON-server-225030278',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-225030278',id=113,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-ao62a00r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:42:11Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=2d140186-e66c-4d55-b8df-5bb4214206d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.470 2 DEBUG nova.network.os_vif_util [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.471 2 DEBUG nova.network.os_vif_util [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:d1:89,bridge_name='br-int',has_traffic_filtering=True,id=76dd1a78-6a32-43c7-8633-51573580bfc9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76dd1a78-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.471 2 DEBUG os_vif [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:d1:89,bridge_name='br-int',has_traffic_filtering=True,id=76dd1a78-6a32-43c7-8633-51573580bfc9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76dd1a78-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.479 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76dd1a78-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.480 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap76dd1a78-6a, col_values=(('external_ids', {'iface-id': '76dd1a78-6a32-43c7-8633-51573580bfc9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:d1:89', 'vm-uuid': '2d140186-e66c-4d55-b8df-5bb4214206d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:15 np0005466030 NetworkManager[44960]: <info>  [1759408935.4831] manager: (tap76dd1a78-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.492 2 INFO os_vif [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:d1:89,bridge_name='br-int',has_traffic_filtering=True,id=76dd1a78-6a32-43c7-8633-51573580bfc9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76dd1a78-6a')#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.604 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.605 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.605 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No VIF found with MAC fa:16:3e:53:d1:89, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.607 2 INFO nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Using config drive#033[00m
Oct  2 08:42:15 np0005466030 nova_compute[230518]: 2025-10-02 12:42:15.641 2 DEBUG nova.storage.rbd_utils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2d140186-e66c-4d55-b8df-5bb4214206d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:42:16 np0005466030 nova_compute[230518]: 2025-10-02 12:42:16.171 2 INFO nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Creating config drive at /var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7/disk.config#033[00m
Oct  2 08:42:16 np0005466030 nova_compute[230518]: 2025-10-02 12:42:16.186 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3te3f5ub execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:16 np0005466030 nova_compute[230518]: 2025-10-02 12:42:16.331 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3te3f5ub" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:16 np0005466030 nova_compute[230518]: 2025-10-02 12:42:16.374 2 DEBUG nova.storage.rbd_utils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2d140186-e66c-4d55-b8df-5bb4214206d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:42:16 np0005466030 nova_compute[230518]: 2025-10-02 12:42:16.378 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7/disk.config 2d140186-e66c-4d55-b8df-5bb4214206d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:16 np0005466030 podman[274382]: 2025-10-02 12:42:16.845660234 +0000 UTC m=+0.069897839 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:42:16 np0005466030 podman[274383]: 2025-10-02 12:42:16.845643283 +0000 UTC m=+0.070466877 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:42:16 np0005466030 nova_compute[230518]: 2025-10-02 12:42:16.876 2 DEBUG nova.network.neutron [req-c7d2fb4e-22b6-4ed5-80d8-a74cabf64aba req-68e8346c-aa09-4491-875b-71bbf03255ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Updated VIF entry in instance network info cache for port 76dd1a78-6a32-43c7-8633-51573580bfc9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:42:16 np0005466030 nova_compute[230518]: 2025-10-02 12:42:16.877 2 DEBUG nova.network.neutron [req-c7d2fb4e-22b6-4ed5-80d8-a74cabf64aba req-68e8346c-aa09-4491-875b-71bbf03255ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Updating instance_info_cache with network_info: [{"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:16 np0005466030 nova_compute[230518]: 2025-10-02 12:42:16.904 2 DEBUG oslo_concurrency.lockutils [req-c7d2fb4e-22b6-4ed5-80d8-a74cabf64aba req-68e8346c-aa09-4491-875b-71bbf03255ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2d140186-e66c-4d55-b8df-5bb4214206d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:42:17 np0005466030 nova_compute[230518]: 2025-10-02 12:42:17.424 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7/disk.config 2d140186-e66c-4d55-b8df-5bb4214206d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:17 np0005466030 nova_compute[230518]: 2025-10-02 12:42:17.425 2 INFO nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Deleting local config drive /var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7/disk.config because it was imported into RBD.#033[00m
Oct  2 08:42:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:17.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:42:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:17 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:17.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:17 np0005466030 kernel: tap76dd1a78-6a: entered promiscuous mode
Oct  2 08:42:17 np0005466030 NetworkManager[44960]: <info>  [1759408937.4903] manager: (tap76dd1a78-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/225)
Oct  2 08:42:17 np0005466030 nova_compute[230518]: 2025-10-02 12:42:17.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:42:17Z|00469|binding|INFO|Claiming lport 76dd1a78-6a32-43c7-8633-51573580bfc9 for this chassis.
Oct  2 08:42:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:42:17Z|00470|binding|INFO|76dd1a78-6a32-43c7-8633-51573580bfc9: Claiming fa:16:3e:53:d1:89 10.100.0.10
Oct  2 08:42:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:42:17Z|00471|binding|INFO|Setting lport 76dd1a78-6a32-43c7-8633-51573580bfc9 ovn-installed in OVS
Oct  2 08:42:17 np0005466030 nova_compute[230518]: 2025-10-02 12:42:17.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:17 np0005466030 nova_compute[230518]: 2025-10-02 12:42:17.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:17 np0005466030 systemd-udevd[274432]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:42:17 np0005466030 NetworkManager[44960]: <info>  [1759408937.5423] device (tap76dd1a78-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:42:17 np0005466030 NetworkManager[44960]: <info>  [1759408937.5443] device (tap76dd1a78-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:42:17 np0005466030 systemd-machined[188247]: New machine qemu-56-instance-00000071.
Oct  2 08:42:17 np0005466030 systemd[1]: Started Virtual Machine qemu-56-instance-00000071.
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.565 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:d1:89 10.100.0.10'], port_security=['fa:16:3e:53:d1:89 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2d140186-e66c-4d55-b8df-5bb4214206d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=76dd1a78-6a32-43c7-8633-51573580bfc9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:42:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:42:17Z|00472|binding|INFO|Setting lport 76dd1a78-6a32-43c7-8633-51573580bfc9 up in Southbound
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.566 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 76dd1a78-6a32-43c7-8633-51573580bfc9 in datapath fd4432c5-b907-49af-a666-2128c4085e24 bound to our chassis#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.568 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fd4432c5-b907-49af-a666-2128c4085e24#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.578 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[75a73259-a3d1-4d35-98af-3efab4f236b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.580 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfd4432c5-b1 in ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.581 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfd4432c5-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.581 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[abbede67-ceff-4299-aae5-dd5c3077434d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.582 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dd92ea71-bd0d-4a32-914b-a647f3a74fd0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.595 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[96a571d1-c3f7-49cb-b99b-26445a130f5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.611 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8cecb753-f805-4fb4-8f20-3bbb26f97bc3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.647 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2b975e65-cf0a-4bc2-b3a0-627f6b1c06f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:17 np0005466030 NetworkManager[44960]: <info>  [1759408937.6534] manager: (tapfd4432c5-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/226)
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.655 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0c46a747-59f9-4489-8f5f-96267b17a97b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.690 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[084d5906-5fd1-4a8b-8b0c-877ecd5912af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.693 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d2fd4508-b6d4-49ed-9a7b-f2a3cd64f0ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:17 np0005466030 NetworkManager[44960]: <info>  [1759408937.7136] device (tapfd4432c5-b0): carrier: link connected
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.718 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8db8357a-7622-49f7-a5d5-baa2a51dd0d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.736 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4323d6b8-f84f-4691-b2ee-8e860e88c1fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 144], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680127, 'reachable_time': 31841, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274468, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.750 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cbea308a-d2e2-4d82-abbc-3abf513dcc48]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:b3ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 680127, 'tstamp': 680127}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274469, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.770 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0cbdee6c-6ef5-4d23-bd05-c34dc7c6d08e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 144], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680127, 'reachable_time': 31841, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274470, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.798 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1824d677-df67-497a-8a52-f407ddae0575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.865 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a9a23a67-a535-4e71-9adb-bb20084f852a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.867 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.867 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.868 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd4432c5-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:17 np0005466030 NetworkManager[44960]: <info>  [1759408937.8703] manager: (tapfd4432c5-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/227)
Oct  2 08:42:17 np0005466030 kernel: tapfd4432c5-b0: entered promiscuous mode
Oct  2 08:42:17 np0005466030 nova_compute[230518]: 2025-10-02 12:42:17.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.872 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfd4432c5-b0, col_values=(('external_ids', {'iface-id': 'd2e0cd82-7c1f-4194-aaaf-514fe24ec2a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:17 np0005466030 nova_compute[230518]: 2025-10-02 12:42:17.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:42:17Z|00473|binding|INFO|Releasing lport d2e0cd82-7c1f-4194-aaaf-514fe24ec2a7 from this chassis (sb_readonly=0)
Oct  2 08:42:17 np0005466030 nova_compute[230518]: 2025-10-02 12:42:17.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.889 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.890 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[12d0c2cb-77b1-433f-98e0-defec1c63264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.890 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-fd4432c5-b907-49af-a666-2128c4085e24
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID fd4432c5-b907-49af-a666-2128c4085e24
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:42:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.892 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'env', 'PROCESS_TAG=haproxy-fd4432c5-b907-49af-a666-2128c4085e24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fd4432c5-b907-49af-a666-2128c4085e24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:18 np0005466030 podman[274544]: 2025-10-02 12:42:18.273733699 +0000 UTC m=+0.029781028 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:42:18 np0005466030 podman[274544]: 2025-10-02 12:42:18.577532621 +0000 UTC m=+0.333579990 container create 145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.622 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408938.6219187, 2d140186-e66c-4d55-b8df-5bb4214206d7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.622 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] VM Started (Lifecycle Event)#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.626 2 DEBUG nova.compute.manager [req-ce53124b-f8c9-4183-b639-c22f8ce7945d req-68b0acef-2782-4a08-84ac-811f03d1f9af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Received event network-vif-plugged-76dd1a78-6a32-43c7-8633-51573580bfc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.627 2 DEBUG oslo_concurrency.lockutils [req-ce53124b-f8c9-4183-b639-c22f8ce7945d req-68b0acef-2782-4a08-84ac-811f03d1f9af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.627 2 DEBUG oslo_concurrency.lockutils [req-ce53124b-f8c9-4183-b639-c22f8ce7945d req-68b0acef-2782-4a08-84ac-811f03d1f9af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.627 2 DEBUG oslo_concurrency.lockutils [req-ce53124b-f8c9-4183-b639-c22f8ce7945d req-68b0acef-2782-4a08-84ac-811f03d1f9af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.627 2 DEBUG nova.compute.manager [req-ce53124b-f8c9-4183-b639-c22f8ce7945d req-68b0acef-2782-4a08-84ac-811f03d1f9af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Processing event network-vif-plugged-76dd1a78-6a32-43c7-8633-51573580bfc9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.628 2 DEBUG nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.631 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.634 2 INFO nova.virt.libvirt.driver [-] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Instance spawned successfully.#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.634 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.690 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.694 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.709 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.710 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.710 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.710 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.711 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.711 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.744 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.744 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408938.624742, 2d140186-e66c-4d55-b8df-5bb4214206d7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:42:18 np0005466030 nova_compute[230518]: 2025-10-02 12:42:18.744 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:42:18 np0005466030 systemd[1]: Started libpod-conmon-145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7.scope.
Oct  2 08:42:18 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:42:18 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4e0803a26e8010599196fbef18d640c314c793171bc446bcaa2ed335a9319d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:42:18 np0005466030 podman[274544]: 2025-10-02 12:42:18.940113592 +0000 UTC m=+0.696160921 container init 145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:42:18 np0005466030 podman[274544]: 2025-10-02 12:42:18.949180588 +0000 UTC m=+0.705227907 container start 145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 08:42:18 np0005466030 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[274559]: [NOTICE]   (274564) : New worker (274566) forked
Oct  2 08:42:18 np0005466030 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[274559]: [NOTICE]   (274564) : Loading success.
Oct  2 08:42:19 np0005466030 nova_compute[230518]: 2025-10-02 12:42:19.091 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:42:19 np0005466030 nova_compute[230518]: 2025-10-02 12:42:19.097 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408938.6313567, 2d140186-e66c-4d55-b8df-5bb4214206d7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:42:19 np0005466030 nova_compute[230518]: 2025-10-02 12:42:19.098 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:42:19 np0005466030 nova_compute[230518]: 2025-10-02 12:42:19.272 2 INFO nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Took 7.90 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:42:19 np0005466030 nova_compute[230518]: 2025-10-02 12:42:19.273 2 DEBUG nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:42:19 np0005466030 nova_compute[230518]: 2025-10-02 12:42:19.285 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:42:19 np0005466030 nova_compute[230518]: 2025-10-02 12:42:19.288 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:42:19 np0005466030 nova_compute[230518]: 2025-10-02 12:42:19.356 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:42:19 np0005466030 nova_compute[230518]: 2025-10-02 12:42:19.412 2 INFO nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Took 9.02 seconds to build instance.#033[00m
Oct  2 08:42:19 np0005466030 nova_compute[230518]: 2025-10-02 12:42:19.441 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:42:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:19.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:42:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:19.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.229 2 DEBUG oslo_concurrency.lockutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "2d140186-e66c-4d55-b8df-5bb4214206d7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.229 2 DEBUG oslo_concurrency.lockutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.230 2 DEBUG oslo_concurrency.lockutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.230 2 DEBUG oslo_concurrency.lockutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.230 2 DEBUG oslo_concurrency.lockutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.231 2 INFO nova.compute.manager [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Terminating instance#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.231 2 DEBUG nova.compute.manager [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:20 np0005466030 kernel: tap76dd1a78-6a (unregistering): left promiscuous mode
Oct  2 08:42:20 np0005466030 NetworkManager[44960]: <info>  [1759408940.5579] device (tap76dd1a78-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:42:20 np0005466030 ovn_controller[129257]: 2025-10-02T12:42:20Z|00474|binding|INFO|Releasing lport 76dd1a78-6a32-43c7-8633-51573580bfc9 from this chassis (sb_readonly=0)
Oct  2 08:42:20 np0005466030 ovn_controller[129257]: 2025-10-02T12:42:20Z|00475|binding|INFO|Setting lport 76dd1a78-6a32-43c7-8633-51573580bfc9 down in Southbound
Oct  2 08:42:20 np0005466030 ovn_controller[129257]: 2025-10-02T12:42:20Z|00476|binding|INFO|Removing iface tap76dd1a78-6a ovn-installed in OVS
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:20.603 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:d1:89 10.100.0.10'], port_security=['fa:16:3e:53:d1:89 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2d140186-e66c-4d55-b8df-5bb4214206d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=76dd1a78-6a32-43c7-8633-51573580bfc9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:42:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:20.604 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 76dd1a78-6a32-43c7-8633-51573580bfc9 in datapath fd4432c5-b907-49af-a666-2128c4085e24 unbound from our chassis#033[00m
Oct  2 08:42:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:20.606 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd4432c5-b907-49af-a666-2128c4085e24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:42:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:20.607 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e1802ada-ce38-42a1-bdd1-0c22419a6c5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:20.607 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace which is not needed anymore#033[00m
Oct  2 08:42:20 np0005466030 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000071.scope: Deactivated successfully.
Oct  2 08:42:20 np0005466030 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000071.scope: Consumed 2.507s CPU time.
Oct  2 08:42:20 np0005466030 systemd-machined[188247]: Machine qemu-56-instance-00000071 terminated.
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.667 2 INFO nova.virt.libvirt.driver [-] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Instance destroyed successfully.#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.668 2 DEBUG nova.objects.instance [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'resources' on Instance uuid 2d140186-e66c-4d55-b8df-5bb4214206d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.722 2 DEBUG nova.compute.manager [req-0ce15241-1cb6-47ae-9e71-7fbb90cd47c4 req-97dca8b8-6fb7-4c21-beb2-378052ee3ae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Received event network-vif-plugged-76dd1a78-6a32-43c7-8633-51573580bfc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.722 2 DEBUG oslo_concurrency.lockutils [req-0ce15241-1cb6-47ae-9e71-7fbb90cd47c4 req-97dca8b8-6fb7-4c21-beb2-378052ee3ae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.722 2 DEBUG oslo_concurrency.lockutils [req-0ce15241-1cb6-47ae-9e71-7fbb90cd47c4 req-97dca8b8-6fb7-4c21-beb2-378052ee3ae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.723 2 DEBUG oslo_concurrency.lockutils [req-0ce15241-1cb6-47ae-9e71-7fbb90cd47c4 req-97dca8b8-6fb7-4c21-beb2-378052ee3ae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.723 2 DEBUG nova.compute.manager [req-0ce15241-1cb6-47ae-9e71-7fbb90cd47c4 req-97dca8b8-6fb7-4c21-beb2-378052ee3ae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] No waiting events found dispatching network-vif-plugged-76dd1a78-6a32-43c7-8633-51573580bfc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.723 2 WARNING nova.compute.manager [req-0ce15241-1cb6-47ae-9e71-7fbb90cd47c4 req-97dca8b8-6fb7-4c21-beb2-378052ee3ae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Received unexpected event network-vif-plugged-76dd1a78-6a32-43c7-8633-51573580bfc9 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.724 2 DEBUG nova.virt.libvirt.vif [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:42:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-225030278',display_name='tempest-DeleteServersTestJSON-server-225030278',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-225030278',id=113,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:42:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-ao62a00r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:42:19Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=2d140186-e66c-4d55-b8df-5bb4214206d7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.725 2 DEBUG nova.network.os_vif_util [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.725 2 DEBUG nova.network.os_vif_util [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:d1:89,bridge_name='br-int',has_traffic_filtering=True,id=76dd1a78-6a32-43c7-8633-51573580bfc9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76dd1a78-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.725 2 DEBUG os_vif [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:d1:89,bridge_name='br-int',has_traffic_filtering=True,id=76dd1a78-6a32-43c7-8633-51573580bfc9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76dd1a78-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.727 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76dd1a78-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:42:20 np0005466030 nova_compute[230518]: 2025-10-02 12:42:20.733 2 INFO os_vif [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:d1:89,bridge_name='br-int',has_traffic_filtering=True,id=76dd1a78-6a32-43c7-8633-51573580bfc9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76dd1a78-6a')#033[00m
Oct  2 08:42:20 np0005466030 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[274559]: [NOTICE]   (274564) : haproxy version is 2.8.14-c23fe91
Oct  2 08:42:20 np0005466030 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[274559]: [NOTICE]   (274564) : path to executable is /usr/sbin/haproxy
Oct  2 08:42:20 np0005466030 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[274559]: [WARNING]  (274564) : Exiting Master process...
Oct  2 08:42:20 np0005466030 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[274559]: [ALERT]    (274564) : Current worker (274566) exited with code 143 (Terminated)
Oct  2 08:42:20 np0005466030 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[274559]: [WARNING]  (274564) : All workers exited. Exiting... (0)
Oct  2 08:42:20 np0005466030 systemd[1]: libpod-145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7.scope: Deactivated successfully.
Oct  2 08:42:20 np0005466030 podman[274605]: 2025-10-02 12:42:20.82748971 +0000 UTC m=+0.135450820 container died 145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 08:42:21 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7-userdata-shm.mount: Deactivated successfully.
Oct  2 08:42:21 np0005466030 systemd[1]: var-lib-containers-storage-overlay-ac4e0803a26e8010599196fbef18d640c314c793171bc446bcaa2ed335a9319d-merged.mount: Deactivated successfully.
Oct  2 08:42:21 np0005466030 podman[274605]: 2025-10-02 12:42:21.116044844 +0000 UTC m=+0.424005964 container cleanup 145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:42:21 np0005466030 systemd[1]: libpod-conmon-145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7.scope: Deactivated successfully.
Oct  2 08:42:21 np0005466030 podman[274654]: 2025-10-02 12:42:21.268042303 +0000 UTC m=+0.126040264 container remove 145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:42:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:21.275 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef7e5ba-306c-4f06-b583-230d3f2f41ce]: (4, ('Thu Oct  2 12:42:20 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7)\n145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7\nThu Oct  2 12:42:21 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7)\n145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:21.277 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3b9cfc55-91a2-4e94-ab75-078c16ec55a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:21.278 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:21 np0005466030 nova_compute[230518]: 2025-10-02 12:42:21.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:21 np0005466030 kernel: tapfd4432c5-b0: left promiscuous mode
Oct  2 08:42:21 np0005466030 nova_compute[230518]: 2025-10-02 12:42:21.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:21.336 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2fec6b61-a647-459a-9e95-b7047fb41398]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:21 np0005466030 nova_compute[230518]: 2025-10-02 12:42:21.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:21.364 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3baa60e5-98d1-4f0f-961f-2a56a3c4742f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:21.365 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c24be682-42a2-4414-8d4b-d7103da11619]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:21.380 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[93f21ff8-c96a-4672-aa7e-f04ced098281]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680120, 'reachable_time': 34262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274669, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:21.382 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:42:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:21.382 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[22ba385a-a07c-4d59-94d5-b13e8ebe6b62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:21 np0005466030 systemd[1]: run-netns-ovnmeta\x2dfd4432c5\x2db907\x2d49af\x2da666\x2d2128c4085e24.mount: Deactivated successfully.
Oct  2 08:42:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:42:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:42:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:21.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:42:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:21 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:21.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e268 e268: 3 total, 3 up, 3 in
Oct  2 08:42:23 np0005466030 nova_compute[230518]: 2025-10-02 12:42:23.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:23 np0005466030 nova_compute[230518]: 2025-10-02 12:42:23.339 2 INFO nova.virt.libvirt.driver [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Deleting instance files /var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7_del#033[00m
Oct  2 08:42:23 np0005466030 nova_compute[230518]: 2025-10-02 12:42:23.339 2 INFO nova.virt.libvirt.driver [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Deletion of /var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7_del complete#033[00m
Oct  2 08:42:23 np0005466030 nova_compute[230518]: 2025-10-02 12:42:23.385 2 DEBUG nova.compute.manager [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Received event network-vif-unplugged-76dd1a78-6a32-43c7-8633-51573580bfc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:23 np0005466030 nova_compute[230518]: 2025-10-02 12:42:23.385 2 DEBUG oslo_concurrency.lockutils [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:23 np0005466030 nova_compute[230518]: 2025-10-02 12:42:23.386 2 DEBUG oslo_concurrency.lockutils [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:23 np0005466030 nova_compute[230518]: 2025-10-02 12:42:23.386 2 DEBUG oslo_concurrency.lockutils [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:23 np0005466030 nova_compute[230518]: 2025-10-02 12:42:23.386 2 DEBUG nova.compute.manager [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] No waiting events found dispatching network-vif-unplugged-76dd1a78-6a32-43c7-8633-51573580bfc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:42:23 np0005466030 nova_compute[230518]: 2025-10-02 12:42:23.386 2 DEBUG nova.compute.manager [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Received event network-vif-unplugged-76dd1a78-6a32-43c7-8633-51573580bfc9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:42:23 np0005466030 nova_compute[230518]: 2025-10-02 12:42:23.387 2 DEBUG nova.compute.manager [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Received event network-vif-plugged-76dd1a78-6a32-43c7-8633-51573580bfc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:23 np0005466030 nova_compute[230518]: 2025-10-02 12:42:23.387 2 DEBUG oslo_concurrency.lockutils [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:23 np0005466030 nova_compute[230518]: 2025-10-02 12:42:23.387 2 DEBUG oslo_concurrency.lockutils [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:23 np0005466030 nova_compute[230518]: 2025-10-02 12:42:23.387 2 DEBUG oslo_concurrency.lockutils [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:23 np0005466030 nova_compute[230518]: 2025-10-02 12:42:23.387 2 DEBUG nova.compute.manager [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] No waiting events found dispatching network-vif-plugged-76dd1a78-6a32-43c7-8633-51573580bfc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:42:23 np0005466030 nova_compute[230518]: 2025-10-02 12:42:23.388 2 WARNING nova.compute.manager [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Received unexpected event network-vif-plugged-76dd1a78-6a32-43c7-8633-51573580bfc9 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:42:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:42:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:23.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:42:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:23.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:24 np0005466030 nova_compute[230518]: 2025-10-02 12:42:24.122 2 INFO nova.compute.manager [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Took 3.89 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:42:24 np0005466030 nova_compute[230518]: 2025-10-02 12:42:24.123 2 DEBUG oslo.service.loopingcall [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:42:24 np0005466030 nova_compute[230518]: 2025-10-02 12:42:24.124 2 DEBUG nova.compute.manager [-] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:42:24 np0005466030 nova_compute[230518]: 2025-10-02 12:42:24.124 2 DEBUG nova.network.neutron [-] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:42:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:42:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:42:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:25.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:42:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:25 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:25.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:25 np0005466030 nova_compute[230518]: 2025-10-02 12:42:25.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:25.939 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:25.940 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:25.941 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:42:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:42:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:27.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:42:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:42:27 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:27.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:42:27 np0005466030 nova_compute[230518]: 2025-10-02 12:42:27.559 2 DEBUG nova.network.neutron [-] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:27 np0005466030 nova_compute[230518]: 2025-10-02 12:42:27.641 2 DEBUG nova.compute.manager [req-a7a91427-6a18-4af0-b537-81398ca96586 req-ce4d644e-6457-47d0-ab7d-02c29df47141 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Received event network-vif-deleted-76dd1a78-6a32-43c7-8633-51573580bfc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:27 np0005466030 nova_compute[230518]: 2025-10-02 12:42:27.641 2 INFO nova.compute.manager [req-a7a91427-6a18-4af0-b537-81398ca96586 req-ce4d644e-6457-47d0-ab7d-02c29df47141 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Neutron deleted interface 76dd1a78-6a32-43c7-8633-51573580bfc9; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:42:27 np0005466030 nova_compute[230518]: 2025-10-02 12:42:27.641 2 DEBUG nova.network.neutron [req-a7a91427-6a18-4af0-b537-81398ca96586 req-ce4d644e-6457-47d0-ab7d-02c29df47141 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:28 np0005466030 nova_compute[230518]: 2025-10-02 12:42:28.004 2 INFO nova.compute.manager [-] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Took 3.88 seconds to deallocate network for instance.#033[00m
Oct  2 08:42:28 np0005466030 nova_compute[230518]: 2025-10-02 12:42:28.013 2 DEBUG nova.compute.manager [req-a7a91427-6a18-4af0-b537-81398ca96586 req-ce4d644e-6457-47d0-ab7d-02c29df47141 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Detach interface failed, port_id=76dd1a78-6a32-43c7-8633-51573580bfc9, reason: Instance 2d140186-e66c-4d55-b8df-5bb4214206d7 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:42:28 np0005466030 nova_compute[230518]: 2025-10-02 12:42:28.238 2 DEBUG oslo_concurrency.lockutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:28 np0005466030 nova_compute[230518]: 2025-10-02 12:42:28.239 2 DEBUG oslo_concurrency.lockutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:28 np0005466030 nova_compute[230518]: 2025-10-02 12:42:28.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:28 np0005466030 nova_compute[230518]: 2025-10-02 12:42:28.756 2 DEBUG oslo_concurrency.processutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:42:29 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/104167137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:42:29 np0005466030 nova_compute[230518]: 2025-10-02 12:42:29.222 2 DEBUG oslo_concurrency.processutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:29 np0005466030 nova_compute[230518]: 2025-10-02 12:42:29.231 2 DEBUG nova.compute.provider_tree [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:42:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:42:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:29.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:42:29 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:29.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:42:29 np0005466030 nova_compute[230518]: 2025-10-02 12:42:29.686 2 DEBUG nova.scheduler.client.report [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:42:30 np0005466030 nova_compute[230518]: 2025-10-02 12:42:30.064 2 DEBUG oslo_concurrency.lockutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:30 np0005466030 nova_compute[230518]: 2025-10-02 12:42:30.554 2 INFO nova.scheduler.client.report [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Deleted allocations for instance 2d140186-e66c-4d55-b8df-5bb4214206d7#033[00m
Oct  2 08:42:30 np0005466030 nova_compute[230518]: 2025-10-02 12:42:30.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:30 np0005466030 nova_compute[230518]: 2025-10-02 12:42:30.990 2 DEBUG oslo_concurrency.lockutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e269 e269: 3 total, 3 up, 3 in
Oct  2 08:42:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:42:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:31 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:31.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:31.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:42:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:42:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:42:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:33 np0005466030 nova_compute[230518]: 2025-10-02 12:42:33.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:42:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:42:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:33.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:42:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:33 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:33.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:34 np0005466030 nova_compute[230518]: 2025-10-02 12:42:34.159 2 DEBUG oslo_concurrency.lockutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:34 np0005466030 nova_compute[230518]: 2025-10-02 12:42:34.159 2 DEBUG oslo_concurrency.lockutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:34 np0005466030 nova_compute[230518]: 2025-10-02 12:42:34.160 2 INFO nova.compute.manager [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Shelving#033[00m
Oct  2 08:42:34 np0005466030 nova_compute[230518]: 2025-10-02 12:42:34.187 2 DEBUG nova.virt.libvirt.driver [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:42:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:42:34 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1733491597' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:42:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:42:34 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1733491597' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:42:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:42:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:42:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:35.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:42:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:42:35 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:35.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:42:35 np0005466030 nova_compute[230518]: 2025-10-02 12:42:35.666 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408940.6654503, 2d140186-e66c-4d55-b8df-5bb4214206d7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:42:35 np0005466030 nova_compute[230518]: 2025-10-02 12:42:35.667 2 INFO nova.compute.manager [-] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:42:35 np0005466030 nova_compute[230518]: 2025-10-02 12:42:35.683 2 DEBUG nova.compute.manager [None req-928ec4c7-e91f-40c7-ad2c-1cc810f87b65 - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:42:35 np0005466030 nova_compute[230518]: 2025-10-02 12:42:35.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:35 np0005466030 podman[274825]: 2025-10-02 12:42:35.792666843 +0000 UTC m=+0.045775320 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:42:35 np0005466030 podman[274824]: 2025-10-02 12:42:35.826099315 +0000 UTC m=+0.080666248 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:42:36 np0005466030 kernel: tap8eb9e971-59 (unregistering): left promiscuous mode
Oct  2 08:42:36 np0005466030 NetworkManager[44960]: <info>  [1759408956.7228] device (tap8eb9e971-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:42:36 np0005466030 nova_compute[230518]: 2025-10-02 12:42:36.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:42:36Z|00477|binding|INFO|Releasing lport 8eb9e971-5920-4103-9ba9-c0846182952d from this chassis (sb_readonly=0)
Oct  2 08:42:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:42:36Z|00478|binding|INFO|Setting lport 8eb9e971-5920-4103-9ba9-c0846182952d down in Southbound
Oct  2 08:42:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:42:36Z|00479|binding|INFO|Removing iface tap8eb9e971-59 ovn-installed in OVS
Oct  2 08:42:36 np0005466030 nova_compute[230518]: 2025-10-02 12:42:36.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:36.740 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:40:72 10.100.0.10'], port_security=['fa:16:3e:43:40:72 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1f9101c6-f4d8-46c7-8884-386f9f08e6fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-385a384c-5df0-4b04-b928-517a46df04f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f7e2edef094b4ba5a56a5ec5ffce911e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '041f6b5e-0e14-4ae5-9597-3a584e6f87e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5110437-1084-431d-86cb-6ad2d219bdc1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8eb9e971-5920-4103-9ba9-c0846182952d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:42:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:36.741 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8eb9e971-5920-4103-9ba9-c0846182952d in datapath 385a384c-5df0-4b04-b928-517a46df04f4 unbound from our chassis#033[00m
Oct  2 08:42:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:36.743 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 385a384c-5df0-4b04-b928-517a46df04f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:42:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:36.745 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ba95bf41-5ab7-45b1-b3cb-5d0a0076fd25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:36.746 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 namespace which is not needed anymore#033[00m
Oct  2 08:42:36 np0005466030 nova_compute[230518]: 2025-10-02 12:42:36.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:36 np0005466030 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000070.scope: Deactivated successfully.
Oct  2 08:42:36 np0005466030 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000070.scope: Consumed 14.924s CPU time.
Oct  2 08:42:36 np0005466030 systemd-machined[188247]: Machine qemu-55-instance-00000070 terminated.
Oct  2 08:42:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:36 np0005466030 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[273945]: [NOTICE]   (273974) : haproxy version is 2.8.14-c23fe91
Oct  2 08:42:36 np0005466030 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[273945]: [NOTICE]   (273974) : path to executable is /usr/sbin/haproxy
Oct  2 08:42:36 np0005466030 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[273945]: [WARNING]  (273974) : Exiting Master process...
Oct  2 08:42:36 np0005466030 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[273945]: [WARNING]  (273974) : Exiting Master process...
Oct  2 08:42:36 np0005466030 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[273945]: [ALERT]    (273974) : Current worker (273976) exited with code 143 (Terminated)
Oct  2 08:42:36 np0005466030 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[273945]: [WARNING]  (273974) : All workers exited. Exiting... (0)
Oct  2 08:42:36 np0005466030 systemd[1]: libpod-1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0.scope: Deactivated successfully.
Oct  2 08:42:36 np0005466030 podman[274892]: 2025-10-02 12:42:36.894787469 +0000 UTC m=+0.057016354 container died 1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:42:36 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0-userdata-shm.mount: Deactivated successfully.
Oct  2 08:42:36 np0005466030 systemd[1]: var-lib-containers-storage-overlay-3f06bb73ff4fd07214bf62b4ab474c808902a8097ac5ccfb1095b1198b7d0666-merged.mount: Deactivated successfully.
Oct  2 08:42:36 np0005466030 podman[274892]: 2025-10-02 12:42:36.933060692 +0000 UTC m=+0.095289567 container cleanup 1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:42:36 np0005466030 systemd[1]: libpod-conmon-1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0.scope: Deactivated successfully.
Oct  2 08:42:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e270 e270: 3 total, 3 up, 3 in
Oct  2 08:42:37 np0005466030 podman[274924]: 2025-10-02 12:42:37.062137271 +0000 UTC m=+0.104072243 container remove 1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:42:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:37.069 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[026bbaf7-b94c-471a-86df-d6f7b72094c3]: (4, ('Thu Oct  2 12:42:36 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 (1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0)\n1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0\nThu Oct  2 12:42:36 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 (1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0)\n1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:37.071 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0df22d95-58e1-4709-92aa-340ecd6ccd7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:37.072 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap385a384c-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:37 np0005466030 kernel: tap385a384c-50: left promiscuous mode
Oct  2 08:42:37 np0005466030 nova_compute[230518]: 2025-10-02 12:42:37.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:37 np0005466030 nova_compute[230518]: 2025-10-02 12:42:37.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:37 np0005466030 nova_compute[230518]: 2025-10-02 12:42:37.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:37.095 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4e24b907-7a25-4693-98ae-ac2661a7d916]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:37.132 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[188825b2-67ff-4e76-87a7-2ea214a0f0a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:37.135 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[55a7e357-d39c-4c86-aa62-268cca869667]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:37.155 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[410411e5-c3ba-42bd-9504-006628a53cd8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677873, 'reachable_time': 21887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274954, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:37 np0005466030 systemd[1]: run-netns-ovnmeta\x2d385a384c\x2d5df0\x2d4b04\x2db928\x2d517a46df04f4.mount: Deactivated successfully.
Oct  2 08:42:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:37.159 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:42:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:42:37.159 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[f955be6c-c4b6-4204-888f-a30e0482a35b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:42:37 np0005466030 nova_compute[230518]: 2025-10-02 12:42:37.203 2 INFO nova.virt.libvirt.driver [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 08:42:37 np0005466030 nova_compute[230518]: 2025-10-02 12:42:37.208 2 INFO nova.virt.libvirt.driver [-] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Instance destroyed successfully.#033[00m
Oct  2 08:42:37 np0005466030 nova_compute[230518]: 2025-10-02 12:42:37.208 2 DEBUG nova.objects.instance [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'numa_topology' on Instance uuid 1f9101c6-f4d8-46c7-8884-386f9f08e6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:42:37 np0005466030 nova_compute[230518]: 2025-10-02 12:42:37.400 2 DEBUG nova.compute.manager [req-210568dd-a45d-4b38-b45c-e18863be4078 req-a9d60b2e-8bfa-4a75-8b3a-a466647c949b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-vif-unplugged-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:37 np0005466030 nova_compute[230518]: 2025-10-02 12:42:37.401 2 DEBUG oslo_concurrency.lockutils [req-210568dd-a45d-4b38-b45c-e18863be4078 req-a9d60b2e-8bfa-4a75-8b3a-a466647c949b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:37 np0005466030 nova_compute[230518]: 2025-10-02 12:42:37.401 2 DEBUG oslo_concurrency.lockutils [req-210568dd-a45d-4b38-b45c-e18863be4078 req-a9d60b2e-8bfa-4a75-8b3a-a466647c949b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:37 np0005466030 nova_compute[230518]: 2025-10-02 12:42:37.401 2 DEBUG oslo_concurrency.lockutils [req-210568dd-a45d-4b38-b45c-e18863be4078 req-a9d60b2e-8bfa-4a75-8b3a-a466647c949b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:37 np0005466030 nova_compute[230518]: 2025-10-02 12:42:37.402 2 DEBUG nova.compute.manager [req-210568dd-a45d-4b38-b45c-e18863be4078 req-a9d60b2e-8bfa-4a75-8b3a-a466647c949b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] No waiting events found dispatching network-vif-unplugged-8eb9e971-5920-4103-9ba9-c0846182952d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:42:37 np0005466030 nova_compute[230518]: 2025-10-02 12:42:37.402 2 WARNING nova.compute.manager [req-210568dd-a45d-4b38-b45c-e18863be4078 req-a9d60b2e-8bfa-4a75-8b3a-a466647c949b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received unexpected event network-vif-unplugged-8eb9e971-5920-4103-9ba9-c0846182952d for instance with vm_state active and task_state shelving.#033[00m
Oct  2 08:42:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:42:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:37 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:37.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:37.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:37 np0005466030 nova_compute[230518]: 2025-10-02 12:42:37.541 2 INFO nova.virt.libvirt.driver [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Beginning cold snapshot process#033[00m
Oct  2 08:42:37 np0005466030 nova_compute[230518]: 2025-10-02 12:42:37.741 2 DEBUG nova.virt.libvirt.imagebackend [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 08:42:38 np0005466030 nova_compute[230518]: 2025-10-02 12:42:38.041 2 DEBUG nova.storage.rbd_utils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] creating snapshot(54439c570f6d4146b808add75210b9a8) on rbd image(1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:42:38 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:42:38 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:42:38 np0005466030 nova_compute[230518]: 2025-10-02 12:42:38.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e271 e271: 3 total, 3 up, 3 in
Oct  2 08:42:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:39.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:42:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:39.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:42:39 np0005466030 nova_compute[230518]: 2025-10-02 12:42:39.513 2 DEBUG nova.compute.manager [req-8c1163d8-a2db-40f6-b51d-968a5254d067 req-5520687f-26c0-49b4-89c9-aae25bfebb1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:39 np0005466030 nova_compute[230518]: 2025-10-02 12:42:39.513 2 DEBUG oslo_concurrency.lockutils [req-8c1163d8-a2db-40f6-b51d-968a5254d067 req-5520687f-26c0-49b4-89c9-aae25bfebb1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:39 np0005466030 nova_compute[230518]: 2025-10-02 12:42:39.514 2 DEBUG oslo_concurrency.lockutils [req-8c1163d8-a2db-40f6-b51d-968a5254d067 req-5520687f-26c0-49b4-89c9-aae25bfebb1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:39 np0005466030 nova_compute[230518]: 2025-10-02 12:42:39.514 2 DEBUG oslo_concurrency.lockutils [req-8c1163d8-a2db-40f6-b51d-968a5254d067 req-5520687f-26c0-49b4-89c9-aae25bfebb1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:39 np0005466030 nova_compute[230518]: 2025-10-02 12:42:39.514 2 DEBUG nova.compute.manager [req-8c1163d8-a2db-40f6-b51d-968a5254d067 req-5520687f-26c0-49b4-89c9-aae25bfebb1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] No waiting events found dispatching network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:42:39 np0005466030 nova_compute[230518]: 2025-10-02 12:42:39.514 2 WARNING nova.compute.manager [req-8c1163d8-a2db-40f6-b51d-968a5254d067 req-5520687f-26c0-49b4-89c9-aae25bfebb1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received unexpected event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct  2 08:42:39 np0005466030 nova_compute[230518]: 2025-10-02 12:42:39.543 2 DEBUG nova.storage.rbd_utils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] cloning vms/1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk@54439c570f6d4146b808add75210b9a8 to images/c37f4151-ac68-47f8-adfa-bd0c85e4c75d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:42:39 np0005466030 nova_compute[230518]: 2025-10-02 12:42:39.919 2 DEBUG nova.storage.rbd_utils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] flattening images/c37f4151-ac68-47f8-adfa-bd0c85e4c75d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:42:40 np0005466030 nova_compute[230518]: 2025-10-02 12:42:40.309 2 DEBUG nova.storage.rbd_utils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] removing snapshot(54439c570f6d4146b808add75210b9a8) on rbd image(1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:42:40 np0005466030 nova_compute[230518]: 2025-10-02 12:42:40.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e272 e272: 3 total, 3 up, 3 in
Oct  2 08:42:41 np0005466030 nova_compute[230518]: 2025-10-02 12:42:41.443 2 DEBUG nova.storage.rbd_utils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] creating snapshot(snap) on rbd image(c37f4151-ac68-47f8-adfa-bd0c85e4c75d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 08:42:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:42:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:41.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:42:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:41.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e273 e273: 3 total, 3 up, 3 in
Oct  2 08:42:43 np0005466030 nova_compute[230518]: 2025-10-02 12:42:43.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:43.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:43.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:45 np0005466030 nova_compute[230518]: 2025-10-02 12:42:45.077 2 INFO nova.virt.libvirt.driver [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Snapshot image upload complete#033[00m
Oct  2 08:42:45 np0005466030 nova_compute[230518]: 2025-10-02 12:42:45.077 2 DEBUG nova.compute.manager [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:42:45 np0005466030 nova_compute[230518]: 2025-10-02 12:42:45.329 2 INFO nova.compute.manager [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Shelve offloading#033[00m
Oct  2 08:42:45 np0005466030 nova_compute[230518]: 2025-10-02 12:42:45.338 2 INFO nova.virt.libvirt.driver [-] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Instance destroyed successfully.#033[00m
Oct  2 08:42:45 np0005466030 nova_compute[230518]: 2025-10-02 12:42:45.338 2 DEBUG nova.compute.manager [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:42:45 np0005466030 nova_compute[230518]: 2025-10-02 12:42:45.341 2 DEBUG oslo_concurrency.lockutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:42:45 np0005466030 nova_compute[230518]: 2025-10-02 12:42:45.341 2 DEBUG oslo_concurrency.lockutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquired lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:42:45 np0005466030 nova_compute[230518]: 2025-10-02 12:42:45.341 2 DEBUG nova.network.neutron [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:42:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:42:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:45.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:45 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:45.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:45 np0005466030 nova_compute[230518]: 2025-10-02 12:42:45.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e274 e274: 3 total, 3 up, 3 in
Oct  2 08:42:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:47 np0005466030 nova_compute[230518]: 2025-10-02 12:42:47.231 2 DEBUG nova.network.neutron [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updating instance_info_cache with network_info: [{"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:47 np0005466030 nova_compute[230518]: 2025-10-02 12:42:47.293 2 DEBUG oslo_concurrency.lockutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Releasing lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:42:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:47.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:47.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e275 e275: 3 total, 3 up, 3 in
Oct  2 08:42:47 np0005466030 podman[275147]: 2025-10-02 12:42:47.819523603 +0000 UTC m=+0.063654603 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:42:47 np0005466030 podman[275148]: 2025-10-02 12:42:47.82454302 +0000 UTC m=+0.069162396 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:42:48 np0005466030 nova_compute[230518]: 2025-10-02 12:42:48.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:48 np0005466030 nova_compute[230518]: 2025-10-02 12:42:48.623 2 INFO nova.virt.libvirt.driver [-] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Instance destroyed successfully.#033[00m
Oct  2 08:42:48 np0005466030 nova_compute[230518]: 2025-10-02 12:42:48.624 2 DEBUG nova.objects.instance [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'resources' on Instance uuid 1f9101c6-f4d8-46c7-8884-386f9f08e6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:42:48 np0005466030 nova_compute[230518]: 2025-10-02 12:42:48.642 2 DEBUG nova.virt.libvirt.vif [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1408399936',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1408399936',id=112,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMPES/J98pyyGI+xC972/PJIY+D7X9SqOgh45Z1MPKQ6L1b0LXV7IORHQBCxCHGOsCWQssLDPZp4WJ8irI2AsYuAH5MVzTXEt9QIB2bOJQbGultCK6n77bAruhlsubzH7w==',key_name='tempest-keypair-372158786',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:57Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f7e2edef094b4ba5a56a5ec5ffce911e',ramdisk_id='',reservation_id='r-wr2knp7k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1943710095',owner_user_name='tempest-AttachVolumeShelveTestJSON-1943710095-project-member',shelved_at='2025-10-02T12:42:45.077638',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='c37f4151-ac68-47f8-adfa-bd0c85e4c75d'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:42:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3151966e941f4652ba984616bfa760c7',uuid=1f9101c6-f4d8-46c7-8884-386f9f08e6fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": 
"385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:42:48 np0005466030 nova_compute[230518]: 2025-10-02 12:42:48.642 2 DEBUG nova.network.os_vif_util [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converting VIF {"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:42:48 np0005466030 nova_compute[230518]: 2025-10-02 12:42:48.643 2 DEBUG nova.network.os_vif_util [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:42:48 np0005466030 nova_compute[230518]: 2025-10-02 12:42:48.643 2 DEBUG os_vif [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:42:48 np0005466030 nova_compute[230518]: 2025-10-02 12:42:48.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:48 np0005466030 nova_compute[230518]: 2025-10-02 12:42:48.645 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8eb9e971-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:42:48 np0005466030 nova_compute[230518]: 2025-10-02 12:42:48.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:48 np0005466030 nova_compute[230518]: 2025-10-02 12:42:48.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:42:48 np0005466030 nova_compute[230518]: 2025-10-02 12:42:48.651 2 INFO os_vif [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59')#033[00m
Oct  2 08:42:48 np0005466030 nova_compute[230518]: 2025-10-02 12:42:48.696 2 DEBUG nova.compute.manager [req-86af0136-c9c2-43a5-be1c-a8b0bf1869bc req-d3ba1e94-8a05-4ebe-b3bf-87801c21230c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-changed-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:48 np0005466030 nova_compute[230518]: 2025-10-02 12:42:48.696 2 DEBUG nova.compute.manager [req-86af0136-c9c2-43a5-be1c-a8b0bf1869bc req-d3ba1e94-8a05-4ebe-b3bf-87801c21230c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Refreshing instance network info cache due to event network-changed-8eb9e971-5920-4103-9ba9-c0846182952d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:42:48 np0005466030 nova_compute[230518]: 2025-10-02 12:42:48.697 2 DEBUG oslo_concurrency.lockutils [req-86af0136-c9c2-43a5-be1c-a8b0bf1869bc req-d3ba1e94-8a05-4ebe-b3bf-87801c21230c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:42:48 np0005466030 nova_compute[230518]: 2025-10-02 12:42:48.697 2 DEBUG oslo_concurrency.lockutils [req-86af0136-c9c2-43a5-be1c-a8b0bf1869bc req-d3ba1e94-8a05-4ebe-b3bf-87801c21230c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:42:48 np0005466030 nova_compute[230518]: 2025-10-02 12:42:48.697 2 DEBUG nova.network.neutron [req-86af0136-c9c2-43a5-be1c-a8b0bf1869bc req-d3ba1e94-8a05-4ebe-b3bf-87801c21230c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Refreshing network info cache for port 8eb9e971-5920-4103-9ba9-c0846182952d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:42:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:42:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:49.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:42:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:49.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:50 np0005466030 nova_compute[230518]: 2025-10-02 12:42:50.831 2 DEBUG nova.network.neutron [req-86af0136-c9c2-43a5-be1c-a8b0bf1869bc req-d3ba1e94-8a05-4ebe-b3bf-87801c21230c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updated VIF entry in instance network info cache for port 8eb9e971-5920-4103-9ba9-c0846182952d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:42:50 np0005466030 nova_compute[230518]: 2025-10-02 12:42:50.831 2 DEBUG nova.network.neutron [req-86af0136-c9c2-43a5-be1c-a8b0bf1869bc req-d3ba1e94-8a05-4ebe-b3bf-87801c21230c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updating instance_info_cache with network_info: [{"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": null, "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap8eb9e971-59", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:42:50 np0005466030 nova_compute[230518]: 2025-10-02 12:42:50.859 2 DEBUG oslo_concurrency.lockutils [req-86af0136-c9c2-43a5-be1c-a8b0bf1869bc req-d3ba1e94-8a05-4ebe-b3bf-87801c21230c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:42:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e276 e276: 3 total, 3 up, 3 in
Oct  2 08:42:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:42:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:51.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:42:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:51.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:51 np0005466030 nova_compute[230518]: 2025-10-02 12:42:51.979 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408956.9787672, 1f9101c6-f4d8-46c7-8884-386f9f08e6fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:42:51 np0005466030 nova_compute[230518]: 2025-10-02 12:42:51.980 2 INFO nova.compute.manager [-] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:42:52 np0005466030 nova_compute[230518]: 2025-10-02 12:42:52.013 2 DEBUG nova.compute.manager [None req-92e945be-637a-4019-a96a-e344103808e8 - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:42:52 np0005466030 nova_compute[230518]: 2025-10-02 12:42:52.017 2 DEBUG nova.compute.manager [None req-92e945be-637a-4019-a96a-e344103808e8 - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:42:52 np0005466030 nova_compute[230518]: 2025-10-02 12:42:52.040 2 INFO nova.compute.manager [None req-92e945be-637a-4019-a96a-e344103808e8 - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] During sync_power_state the instance has a pending task (shelving_offloading). Skip.#033[00m
Oct  2 08:42:52 np0005466030 nova_compute[230518]: 2025-10-02 12:42:52.259 2 INFO nova.virt.libvirt.driver [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Deleting instance files /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb_del#033[00m
Oct  2 08:42:52 np0005466030 nova_compute[230518]: 2025-10-02 12:42:52.261 2 INFO nova.virt.libvirt.driver [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Deletion of /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb_del complete#033[00m
Oct  2 08:42:52 np0005466030 nova_compute[230518]: 2025-10-02 12:42:52.343 2 INFO nova.scheduler.client.report [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Deleted allocations for instance 1f9101c6-f4d8-46c7-8884-386f9f08e6fb#033[00m
Oct  2 08:42:52 np0005466030 nova_compute[230518]: 2025-10-02 12:42:52.387 2 DEBUG oslo_concurrency.lockutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:52 np0005466030 nova_compute[230518]: 2025-10-02 12:42:52.388 2 DEBUG oslo_concurrency.lockutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:52 np0005466030 nova_compute[230518]: 2025-10-02 12:42:52.425 2 DEBUG oslo_concurrency.processutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:42:52 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3290921538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:42:52 np0005466030 nova_compute[230518]: 2025-10-02 12:42:52.859 2 DEBUG oslo_concurrency.processutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:52 np0005466030 nova_compute[230518]: 2025-10-02 12:42:52.866 2 DEBUG nova.compute.provider_tree [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:42:52 np0005466030 nova_compute[230518]: 2025-10-02 12:42:52.890 2 DEBUG nova.scheduler.client.report [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:42:52 np0005466030 nova_compute[230518]: 2025-10-02 12:42:52.919 2 DEBUG oslo_concurrency.lockutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:52 np0005466030 nova_compute[230518]: 2025-10-02 12:42:52.957 2 DEBUG oslo_concurrency.lockutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 18.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:53 np0005466030 nova_compute[230518]: 2025-10-02 12:42:53.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:53 np0005466030 nova_compute[230518]: 2025-10-02 12:42:53.452 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "26db575f-26df-4e1b-b0d8-38a12df557e3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:53 np0005466030 nova_compute[230518]: 2025-10-02 12:42:53.453 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:53 np0005466030 nova_compute[230518]: 2025-10-02 12:42:53.480 2 DEBUG nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:42:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:53.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:53.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:53 np0005466030 nova_compute[230518]: 2025-10-02 12:42:53.552 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:53 np0005466030 nova_compute[230518]: 2025-10-02 12:42:53.552 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:53 np0005466030 nova_compute[230518]: 2025-10-02 12:42:53.558 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:42:53 np0005466030 nova_compute[230518]: 2025-10-02 12:42:53.559 2 INFO nova.compute.claims [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:42:53 np0005466030 nova_compute[230518]: 2025-10-02 12:42:53.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:53 np0005466030 nova_compute[230518]: 2025-10-02 12:42:53.675 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:42:54 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/829567821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.179 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.186 2 DEBUG nova.compute.provider_tree [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.204 2 DEBUG nova.scheduler.client.report [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.238 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.240 2 DEBUG nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.290 2 DEBUG nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.291 2 DEBUG nova.network.neutron [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.310 2 INFO nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.334 2 DEBUG nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.436 2 DEBUG nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.438 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.438 2 INFO nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Creating image(s)#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.466 2 DEBUG nova.storage.rbd_utils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 26db575f-26df-4e1b-b0d8-38a12df557e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.502 2 DEBUG nova.storage.rbd_utils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 26db575f-26df-4e1b-b0d8-38a12df557e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.534 2 DEBUG nova.storage.rbd_utils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 26db575f-26df-4e1b-b0d8-38a12df557e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.539 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.581 2 DEBUG nova.policy [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '17a0940c9daf48ac8cfa6c3e56d0e39c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '88141e38aa2347299e7ab249431ef68c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.638 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.639 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.640 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.640 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.833 2 DEBUG nova.storage.rbd_utils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 26db575f-26df-4e1b-b0d8-38a12df557e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:42:54 np0005466030 nova_compute[230518]: 2025-10-02 12:42:54.836 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 26db575f-26df-4e1b-b0d8-38a12df557e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:55.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:55.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:55 np0005466030 nova_compute[230518]: 2025-10-02 12:42:55.942 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 26db575f-26df-4e1b-b0d8-38a12df557e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:56 np0005466030 nova_compute[230518]: 2025-10-02 12:42:56.072 2 DEBUG nova.storage.rbd_utils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] resizing rbd image 26db575f-26df-4e1b-b0d8-38a12df557e3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:42:56 np0005466030 nova_compute[230518]: 2025-10-02 12:42:56.357 2 DEBUG nova.network.neutron [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Successfully created port: 3de79762-7d07-45e3-b66d-38b20be62257 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:42:56 np0005466030 nova_compute[230518]: 2025-10-02 12:42:56.430 2 DEBUG nova.objects.instance [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'migration_context' on Instance uuid 26db575f-26df-4e1b-b0d8-38a12df557e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:42:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e277 e277: 3 total, 3 up, 3 in
Oct  2 08:42:56 np0005466030 nova_compute[230518]: 2025-10-02 12:42:56.545 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:42:56 np0005466030 nova_compute[230518]: 2025-10-02 12:42:56.546 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Ensure instance console log exists: /var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:42:56 np0005466030 nova_compute[230518]: 2025-10-02 12:42:56.546 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:56 np0005466030 nova_compute[230518]: 2025-10-02 12:42:56.547 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:56 np0005466030 nova_compute[230518]: 2025-10-02 12:42:56.547 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:42:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:57.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:57.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:58 np0005466030 nova_compute[230518]: 2025-10-02 12:42:58.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:58 np0005466030 nova_compute[230518]: 2025-10-02 12:42:58.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:42:58 np0005466030 nova_compute[230518]: 2025-10-02 12:42:58.228 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:58 np0005466030 nova_compute[230518]: 2025-10-02 12:42:58.229 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:58 np0005466030 nova_compute[230518]: 2025-10-02 12:42:58.229 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:42:58 np0005466030 nova_compute[230518]: 2025-10-02 12:42:58.229 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:42:58 np0005466030 nova_compute[230518]: 2025-10-02 12:42:58.230 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:58 np0005466030 nova_compute[230518]: 2025-10-02 12:42:58.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:58 np0005466030 nova_compute[230518]: 2025-10-02 12:42:58.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:42:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:42:58 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1302167254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:42:58 np0005466030 nova_compute[230518]: 2025-10-02 12:42:58.728 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:58 np0005466030 nova_compute[230518]: 2025-10-02 12:42:58.944 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:42:58 np0005466030 nova_compute[230518]: 2025-10-02 12:42:58.945 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.124 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.125 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4234MB free_disk=20.868064880371094GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.126 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.126 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.266 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7621a774-e0bc-4f4f-b900-c3608dd6835a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.267 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 26db575f-26df-4e1b-b0d8-38a12df557e3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.267 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.267 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.317 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.414 2 DEBUG nova.network.neutron [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Successfully updated port: 3de79762-7d07-45e3-b66d-38b20be62257 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.470 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.471 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquired lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.472 2 DEBUG nova.network.neutron [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:42:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:59.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:42:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:42:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:59.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.583 2 DEBUG nova.compute.manager [req-9192ae62-49a8-4ad3-a15a-8d7ff6c58b24 req-0bdaa22b-2de9-43f6-a8a8-5d8af04f0851 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received event network-changed-3de79762-7d07-45e3-b66d-38b20be62257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.584 2 DEBUG nova.compute.manager [req-9192ae62-49a8-4ad3-a15a-8d7ff6c58b24 req-0bdaa22b-2de9-43f6-a8a8-5d8af04f0851 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Refreshing instance network info cache due to event network-changed-3de79762-7d07-45e3-b66d-38b20be62257. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.584 2 DEBUG oslo_concurrency.lockutils [req-9192ae62-49a8-4ad3-a15a-8d7ff6c58b24 req-0bdaa22b-2de9-43f6-a8a8-5d8af04f0851 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:42:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:42:59 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2706070081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.767 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.773 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.802 2 DEBUG nova.network.neutron [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.884 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.957 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:42:59 np0005466030 nova_compute[230518]: 2025-10-02 12:42:59.958 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:43:00 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/889090069' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.120 2 DEBUG nova.network.neutron [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Updating instance_info_cache with network_info: [{"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.294 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Releasing lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.295 2 DEBUG nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Instance network_info: |[{"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.296 2 DEBUG oslo_concurrency.lockutils [req-9192ae62-49a8-4ad3-a15a-8d7ff6c58b24 req-0bdaa22b-2de9-43f6-a8a8-5d8af04f0851 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.297 2 DEBUG nova.network.neutron [req-9192ae62-49a8-4ad3-a15a-8d7ff6c58b24 req-0bdaa22b-2de9-43f6-a8a8-5d8af04f0851 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Refreshing network info cache for port 3de79762-7d07-45e3-b66d-38b20be62257 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.303 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Start _get_guest_xml network_info=[{"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.308 2 WARNING nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.313 2 DEBUG nova.virt.libvirt.host [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.314 2 DEBUG nova.virt.libvirt.host [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.316 2 DEBUG nova.virt.libvirt.host [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.316 2 DEBUG nova.virt.libvirt.host [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.318 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.318 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.319 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.319 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.319 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.320 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.320 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.320 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.321 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.321 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.321 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.322 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.325 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:01.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:01.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:43:01 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3097588909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.761 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.789 2 DEBUG nova.storage.rbd_utils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 26db575f-26df-4e1b-b0d8-38a12df557e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:01 np0005466030 nova_compute[230518]: 2025-10-02 12:43:01.793 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:43:02 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2835615240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.493 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.700s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.496 2 DEBUG nova.virt.libvirt.vif [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:42:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-310646740',display_name='tempest-ServerActionsTestOtherA-server-310646740',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-310646740',id=115,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-8ebo56vt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestO
therA-1849713132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:42:54Z,user_data=None,user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=26db575f-26df-4e1b-b0d8-38a12df557e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.497 2 DEBUG nova.network.os_vif_util [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.499 2 DEBUG nova.network.os_vif_util [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:bf:18,bridge_name='br-int',has_traffic_filtering=True,id=3de79762-7d07-45e3-b66d-38b20be62257,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de79762-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.502 2 DEBUG nova.objects.instance [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'pci_devices' on Instance uuid 26db575f-26df-4e1b-b0d8-38a12df557e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.530 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:43:02 np0005466030 nova_compute[230518]:  <uuid>26db575f-26df-4e1b-b0d8-38a12df557e3</uuid>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:  <name>instance-00000073</name>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerActionsTestOtherA-server-310646740</nova:name>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:43:01</nova:creationTime>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:43:02 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:        <nova:user uuid="17a0940c9daf48ac8cfa6c3e56d0e39c">tempest-ServerActionsTestOtherA-1849713132-project-member</nova:user>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:        <nova:project uuid="88141e38aa2347299e7ab249431ef68c">tempest-ServerActionsTestOtherA-1849713132</nova:project>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:        <nova:port uuid="3de79762-7d07-45e3-b66d-38b20be62257">
Oct  2 08:43:02 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <entry name="serial">26db575f-26df-4e1b-b0d8-38a12df557e3</entry>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <entry name="uuid">26db575f-26df-4e1b-b0d8-38a12df557e3</entry>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/26db575f-26df-4e1b-b0d8-38a12df557e3_disk">
Oct  2 08:43:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:43:02 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/26db575f-26df-4e1b-b0d8-38a12df557e3_disk.config">
Oct  2 08:43:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:43:02 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:bb:bf:18"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <target dev="tap3de79762-7d"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3/console.log" append="off"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:43:02 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:43:02 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:43:02 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:43:02 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.533 2 DEBUG nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Preparing to wait for external event network-vif-plugged-3de79762-7d07-45e3-b66d-38b20be62257 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.534 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.534 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.535 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.535 2 DEBUG nova.virt.libvirt.vif [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:42:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-310646740',display_name='tempest-ServerActionsTestOtherA-server-310646740',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-310646740',id=115,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-8ebo56vt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerAc
tionsTestOtherA-1849713132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:42:54Z,user_data=None,user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=26db575f-26df-4e1b-b0d8-38a12df557e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.536 2 DEBUG nova.network.os_vif_util [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.536 2 DEBUG nova.network.os_vif_util [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:bf:18,bridge_name='br-int',has_traffic_filtering=True,id=3de79762-7d07-45e3-b66d-38b20be62257,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de79762-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.537 2 DEBUG os_vif [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:bf:18,bridge_name='br-int',has_traffic_filtering=True,id=3de79762-7d07-45e3-b66d-38b20be62257,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de79762-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.538 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.539 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.542 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3de79762-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.543 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3de79762-7d, col_values=(('external_ids', {'iface-id': '3de79762-7d07-45e3-b66d-38b20be62257', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:bf:18', 'vm-uuid': '26db575f-26df-4e1b-b0d8-38a12df557e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:02 np0005466030 NetworkManager[44960]: <info>  [1759408982.5458] manager: (tap3de79762-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.552 2 INFO os_vif [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:bf:18,bridge_name='br-int',has_traffic_filtering=True,id=3de79762-7d07-45e3-b66d-38b20be62257,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de79762-7d')#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.800 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.800 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.801 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No VIF found with MAC fa:16:3e:bb:bf:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.801 2 INFO nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Using config drive#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.829 2 DEBUG nova.storage.rbd_utils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 26db575f-26df-4e1b-b0d8-38a12df557e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.954 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:02 np0005466030 nova_compute[230518]: 2025-10-02 12:43:02.954 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:03 np0005466030 nova_compute[230518]: 2025-10-02 12:43:03.063 2 DEBUG nova.network.neutron [req-9192ae62-49a8-4ad3-a15a-8d7ff6c58b24 req-0bdaa22b-2de9-43f6-a8a8-5d8af04f0851 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Updated VIF entry in instance network info cache for port 3de79762-7d07-45e3-b66d-38b20be62257. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:43:03 np0005466030 nova_compute[230518]: 2025-10-02 12:43:03.064 2 DEBUG nova.network.neutron [req-9192ae62-49a8-4ad3-a15a-8d7ff6c58b24 req-0bdaa22b-2de9-43f6-a8a8-5d8af04f0851 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Updating instance_info_cache with network_info: [{"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:03 np0005466030 nova_compute[230518]: 2025-10-02 12:43:03.145 2 INFO nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Creating config drive at /var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3/disk.config#033[00m
Oct  2 08:43:03 np0005466030 nova_compute[230518]: 2025-10-02 12:43:03.149 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp90qkizeg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:03 np0005466030 nova_compute[230518]: 2025-10-02 12:43:03.282 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp90qkizeg" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:03 np0005466030 nova_compute[230518]: 2025-10-02 12:43:03.306 2 DEBUG nova.storage.rbd_utils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 26db575f-26df-4e1b-b0d8-38a12df557e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:03 np0005466030 nova_compute[230518]: 2025-10-02 12:43:03.309 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3/disk.config 26db575f-26df-4e1b-b0d8-38a12df557e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:03 np0005466030 nova_compute[230518]: 2025-10-02 12:43:03.336 2 DEBUG oslo_concurrency.lockutils [req-9192ae62-49a8-4ad3-a15a-8d7ff6c58b24 req-0bdaa22b-2de9-43f6-a8a8-5d8af04f0851 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:43:03 np0005466030 nova_compute[230518]: 2025-10-02 12:43:03.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:03 np0005466030 nova_compute[230518]: 2025-10-02 12:43:03.487 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3/disk.config 26db575f-26df-4e1b-b0d8-38a12df557e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:03 np0005466030 nova_compute[230518]: 2025-10-02 12:43:03.488 2 INFO nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Deleting local config drive /var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3/disk.config because it was imported into RBD.#033[00m
Oct  2 08:43:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:03.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:03.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:03 np0005466030 kernel: tap3de79762-7d: entered promiscuous mode
Oct  2 08:43:03 np0005466030 NetworkManager[44960]: <info>  [1759408983.5530] manager: (tap3de79762-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/229)
Oct  2 08:43:03 np0005466030 nova_compute[230518]: 2025-10-02 12:43:03.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:03 np0005466030 ovn_controller[129257]: 2025-10-02T12:43:03Z|00480|binding|INFO|Claiming lport 3de79762-7d07-45e3-b66d-38b20be62257 for this chassis.
Oct  2 08:43:03 np0005466030 ovn_controller[129257]: 2025-10-02T12:43:03Z|00481|binding|INFO|3de79762-7d07-45e3-b66d-38b20be62257: Claiming fa:16:3e:bb:bf:18 10.100.0.11
Oct  2 08:43:03 np0005466030 ovn_controller[129257]: 2025-10-02T12:43:03Z|00482|binding|INFO|Setting lport 3de79762-7d07-45e3-b66d-38b20be62257 ovn-installed in OVS
Oct  2 08:43:03 np0005466030 nova_compute[230518]: 2025-10-02 12:43:03.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:03 np0005466030 nova_compute[230518]: 2025-10-02 12:43:03.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:03 np0005466030 systemd-machined[188247]: New machine qemu-57-instance-00000073.
Oct  2 08:43:03 np0005466030 systemd[1]: Started Virtual Machine qemu-57-instance-00000073.
Oct  2 08:43:03 np0005466030 systemd-udevd[275595]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:43:03 np0005466030 NetworkManager[44960]: <info>  [1759408983.6253] device (tap3de79762-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:43:03 np0005466030 NetworkManager[44960]: <info>  [1759408983.6264] device (tap3de79762-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:43:03 np0005466030 ovn_controller[129257]: 2025-10-02T12:43:03Z|00483|binding|INFO|Setting lport 3de79762-7d07-45e3-b66d-38b20be62257 up in Southbound
Oct  2 08:43:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.809 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:bf:18 10.100.0.11'], port_security=['fa:16:3e:bb:bf:18 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '26db575f-26df-4e1b-b0d8-38a12df557e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '01dffe06-e9c5-44f7-8e0c-9bbbdc67ec7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=3de79762-7d07-45e3-b66d-38b20be62257) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:43:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.811 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 3de79762-7d07-45e3-b66d-38b20be62257 in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b bound to our chassis#033[00m
Oct  2 08:43:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.813 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3643647-7cd9-4c43-8aaa-9b0f3160274b#033[00m
Oct  2 08:43:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.826 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5557a3e9-8c91-4748-a379-755db70a20b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.862 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2c2661b2-3f08-4ab4-a42f-2a74e0e7ee3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.865 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[72408b39-0423-4f3c-8d8b-a8c128860363]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.896 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d78e34b9-a2d0-4961-99d6-ab902d43e1e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.917 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fdbed73c-6787-496a-a84d-706c6f3fe6bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662992, 'reachable_time': 17721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275609, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.936 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[27d5c726-a6e4-4c6b-8a48-a216604a2964]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3643647-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663002, 'tstamp': 663002}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275610, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3643647-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663005, 'tstamp': 663005}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275610, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.938 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:03 np0005466030 nova_compute[230518]: 2025-10-02 12:43:03.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:03 np0005466030 nova_compute[230518]: 2025-10-02 12:43:03.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.940 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3643647-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.940 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:43:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.941 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3643647-70, col_values=(('external_ids', {'iface-id': '7b6dc1a1-1a58-45bd-84bb-97328397bf1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.941 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:43:04 np0005466030 nova_compute[230518]: 2025-10-02 12:43:04.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:04 np0005466030 nova_compute[230518]: 2025-10-02 12:43:04.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:04 np0005466030 nova_compute[230518]: 2025-10-02 12:43:04.260 2 DEBUG nova.compute.manager [req-d519a9fa-5094-4832-b0aa-b57cf04b9809 req-ef410863-81b9-491a-b0d2-830e05701e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received event network-vif-plugged-3de79762-7d07-45e3-b66d-38b20be62257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:04 np0005466030 nova_compute[230518]: 2025-10-02 12:43:04.260 2 DEBUG oslo_concurrency.lockutils [req-d519a9fa-5094-4832-b0aa-b57cf04b9809 req-ef410863-81b9-491a-b0d2-830e05701e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:04 np0005466030 nova_compute[230518]: 2025-10-02 12:43:04.260 2 DEBUG oslo_concurrency.lockutils [req-d519a9fa-5094-4832-b0aa-b57cf04b9809 req-ef410863-81b9-491a-b0d2-830e05701e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:04 np0005466030 nova_compute[230518]: 2025-10-02 12:43:04.260 2 DEBUG oslo_concurrency.lockutils [req-d519a9fa-5094-4832-b0aa-b57cf04b9809 req-ef410863-81b9-491a-b0d2-830e05701e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:04 np0005466030 nova_compute[230518]: 2025-10-02 12:43:04.261 2 DEBUG nova.compute.manager [req-d519a9fa-5094-4832-b0aa-b57cf04b9809 req-ef410863-81b9-491a-b0d2-830e05701e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Processing event network-vif-plugged-3de79762-7d07-45e3-b66d-38b20be62257 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:43:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:04.894 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:43:04 np0005466030 nova_compute[230518]: 2025-10-02 12:43:04.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:04.896 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.015 2 DEBUG nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.017 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408985.0150387, 26db575f-26df-4e1b-b0d8-38a12df557e3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.017 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] VM Started (Lifecycle Event)#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.020 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.025 2 INFO nova.virt.libvirt.driver [-] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Instance spawned successfully.#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.025 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.287 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.291 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.392 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.392 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.393 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.393 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.394 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.394 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.479 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.479 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408985.016665, 26db575f-26df-4e1b-b0d8-38a12df557e3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.479 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:43:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:05.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:05.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.578 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.584 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408985.019841, 26db575f-26df-4e1b-b0d8-38a12df557e3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.584 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.678 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.681 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.863 2 INFO nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Took 11.43 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.864 2 DEBUG nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:43:05 np0005466030 nova_compute[230518]: 2025-10-02 12:43:05.897 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:43:06 np0005466030 nova_compute[230518]: 2025-10-02 12:43:06.000 2 INFO nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Took 12.47 seconds to build instance.#033[00m
Oct  2 08:43:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e278 e278: 3 total, 3 up, 3 in
Oct  2 08:43:06 np0005466030 nova_compute[230518]: 2025-10-02 12:43:06.423 2 DEBUG nova.compute.manager [req-07e620fc-2205-4715-a2de-54ea282787fa req-fa0b9d19-153f-4b8c-a33c-5cf39445db53 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received event network-vif-plugged-3de79762-7d07-45e3-b66d-38b20be62257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:06 np0005466030 nova_compute[230518]: 2025-10-02 12:43:06.423 2 DEBUG oslo_concurrency.lockutils [req-07e620fc-2205-4715-a2de-54ea282787fa req-fa0b9d19-153f-4b8c-a33c-5cf39445db53 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:06 np0005466030 nova_compute[230518]: 2025-10-02 12:43:06.424 2 DEBUG oslo_concurrency.lockutils [req-07e620fc-2205-4715-a2de-54ea282787fa req-fa0b9d19-153f-4b8c-a33c-5cf39445db53 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:06 np0005466030 nova_compute[230518]: 2025-10-02 12:43:06.424 2 DEBUG oslo_concurrency.lockutils [req-07e620fc-2205-4715-a2de-54ea282787fa req-fa0b9d19-153f-4b8c-a33c-5cf39445db53 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:06 np0005466030 nova_compute[230518]: 2025-10-02 12:43:06.424 2 DEBUG nova.compute.manager [req-07e620fc-2205-4715-a2de-54ea282787fa req-fa0b9d19-153f-4b8c-a33c-5cf39445db53 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] No waiting events found dispatching network-vif-plugged-3de79762-7d07-45e3-b66d-38b20be62257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:43:06 np0005466030 nova_compute[230518]: 2025-10-02 12:43:06.424 2 WARNING nova.compute.manager [req-07e620fc-2205-4715-a2de-54ea282787fa req-fa0b9d19-153f-4b8c-a33c-5cf39445db53 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received unexpected event network-vif-plugged-3de79762-7d07-45e3-b66d-38b20be62257 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:43:06 np0005466030 nova_compute[230518]: 2025-10-02 12:43:06.426 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:06 np0005466030 podman[275654]: 2025-10-02 12:43:06.803134796 +0000 UTC m=+0.054508275 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 08:43:06 np0005466030 podman[275653]: 2025-10-02 12:43:06.828237875 +0000 UTC m=+0.078763508 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:43:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:43:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:07.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:43:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:07.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:07 np0005466030 nova_compute[230518]: 2025-10-02 12:43:07.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:07.899 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:08 np0005466030 nova_compute[230518]: 2025-10-02 12:43:08.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:08 np0005466030 nova_compute[230518]: 2025-10-02 12:43:08.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:43:08 np0005466030 nova_compute[230518]: 2025-10-02 12:43:08.189 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Skipping network cache update for instance because it has been migrated to another host. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9902#033[00m
Oct  2 08:43:08 np0005466030 nova_compute[230518]: 2025-10-02 12:43:08.190 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:43:08 np0005466030 nova_compute[230518]: 2025-10-02 12:43:08.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:43:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:09.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:43:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:43:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:09.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:43:10 np0005466030 nova_compute[230518]: 2025-10-02 12:43:10.534 2 DEBUG nova.compute.manager [req-0235a4c3-6f33-411d-a8c8-ec91cf181d8f req-7866f020-b966-4e56-ae69-6b85b1f78b59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received event network-changed-3de79762-7d07-45e3-b66d-38b20be62257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:10 np0005466030 nova_compute[230518]: 2025-10-02 12:43:10.535 2 DEBUG nova.compute.manager [req-0235a4c3-6f33-411d-a8c8-ec91cf181d8f req-7866f020-b966-4e56-ae69-6b85b1f78b59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Refreshing instance network info cache due to event network-changed-3de79762-7d07-45e3-b66d-38b20be62257. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:43:10 np0005466030 nova_compute[230518]: 2025-10-02 12:43:10.536 2 DEBUG oslo_concurrency.lockutils [req-0235a4c3-6f33-411d-a8c8-ec91cf181d8f req-7866f020-b966-4e56-ae69-6b85b1f78b59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:43:10 np0005466030 nova_compute[230518]: 2025-10-02 12:43:10.536 2 DEBUG oslo_concurrency.lockutils [req-0235a4c3-6f33-411d-a8c8-ec91cf181d8f req-7866f020-b966-4e56-ae69-6b85b1f78b59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:43:10 np0005466030 nova_compute[230518]: 2025-10-02 12:43:10.536 2 DEBUG nova.network.neutron [req-0235a4c3-6f33-411d-a8c8-ec91cf181d8f req-7866f020-b966-4e56-ae69-6b85b1f78b59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Refreshing network info cache for port 3de79762-7d07-45e3-b66d-38b20be62257 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:43:10 np0005466030 ovn_controller[129257]: 2025-10-02T12:43:10Z|00484|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct  2 08:43:10 np0005466030 nova_compute[230518]: 2025-10-02 12:43:10.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:43:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:11.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:43:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:11.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:12 np0005466030 nova_compute[230518]: 2025-10-02 12:43:12.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e279 e279: 3 total, 3 up, 3 in
Oct  2 08:43:13 np0005466030 nova_compute[230518]: 2025-10-02 12:43:13.317 2 DEBUG nova.network.neutron [req-0235a4c3-6f33-411d-a8c8-ec91cf181d8f req-7866f020-b966-4e56-ae69-6b85b1f78b59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Updated VIF entry in instance network info cache for port 3de79762-7d07-45e3-b66d-38b20be62257. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:43:13 np0005466030 nova_compute[230518]: 2025-10-02 12:43:13.318 2 DEBUG nova.network.neutron [req-0235a4c3-6f33-411d-a8c8-ec91cf181d8f req-7866f020-b966-4e56-ae69-6b85b1f78b59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Updating instance_info_cache with network_info: [{"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:13 np0005466030 nova_compute[230518]: 2025-10-02 12:43:13.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:13 np0005466030 nova_compute[230518]: 2025-10-02 12:43:13.498 2 DEBUG oslo_concurrency.lockutils [req-0235a4c3-6f33-411d-a8c8-ec91cf181d8f req-7866f020-b966-4e56-ae69-6b85b1f78b59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:43:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:13.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:43:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:13.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:43:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:43:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:15.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:43:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:43:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:15.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:43:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:43:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:17.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:43:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:17.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:17 np0005466030 nova_compute[230518]: 2025-10-02 12:43:17.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:18 np0005466030 nova_compute[230518]: 2025-10-02 12:43:18.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:18 np0005466030 podman[275698]: 2025-10-02 12:43:18.820757445 +0000 UTC m=+0.066686838 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct  2 08:43:18 np0005466030 podman[275699]: 2025-10-02 12:43:18.830978766 +0000 UTC m=+0.076753114 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.license=GPLv2)
Oct  2 08:43:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:19.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:19.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:21.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:21.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 e280: 3 total, 3 up, 3 in
Oct  2 08:43:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:22 np0005466030 nova_compute[230518]: 2025-10-02 12:43:22.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:22 np0005466030 nova_compute[230518]: 2025-10-02 12:43:22.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:23 np0005466030 nova_compute[230518]: 2025-10-02 12:43:23.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:43:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:23.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:23 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:23.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:43:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:25.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:43:25 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:25.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:43:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:25.939 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:25.940 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:25.940 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:26 np0005466030 nova_compute[230518]: 2025-10-02 12:43:26.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:27.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:27.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:27 np0005466030 nova_compute[230518]: 2025-10-02 12:43:27.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:28 np0005466030 nova_compute[230518]: 2025-10-02 12:43:28.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:43:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:29.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:29 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:29.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:43:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:31.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:31 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:31.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:32 np0005466030 nova_compute[230518]: 2025-10-02 12:43:32.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:33 np0005466030 nova_compute[230518]: 2025-10-02 12:43:33.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:33.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:43:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:33.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:43:34 np0005466030 nova_compute[230518]: 2025-10-02 12:43:34.581 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:34 np0005466030 nova_compute[230518]: 2025-10-02 12:43:34.581 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:34 np0005466030 nova_compute[230518]: 2025-10-02 12:43:34.601 2 DEBUG nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:43:34 np0005466030 nova_compute[230518]: 2025-10-02 12:43:34.677 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:34 np0005466030 nova_compute[230518]: 2025-10-02 12:43:34.677 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:34 np0005466030 nova_compute[230518]: 2025-10-02 12:43:34.686 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:43:34 np0005466030 nova_compute[230518]: 2025-10-02 12:43:34.687 2 INFO nova.compute.claims [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:43:34 np0005466030 nova_compute[230518]: 2025-10-02 12:43:34.841 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:35.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:43:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:35 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:35.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:43:35 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/469806506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:43:35 np0005466030 nova_compute[230518]: 2025-10-02 12:43:35.807 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.966s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:35 np0005466030 nova_compute[230518]: 2025-10-02 12:43:35.816 2 DEBUG nova.compute.provider_tree [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:43:35 np0005466030 nova_compute[230518]: 2025-10-02 12:43:35.851 2 DEBUG nova.scheduler.client.report [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:43:35 np0005466030 nova_compute[230518]: 2025-10-02 12:43:35.871 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:35 np0005466030 nova_compute[230518]: 2025-10-02 12:43:35.872 2 DEBUG nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:43:35 np0005466030 nova_compute[230518]: 2025-10-02 12:43:35.926 2 DEBUG nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:43:35 np0005466030 nova_compute[230518]: 2025-10-02 12:43:35.927 2 DEBUG nova.network.neutron [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:43:35 np0005466030 nova_compute[230518]: 2025-10-02 12:43:35.955 2 INFO nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:43:35 np0005466030 nova_compute[230518]: 2025-10-02 12:43:35.982 2 DEBUG nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.048 2 INFO nova.virt.block_device [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Booting with volume fdc5e1d9-2228-4ec0-a6bb-8605f6207831 at /dev/vda#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.170 2 DEBUG os_brick.utils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.171 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.192 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.192 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[7340b5d6-2513-434f-8e66-8f909e371570]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.194 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.200 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.201 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[d92e4a3c-d257-4edc-871c-a11c41d3ab33]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.203 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.209 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.209 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[42ce9ee4-4ec1-45b4-88d8-df701b2a5ea8]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.211 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[84beef6c-ed76-4621-822d-58069b345397]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.212 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.245 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.248 2 DEBUG os_brick.initiator.connectors.lightos [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.248 2 DEBUG os_brick.initiator.connectors.lightos [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.248 2 DEBUG os_brick.initiator.connectors.lightos [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.249 2 DEBUG os_brick.utils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] <== get_connector_properties: return (78ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.249 2 DEBUG nova.virt.block_device [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating existing volume attachment record: 01c1187a-4578-4575-a086-d9c200afe7f4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:43:36 np0005466030 nova_compute[230518]: 2025-10-02 12:43:36.285 2 DEBUG nova.policy [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '17a0940c9daf48ac8cfa6c3e56d0e39c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '88141e38aa2347299e7ab249431ef68c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:43:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:37 np0005466030 nova_compute[230518]: 2025-10-02 12:43:37.427 2 DEBUG nova.network.neutron [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Successfully created port: ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:43:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:37.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:37.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:37 np0005466030 nova_compute[230518]: 2025-10-02 12:43:37.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:37 np0005466030 podman[275766]: 2025-10-02 12:43:37.836757389 +0000 UTC m=+0.082553057 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:43:37 np0005466030 podman[275765]: 2025-10-02 12:43:37.876489188 +0000 UTC m=+0.125261289 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct  2 08:43:38 np0005466030 nova_compute[230518]: 2025-10-02 12:43:38.055 2 DEBUG nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:43:38 np0005466030 nova_compute[230518]: 2025-10-02 12:43:38.057 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:43:38 np0005466030 nova_compute[230518]: 2025-10-02 12:43:38.057 2 INFO nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Creating image(s)#033[00m
Oct  2 08:43:38 np0005466030 nova_compute[230518]: 2025-10-02 12:43:38.057 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:43:38 np0005466030 nova_compute[230518]: 2025-10-02 12:43:38.058 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Ensure instance console log exists: /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:43:38 np0005466030 nova_compute[230518]: 2025-10-02 12:43:38.058 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:38 np0005466030 nova_compute[230518]: 2025-10-02 12:43:38.058 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:38 np0005466030 nova_compute[230518]: 2025-10-02 12:43:38.058 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:38 np0005466030 nova_compute[230518]: 2025-10-02 12:43:38.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:38 np0005466030 nova_compute[230518]: 2025-10-02 12:43:38.573 2 DEBUG nova.network.neutron [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Successfully updated port: ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:43:38 np0005466030 nova_compute[230518]: 2025-10-02 12:43:38.592 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:43:38 np0005466030 nova_compute[230518]: 2025-10-02 12:43:38.592 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquired lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:43:38 np0005466030 nova_compute[230518]: 2025-10-02 12:43:38.593 2 DEBUG nova.network.neutron [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:43:38 np0005466030 nova_compute[230518]: 2025-10-02 12:43:38.656 2 DEBUG nova.compute.manager [req-c41532a1-8d95-4960-a057-28b768001f47 req-e78f8b62-fbde-417e-8d2c-999170bd39bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-changed-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:38 np0005466030 nova_compute[230518]: 2025-10-02 12:43:38.657 2 DEBUG nova.compute.manager [req-c41532a1-8d95-4960-a057-28b768001f47 req-e78f8b62-fbde-417e-8d2c-999170bd39bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Refreshing instance network info cache due to event network-changed-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:43:38 np0005466030 nova_compute[230518]: 2025-10-02 12:43:38.657 2 DEBUG oslo_concurrency.lockutils [req-c41532a1-8d95-4960-a057-28b768001f47 req-e78f8b62-fbde-417e-8d2c-999170bd39bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:43:38 np0005466030 nova_compute[230518]: 2025-10-02 12:43:38.771 2 DEBUG nova.network.neutron [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:43:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:43:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:39.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:39 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:39.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:43:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:43:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.839 2 DEBUG nova.network.neutron [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating instance_info_cache with network_info: [{"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.878 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Releasing lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.878 2 DEBUG nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Instance network_info: |[{"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.878 2 DEBUG oslo_concurrency.lockutils [req-c41532a1-8d95-4960-a057-28b768001f47 req-e78f8b62-fbde-417e-8d2c-999170bd39bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.879 2 DEBUG nova.network.neutron [req-c41532a1-8d95-4960-a057-28b768001f47 req-e78f8b62-fbde-417e-8d2c-999170bd39bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Refreshing network info cache for port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.882 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Start _get_guest_xml network_info=[{"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'delete_on_termination': True, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-fdc5e1d9-2228-4ec0-a6bb-8605f6207831', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'fdc5e1d9-2228-4ec0-a6bb-8605f6207831', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '173830cb-12bb-4e1a-ba80-088da01ad107', 'attached_at': '', 'detached_at': '', 'volume_id': 'fdc5e1d9-2228-4ec0-a6bb-8605f6207831', 'serial': 'fdc5e1d9-2228-4ec0-a6bb-8605f6207831'}, 'boot_index': 0, 'attachment_id': '01c1187a-4578-4575-a086-d9c200afe7f4', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.886 2 WARNING nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.891 2 DEBUG nova.virt.libvirt.host [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.892 2 DEBUG nova.virt.libvirt.host [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.894 2 DEBUG nova.virt.libvirt.host [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.894 2 DEBUG nova.virt.libvirt.host [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.895 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.896 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.896 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.896 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.897 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.897 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.897 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.897 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.898 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.898 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.898 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:43:39 np0005466030 nova_compute[230518]: 2025-10-02 12:43:39.898 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.032 2 DEBUG nova.storage.rbd_utils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 173830cb-12bb-4e1a-ba80-088da01ad107_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.036 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:43:40 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3663326302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.580 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.603 2 DEBUG nova.virt.libvirt.vif [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:43:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-716874932',display_name='tempest-ServerActionsTestOtherA-server-716874932',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-716874932',id=119,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJfI3E6popMNkSBH55JIIn+lxst+AgI5WbB+1D21g23xZC45mHZNKzJ1YzOQWfrILexv9zpuq5SLJQ8J6YEjTv4RhaLBgROGziYLwwgHom1wen0CDri217As6wNRpnqZsg==',key_name='tempest-keypair-1292637923',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-cekdvg44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=173830cb-12bb-4e1a-ba80-088da01ad107,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.604 2 DEBUG nova.network.os_vif_util [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.605 2 DEBUG nova.network.os_vif_util [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.606 2 DEBUG nova.objects.instance [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'pci_devices' on Instance uuid 173830cb-12bb-4e1a-ba80-088da01ad107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.622 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:43:40 np0005466030 nova_compute[230518]:  <uuid>173830cb-12bb-4e1a-ba80-088da01ad107</uuid>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:  <name>instance-00000077</name>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerActionsTestOtherA-server-716874932</nova:name>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:43:39</nova:creationTime>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:43:40 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:        <nova:user uuid="17a0940c9daf48ac8cfa6c3e56d0e39c">tempest-ServerActionsTestOtherA-1849713132-project-member</nova:user>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:        <nova:project uuid="88141e38aa2347299e7ab249431ef68c">tempest-ServerActionsTestOtherA-1849713132</nova:project>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:        <nova:port uuid="ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb">
Oct  2 08:43:40 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <entry name="serial">173830cb-12bb-4e1a-ba80-088da01ad107</entry>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <entry name="uuid">173830cb-12bb-4e1a-ba80-088da01ad107</entry>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/173830cb-12bb-4e1a-ba80-088da01ad107_disk.config">
Oct  2 08:43:40 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:43:40 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="volumes/volume-fdc5e1d9-2228-4ec0-a6bb-8605f6207831">
Oct  2 08:43:40 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:43:40 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <serial>fdc5e1d9-2228-4ec0-a6bb-8605f6207831</serial>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:d6:35:d6"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <target dev="tapea4a4acf-33"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/console.log" append="off"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:43:40 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:43:40 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:43:40 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:43:40 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.624 2 DEBUG nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Preparing to wait for external event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.624 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.624 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.624 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.625 2 DEBUG nova.virt.libvirt.vif [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:43:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-716874932',display_name='tempest-ServerActionsTestOtherA-server-716874932',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-716874932',id=119,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJfI3E6popMNkSBH55JIIn+lxst+AgI5WbB+1D21g23xZC45mHZNKzJ1YzOQWfrILexv9zpuq5SLJQ8J6YEjTv4RhaLBgROGziYLwwgHom1wen0CDri217As6wNRpnqZsg==',key_name='tempest-keypair-1292637923',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-cekdvg44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=173830cb-12bb-4e1a-ba80-088da01ad107,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.625 2 DEBUG nova.network.os_vif_util [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.626 2 DEBUG nova.network.os_vif_util [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.626 2 DEBUG os_vif [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.627 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.627 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.630 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea4a4acf-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.630 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapea4a4acf-33, col_values=(('external_ids', {'iface-id': 'ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:35:d6', 'vm-uuid': '173830cb-12bb-4e1a-ba80-088da01ad107'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:40 np0005466030 NetworkManager[44960]: <info>  [1759409020.6330] manager: (tapea4a4acf-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.640 2 INFO os_vif [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33')#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.845 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.846 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.846 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No VIF found with MAC fa:16:3e:d6:35:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.846 2 INFO nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Using config drive#033[00m
Oct  2 08:43:40 np0005466030 nova_compute[230518]: 2025-10-02 12:43:40.905 2 DEBUG nova.storage.rbd_utils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 173830cb-12bb-4e1a-ba80-088da01ad107_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:41.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:43:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:41 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:41.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #91. Immutable memtables: 0.
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:41.834218) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 91
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409021834255, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 2193, "num_deletes": 263, "total_data_size": 4801876, "memory_usage": 4856784, "flush_reason": "Manual Compaction"}
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #92: started
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409021862130, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 92, "file_size": 3143689, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46825, "largest_seqno": 49013, "table_properties": {"data_size": 3134738, "index_size": 5509, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19614, "raw_average_key_size": 20, "raw_value_size": 3116370, "raw_average_value_size": 3290, "num_data_blocks": 239, "num_entries": 947, "num_filter_entries": 947, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408858, "oldest_key_time": 1759408858, "file_creation_time": 1759409021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 27972 microseconds, and 6566 cpu microseconds.
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:41.862185) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #92: 3143689 bytes OK
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:41.862209) [db/memtable_list.cc:519] [default] Level-0 commit table #92 started
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:41.865988) [db/memtable_list.cc:722] [default] Level-0 commit table #92: memtable #1 done
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:41.866003) EVENT_LOG_v1 {"time_micros": 1759409021865997, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:41.866022) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 4791890, prev total WAL file size 4791890, number of live WAL files 2.
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000088.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:41.867400) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353039' seq:72057594037927935, type:22 .. '6C6F676D0031373631' seq:0, type:0; will stop at (end)
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [92(3070KB)], [90(10MB)]
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409021867450, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [92], "files_L6": [90], "score": -1, "input_data_size": 14586912, "oldest_snapshot_seqno": -1}
Oct  2 08:43:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #93: 7541 keys, 14437822 bytes, temperature: kUnknown
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022036469, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 93, "file_size": 14437822, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14383308, "index_size": 34562, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18885, "raw_key_size": 193517, "raw_average_key_size": 25, "raw_value_size": 14244711, "raw_average_value_size": 1888, "num_data_blocks": 1379, "num_entries": 7541, "num_filter_entries": 7541, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 93, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.036689) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 14437822 bytes
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.080984) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 86.3 rd, 85.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.9 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(9.2) write-amplify(4.6) OK, records in: 8081, records dropped: 540 output_compression: NoCompression
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.081027) EVENT_LOG_v1 {"time_micros": 1759409022081010, "job": 56, "event": "compaction_finished", "compaction_time_micros": 169087, "compaction_time_cpu_micros": 54488, "output_level": 6, "num_output_files": 1, "total_output_size": 14437822, "num_input_records": 8081, "num_output_records": 7541, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022081929, "job": 56, "event": "table_file_deletion", "file_number": 92}
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000090.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022084496, "job": 56, "event": "table_file_deletion", "file_number": 90}
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:41.867194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.084540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.084546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.084610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.084622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.084624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #94. Immutable memtables: 0.
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.178465) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 94
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022178528, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 256, "num_deletes": 250, "total_data_size": 23018, "memory_usage": 28664, "flush_reason": "Manual Compaction"}
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #95: started
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022184678, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 95, "file_size": 13846, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49015, "largest_seqno": 49269, "table_properties": {"data_size": 12094, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 5124, "raw_average_key_size": 20, "raw_value_size": 8697, "raw_average_value_size": 34, "num_data_blocks": 2, "num_entries": 255, "num_filter_entries": 255, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409022, "oldest_key_time": 1759409022, "file_creation_time": 1759409022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 6255 microseconds, and 881 cpu microseconds.
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.184730) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #95: 13846 bytes OK
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.184750) [db/memtable_list.cc:519] [default] Level-0 commit table #95 started
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.187753) [db/memtable_list.cc:722] [default] Level-0 commit table #95: memtable #1 done
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.187769) EVENT_LOG_v1 {"time_micros": 1759409022187764, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.187785) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 21000, prev total WAL file size 21000, number of live WAL files 2.
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000091.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.188127) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353033' seq:72057594037927935, type:22 .. '6D6772737461740031373534' seq:0, type:0; will stop at (end)
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [95(13KB)], [93(13MB)]
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022188185, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [95], "files_L6": [93], "score": -1, "input_data_size": 14451668, "oldest_snapshot_seqno": -1}
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #96: 7292 keys, 10623149 bytes, temperature: kUnknown
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022282197, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 96, "file_size": 10623149, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10575226, "index_size": 28611, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 188535, "raw_average_key_size": 25, "raw_value_size": 10445844, "raw_average_value_size": 1432, "num_data_blocks": 1130, "num_entries": 7292, "num_filter_entries": 7292, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 96, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.282593) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10623149 bytes
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.288344) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.5 rd, 112.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 13.8 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(1811.0) write-amplify(767.2) OK, records in: 7796, records dropped: 504 output_compression: NoCompression
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.288377) EVENT_LOG_v1 {"time_micros": 1759409022288363, "job": 58, "event": "compaction_finished", "compaction_time_micros": 94174, "compaction_time_cpu_micros": 44347, "output_level": 6, "num_output_files": 1, "total_output_size": 10623149, "num_input_records": 7796, "num_output_records": 7292, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022288542, "job": 58, "event": "table_file_deletion", "file_number": 95}
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000093.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022293208, "job": 58, "event": "table_file_deletion", "file_number": 93}
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.188057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.293342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.293348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.293351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.293354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:42 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.293357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:43:42 np0005466030 nova_compute[230518]: 2025-10-02 12:43:42.348 2 INFO nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Creating config drive at /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/disk.config#033[00m
Oct  2 08:43:42 np0005466030 nova_compute[230518]: 2025-10-02 12:43:42.356 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz24ci6es execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:42 np0005466030 nova_compute[230518]: 2025-10-02 12:43:42.499 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz24ci6es" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:42 np0005466030 nova_compute[230518]: 2025-10-02 12:43:42.538 2 DEBUG nova.storage.rbd_utils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 173830cb-12bb-4e1a-ba80-088da01ad107_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:43:42 np0005466030 nova_compute[230518]: 2025-10-02 12:43:42.543 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/disk.config 173830cb-12bb-4e1a-ba80-088da01ad107_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:43 np0005466030 nova_compute[230518]: 2025-10-02 12:43:43.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:43 np0005466030 nova_compute[230518]: 2025-10-02 12:43:43.480 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/disk.config 173830cb-12bb-4e1a-ba80-088da01ad107_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.937s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:43 np0005466030 nova_compute[230518]: 2025-10-02 12:43:43.480 2 INFO nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Deleting local config drive /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/disk.config because it was imported into RBD.#033[00m
Oct  2 08:43:43 np0005466030 kernel: tapea4a4acf-33: entered promiscuous mode
Oct  2 08:43:43 np0005466030 NetworkManager[44960]: <info>  [1759409023.5444] manager: (tapea4a4acf-33): new Tun device (/org/freedesktop/NetworkManager/Devices/231)
Oct  2 08:43:43 np0005466030 nova_compute[230518]: 2025-10-02 12:43:43.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:43 np0005466030 ovn_controller[129257]: 2025-10-02T12:43:43Z|00485|binding|INFO|Claiming lport ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for this chassis.
Oct  2 08:43:43 np0005466030 ovn_controller[129257]: 2025-10-02T12:43:43Z|00486|binding|INFO|ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb: Claiming fa:16:3e:d6:35:d6 10.100.0.13
Oct  2 08:43:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.556 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:35:d6 10.100.0.13'], port_security=['fa:16:3e:d6:35:d6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '173830cb-12bb-4e1a-ba80-088da01ad107', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'da6daf73-7b18-4ff6-8a16-e2a94d642e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:43:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.557 138374 INFO neutron.agent.ovn.metadata.agent [-] Port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b bound to our chassis#033[00m
Oct  2 08:43:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.559 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3643647-7cd9-4c43-8aaa-9b0f3160274b#033[00m
Oct  2 08:43:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:43:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:43.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:43 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:43.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:43 np0005466030 ovn_controller[129257]: 2025-10-02T12:43:43Z|00487|binding|INFO|Setting lport ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb ovn-installed in OVS
Oct  2 08:43:43 np0005466030 ovn_controller[129257]: 2025-10-02T12:43:43Z|00488|binding|INFO|Setting lport ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb up in Southbound
Oct  2 08:43:43 np0005466030 nova_compute[230518]: 2025-10-02 12:43:43.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:43 np0005466030 systemd-udevd[276053]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:43:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.579 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c49a3bef-a3dc-4f77-92b2-76460328680f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:43 np0005466030 nova_compute[230518]: 2025-10-02 12:43:43.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:43 np0005466030 systemd-machined[188247]: New machine qemu-58-instance-00000077.
Oct  2 08:43:43 np0005466030 NetworkManager[44960]: <info>  [1759409023.5972] device (tapea4a4acf-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:43:43 np0005466030 NetworkManager[44960]: <info>  [1759409023.5980] device (tapea4a4acf-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:43:43 np0005466030 systemd[1]: Started Virtual Machine qemu-58-instance-00000077.
Oct  2 08:43:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.623 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b1c726-83c7-456e-a4d0-b0f86fd8bb80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.626 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a73d6b-cf58-424e-8647-efb5d3f60cd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.653 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[85223aae-efae-4cdf-824e-9fa8e532c19c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.670 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2e0f93-9c82-48af-bff0-7d48470a8f81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662992, 'reachable_time': 17721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276066, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.691 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6c28ee-88cb-4e9b-be17-652814f230bd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3643647-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663002, 'tstamp': 663002}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276068, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3643647-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663005, 'tstamp': 663005}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276068, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:43:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.693 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:43 np0005466030 nova_compute[230518]: 2025-10-02 12:43:43.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:43 np0005466030 nova_compute[230518]: 2025-10-02 12:43:43.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.696 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3643647-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.697 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:43:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.697 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3643647-70, col_values=(('external_ids', {'iface-id': '7b6dc1a1-1a58-45bd-84bb-97328397bf1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:43:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.698 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:43:44 np0005466030 nova_compute[230518]: 2025-10-02 12:43:44.363 2 DEBUG nova.network.neutron [req-c41532a1-8d95-4960-a057-28b768001f47 req-e78f8b62-fbde-417e-8d2c-999170bd39bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updated VIF entry in instance network info cache for port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:43:44 np0005466030 nova_compute[230518]: 2025-10-02 12:43:44.364 2 DEBUG nova.network.neutron [req-c41532a1-8d95-4960-a057-28b768001f47 req-e78f8b62-fbde-417e-8d2c-999170bd39bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating instance_info_cache with network_info: [{"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:44 np0005466030 nova_compute[230518]: 2025-10-02 12:43:44.411 2 DEBUG oslo_concurrency.lockutils [req-c41532a1-8d95-4960-a057-28b768001f47 req-e78f8b62-fbde-417e-8d2c-999170bd39bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:43:44 np0005466030 nova_compute[230518]: 2025-10-02 12:43:44.967 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409024.9670033, 173830cb-12bb-4e1a-ba80-088da01ad107 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:43:44 np0005466030 nova_compute[230518]: 2025-10-02 12:43:44.968 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] VM Started (Lifecycle Event)#033[00m
Oct  2 08:43:44 np0005466030 nova_compute[230518]: 2025-10-02 12:43:44.996 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.001 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409024.9676995, 173830cb-12bb-4e1a-ba80-088da01ad107 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.001 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.020 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.023 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.045 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:43:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:45.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:45.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.631 2 DEBUG nova.compute.manager [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.632 2 DEBUG oslo_concurrency.lockutils [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.632 2 DEBUG oslo_concurrency.lockutils [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.633 2 DEBUG oslo_concurrency.lockutils [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.633 2 DEBUG nova.compute.manager [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Processing event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.634 2 DEBUG nova.compute.manager [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.634 2 DEBUG oslo_concurrency.lockutils [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.635 2 DEBUG oslo_concurrency.lockutils [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.635 2 DEBUG oslo_concurrency.lockutils [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.636 2 DEBUG nova.compute.manager [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] No waiting events found dispatching network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.636 2 WARNING nova.compute.manager [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received unexpected event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.638 2 DEBUG nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.642 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409025.6414685, 173830cb-12bb-4e1a-ba80-088da01ad107 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.643 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.646 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.649 2 INFO nova.virt.libvirt.driver [-] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Instance spawned successfully.#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.650 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.669 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.676 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.681 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.681 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.682 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.682 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.682 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.683 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.713 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.749 2 INFO nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Took 7.69 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.750 2 DEBUG nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.832 2 INFO nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Took 11.18 seconds to build instance.#033[00m
Oct  2 08:43:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:43:45 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3103788107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:43:45 np0005466030 nova_compute[230518]: 2025-10-02 12:43:45.854 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:47.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:43:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:47.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:43:48 np0005466030 nova_compute[230518]: 2025-10-02 12:43:48.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:49 np0005466030 nova_compute[230518]: 2025-10-02 12:43:49.432 2 DEBUG nova.compute.manager [req-17d09909-5efc-4533-b2c3-c2fb78a37e65 req-40be3e54-1161-4add-b885-c22f957ecfe2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received event network-changed-8d9cc17a-7804-4743-925a-496d9fe78c73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:49 np0005466030 nova_compute[230518]: 2025-10-02 12:43:49.432 2 DEBUG nova.compute.manager [req-17d09909-5efc-4533-b2c3-c2fb78a37e65 req-40be3e54-1161-4add-b885-c22f957ecfe2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Refreshing instance network info cache due to event network-changed-8d9cc17a-7804-4743-925a-496d9fe78c73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:43:49 np0005466030 nova_compute[230518]: 2025-10-02 12:43:49.432 2 DEBUG oslo_concurrency.lockutils [req-17d09909-5efc-4533-b2c3-c2fb78a37e65 req-40be3e54-1161-4add-b885-c22f957ecfe2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:43:49 np0005466030 nova_compute[230518]: 2025-10-02 12:43:49.433 2 DEBUG oslo_concurrency.lockutils [req-17d09909-5efc-4533-b2c3-c2fb78a37e65 req-40be3e54-1161-4add-b885-c22f957ecfe2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:43:49 np0005466030 nova_compute[230518]: 2025-10-02 12:43:49.433 2 DEBUG nova.network.neutron [req-17d09909-5efc-4533-b2c3-c2fb78a37e65 req-40be3e54-1161-4add-b885-c22f957ecfe2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Refreshing network info cache for port 8d9cc17a-7804-4743-925a-496d9fe78c73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:43:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:49.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:49.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:49 np0005466030 podman[276112]: 2025-10-02 12:43:49.823307876 +0000 UTC m=+0.071831629 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:43:49 np0005466030 podman[276111]: 2025-10-02 12:43:49.836406079 +0000 UTC m=+0.088821645 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:43:50 np0005466030 nova_compute[230518]: 2025-10-02 12:43:50.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:50 np0005466030 nova_compute[230518]: 2025-10-02 12:43:50.850 2 DEBUG nova.network.neutron [req-17d09909-5efc-4533-b2c3-c2fb78a37e65 req-40be3e54-1161-4add-b885-c22f957ecfe2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updated VIF entry in instance network info cache for port 8d9cc17a-7804-4743-925a-496d9fe78c73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:43:50 np0005466030 nova_compute[230518]: 2025-10-02 12:43:50.851 2 DEBUG nova.network.neutron [req-17d09909-5efc-4533-b2c3-c2fb78a37e65 req-40be3e54-1161-4add-b885-c22f957ecfe2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updating instance_info_cache with network_info: [{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:50 np0005466030 nova_compute[230518]: 2025-10-02 12:43:50.877 2 DEBUG oslo_concurrency.lockutils [req-17d09909-5efc-4533-b2c3-c2fb78a37e65 req-40be3e54-1161-4add-b885-c22f957ecfe2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:43:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:51.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:51.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:51 np0005466030 nova_compute[230518]: 2025-10-02 12:43:51.708 2 DEBUG nova.compute.manager [req-9cef4459-e67b-440f-ad7e-7af8119a4a7e req-f6decb2f-f8e0-469b-9d70-8a821384fdb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-changed-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:43:51 np0005466030 nova_compute[230518]: 2025-10-02 12:43:51.709 2 DEBUG nova.compute.manager [req-9cef4459-e67b-440f-ad7e-7af8119a4a7e req-f6decb2f-f8e0-469b-9d70-8a821384fdb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Refreshing instance network info cache due to event network-changed-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:43:51 np0005466030 nova_compute[230518]: 2025-10-02 12:43:51.709 2 DEBUG oslo_concurrency.lockutils [req-9cef4459-e67b-440f-ad7e-7af8119a4a7e req-f6decb2f-f8e0-469b-9d70-8a821384fdb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:43:51 np0005466030 nova_compute[230518]: 2025-10-02 12:43:51.709 2 DEBUG oslo_concurrency.lockutils [req-9cef4459-e67b-440f-ad7e-7af8119a4a7e req-f6decb2f-f8e0-469b-9d70-8a821384fdb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:43:51 np0005466030 nova_compute[230518]: 2025-10-02 12:43:51.710 2 DEBUG nova.network.neutron [req-9cef4459-e67b-440f-ad7e-7af8119a4a7e req-f6decb2f-f8e0-469b-9d70-8a821384fdb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Refreshing network info cache for port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:43:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:53 np0005466030 nova_compute[230518]: 2025-10-02 12:43:53.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:53.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:53.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:55 np0005466030 nova_compute[230518]: 2025-10-02 12:43:55.277 2 DEBUG nova.network.neutron [req-9cef4459-e67b-440f-ad7e-7af8119a4a7e req-f6decb2f-f8e0-469b-9d70-8a821384fdb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updated VIF entry in instance network info cache for port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:43:55 np0005466030 nova_compute[230518]: 2025-10-02 12:43:55.277 2 DEBUG nova.network.neutron [req-9cef4459-e67b-440f-ad7e-7af8119a4a7e req-f6decb2f-f8e0-469b-9d70-8a821384fdb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating instance_info_cache with network_info: [{"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:55 np0005466030 nova_compute[230518]: 2025-10-02 12:43:55.350 2 DEBUG oslo_concurrency.lockutils [req-9cef4459-e67b-440f-ad7e-7af8119a4a7e req-f6decb2f-f8e0-469b-9d70-8a821384fdb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:43:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:43:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:43:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:43:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:55.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:55 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:55.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:55 np0005466030 nova_compute[230518]: 2025-10-02 12:43:55.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:55 np0005466030 nova_compute[230518]: 2025-10-02 12:43:55.904 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:43:55 np0005466030 nova_compute[230518]: 2025-10-02 12:43:55.905 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquired lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:43:55 np0005466030 nova_compute[230518]: 2025-10-02 12:43:55.905 2 DEBUG nova.network.neutron [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:43:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:43:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:57.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:57.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.074 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.074 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.075 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.075 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.075 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:43:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:43:58 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/707952787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.537 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.624 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.625 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.630 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.631 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.636 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.637 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.819 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.820 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3845MB free_disk=20.82009506225586GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.820 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.821 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.881 2 INFO nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating resource usage from migration db4a41c6-8a3c-43b7-b0e8-4bf46490cc1d#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.913 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7621a774-e0bc-4f4f-b900-c3608dd6835a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.914 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 26db575f-26df-4e1b-b0d8-38a12df557e3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.914 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Migration db4a41c6-8a3c-43b7-b0e8-4bf46490cc1d is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.914 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.915 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:43:58 np0005466030 nova_compute[230518]: 2025-10-02 12:43:58.997 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:59 np0005466030 nova_compute[230518]: 2025-10-02 12:43:59.305 2 DEBUG nova.network.neutron [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating instance_info_cache with network_info: [{"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:43:59 np0005466030 nova_compute[230518]: 2025-10-02 12:43:59.339 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Releasing lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:43:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:43:59 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1830993686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:43:59 np0005466030 nova_compute[230518]: 2025-10-02 12:43:59.471 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Oct  2 08:43:59 np0005466030 nova_compute[230518]: 2025-10-02 12:43:59.472 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Creating file /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/897b806aef5f49019f206824fe6c29cb.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Oct  2 08:43:59 np0005466030 nova_compute[230518]: 2025-10-02 12:43:59.472 2 DEBUG oslo_concurrency.processutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/897b806aef5f49019f206824fe6c29cb.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:43:59 np0005466030 nova_compute[230518]: 2025-10-02 12:43:59.503 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:59 np0005466030 nova_compute[230518]: 2025-10-02 12:43:59.510 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:43:59 np0005466030 nova_compute[230518]: 2025-10-02 12:43:59.530 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:43:59 np0005466030 nova_compute[230518]: 2025-10-02 12:43:59.559 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:43:59 np0005466030 nova_compute[230518]: 2025-10-02 12:43:59.560 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:43:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:59.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:43:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:43:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:59.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:43:59 np0005466030 nova_compute[230518]: 2025-10-02 12:43:59.949 2 DEBUG oslo_concurrency.processutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/897b806aef5f49019f206824fe6c29cb.tmp" returned: 1 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:43:59 np0005466030 nova_compute[230518]: 2025-10-02 12:43:59.950 2 DEBUG oslo_concurrency.processutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/897b806aef5f49019f206824fe6c29cb.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Oct  2 08:43:59 np0005466030 nova_compute[230518]: 2025-10-02 12:43:59.950 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Creating directory /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Oct  2 08:43:59 np0005466030 nova_compute[230518]: 2025-10-02 12:43:59.950 2 DEBUG oslo_concurrency.processutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:44:00 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct  2 08:44:00 np0005466030 nova_compute[230518]: 2025-10-02 12:44:00.185 2 DEBUG oslo_concurrency.processutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:44:00 np0005466030 nova_compute[230518]: 2025-10-02 12:44:00.188 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:44:00 np0005466030 nova_compute[230518]: 2025-10-02 12:44:00.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:00 np0005466030 ovn_controller[129257]: 2025-10-02T12:44:00Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:35:d6 10.100.0.13
Oct  2 08:44:00 np0005466030 ovn_controller[129257]: 2025-10-02T12:44:00Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:35:d6 10.100.0.13
Oct  2 08:44:01 np0005466030 nova_compute[230518]: 2025-10-02 12:44:01.561 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:01 np0005466030 nova_compute[230518]: 2025-10-02 12:44:01.562 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:01.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:44:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:01.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:44:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:03 np0005466030 nova_compute[230518]: 2025-10-02 12:44:03.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:03 np0005466030 nova_compute[230518]: 2025-10-02 12:44:03.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:44:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:03.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:03 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:03.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:04 np0005466030 nova_compute[230518]: 2025-10-02 12:44:04.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:05 np0005466030 nova_compute[230518]: 2025-10-02 12:44:05.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:44:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3194892722' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:44:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:44:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3194892722' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:44:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:44:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:44:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:44:05 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:05.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:44:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:05.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:44:05 np0005466030 nova_compute[230518]: 2025-10-02 12:44:05.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:06 np0005466030 nova_compute[230518]: 2025-10-02 12:44:06.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:06 np0005466030 nova_compute[230518]: 2025-10-02 12:44:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:06 np0005466030 nova_compute[230518]: 2025-10-02 12:44:06.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #97. Immutable memtables: 0.
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:06.902190) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 97
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409046902225, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 530, "num_deletes": 251, "total_data_size": 755363, "memory_usage": 766512, "flush_reason": "Manual Compaction"}
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #98: started
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409046934107, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 98, "file_size": 498458, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49274, "largest_seqno": 49799, "table_properties": {"data_size": 495592, "index_size": 838, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6983, "raw_average_key_size": 19, "raw_value_size": 489860, "raw_average_value_size": 1356, "num_data_blocks": 36, "num_entries": 361, "num_filter_entries": 361, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409022, "oldest_key_time": 1759409022, "file_creation_time": 1759409046, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 31972 microseconds, and 2332 cpu microseconds.
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:06.934157) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #98: 498458 bytes OK
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:06.934178) [db/memtable_list.cc:519] [default] Level-0 commit table #98 started
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:06.966205) [db/memtable_list.cc:722] [default] Level-0 commit table #98: memtable #1 done
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:06.966248) EVENT_LOG_v1 {"time_micros": 1759409046966239, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:06.966272) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 752206, prev total WAL file size 752206, number of live WAL files 2.
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000094.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:06.966932) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [98(486KB)], [96(10MB)]
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409046967044, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [98], "files_L6": [96], "score": -1, "input_data_size": 11121607, "oldest_snapshot_seqno": -1}
Oct  2 08:44:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:07 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #99: 7140 keys, 9260658 bytes, temperature: kUnknown
Oct  2 08:44:07 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409047084894, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 99, "file_size": 9260658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9214996, "index_size": 26734, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17861, "raw_key_size": 186131, "raw_average_key_size": 26, "raw_value_size": 9089465, "raw_average_value_size": 1273, "num_data_blocks": 1045, "num_entries": 7140, "num_filter_entries": 7140, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409046, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 99, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:44:07 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:44:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:07.085653) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9260658 bytes
Oct  2 08:44:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:07.104326) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 94.3 rd, 78.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.1 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(40.9) write-amplify(18.6) OK, records in: 7653, records dropped: 513 output_compression: NoCompression
Oct  2 08:44:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:07.104360) EVENT_LOG_v1 {"time_micros": 1759409047104347, "job": 60, "event": "compaction_finished", "compaction_time_micros": 117911, "compaction_time_cpu_micros": 44188, "output_level": 6, "num_output_files": 1, "total_output_size": 9260658, "num_input_records": 7653, "num_output_records": 7140, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:44:07 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:44:07 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409047104615, "job": 60, "event": "table_file_deletion", "file_number": 98}
Oct  2 08:44:07 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000096.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:44:07 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409047106240, "job": 60, "event": "table_file_deletion", "file_number": 96}
Oct  2 08:44:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:06.966768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:44:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:07.106310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:44:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:07.106314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:44:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:07.106315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:44:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:07.106317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:44:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:07.106318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:44:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:07.556 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:44:07 np0005466030 nova_compute[230518]: 2025-10-02 12:44:07.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:07 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:07.558 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:44:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:44:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:07.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:07 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:07.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:08 np0005466030 nova_compute[230518]: 2025-10-02 12:44:08.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:08 np0005466030 podman[276250]: 2025-10-02 12:44:08.82702012 +0000 UTC m=+0.074411991 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct  2 08:44:08 np0005466030 podman[276249]: 2025-10-02 12:44:08.834057541 +0000 UTC m=+0.084018682 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:44:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:44:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:44:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:44:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:09.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:44:09 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:09.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:44:10 np0005466030 nova_compute[230518]: 2025-10-02 12:44:10.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:44:10 np0005466030 nova_compute[230518]: 2025-10-02 12:44:10.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:44:10 np0005466030 nova_compute[230518]: 2025-10-02 12:44:10.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:44:10 np0005466030 nova_compute[230518]: 2025-10-02 12:44:10.231 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:44:10 np0005466030 nova_compute[230518]: 2025-10-02 12:44:10.252 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:44:10 np0005466030 nova_compute[230518]: 2025-10-02 12:44:10.253 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:44:10 np0005466030 nova_compute[230518]: 2025-10-02 12:44:10.253 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:44:10 np0005466030 nova_compute[230518]: 2025-10-02 12:44:10.253 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7621a774-e0bc-4f4f-b900-c3608dd6835a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:44:10 np0005466030 nova_compute[230518]: 2025-10-02 12:44:10.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:11.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:11.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:13 np0005466030 nova_compute[230518]: 2025-10-02 12:44:13.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:13 np0005466030 kernel: tapea4a4acf-33 (unregistering): left promiscuous mode
Oct  2 08:44:13 np0005466030 NetworkManager[44960]: <info>  [1759409053.5297] device (tapea4a4acf-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:44:13 np0005466030 nova_compute[230518]: 2025-10-02 12:44:13.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:44:13Z|00489|binding|INFO|Releasing lport ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb from this chassis (sb_readonly=0)
Oct  2 08:44:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:44:13Z|00490|binding|INFO|Setting lport ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb down in Southbound
Oct  2 08:44:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:44:13Z|00491|binding|INFO|Removing iface tapea4a4acf-33 ovn-installed in OVS
Oct  2 08:44:13 np0005466030 nova_compute[230518]: 2025-10-02 12:44:13.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:13 np0005466030 nova_compute[230518]: 2025-10-02 12:44:13.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:13 np0005466030 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000077.scope: Deactivated successfully.
Oct  2 08:44:13 np0005466030 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000077.scope: Consumed 13.964s CPU time.
Oct  2 08:44:13 np0005466030 systemd-machined[188247]: Machine qemu-58-instance-00000077 terminated.
Oct  2 08:44:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:44:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:13.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:44:13 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:13.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:44:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.813 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:35:d6 10.100.0.13'], port_security=['fa:16:3e:d6:35:d6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '173830cb-12bb-4e1a-ba80-088da01ad107', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'da6daf73-7b18-4ff6-8a16-e2a94d642e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:44:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.815 138374 INFO neutron.agent.ovn.metadata.agent [-] Port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b unbound from our chassis#033[00m
Oct  2 08:44:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.818 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3643647-7cd9-4c43-8aaa-9b0f3160274b#033[00m
Oct  2 08:44:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.848 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6bfe5a-ede3-4d65-905f-62d38817ee88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.890 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7a3169ff-f1b6-4c42-bed8-e6e29d6884fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.893 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ec0a2e93-a7ce-4c24-8642-0266ab903021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.919 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[887f96b9-91e5-45e6-8830-f18615a6f3b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.937 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c9b21c-d055-4a53-a5e4-b3d353e9a633]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662992, 'reachable_time': 31164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276315, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.955 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bef14b5d-10e8-4174-874f-82c43848b416]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3643647-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663002, 'tstamp': 663002}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276316, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3643647-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663005, 'tstamp': 663005}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276316, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:44:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.957 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:44:13 np0005466030 nova_compute[230518]: 2025-10-02 12:44:13.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.965 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3643647-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:44:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.965 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:44:13 np0005466030 nova_compute[230518]: 2025-10-02 12:44:13.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.966 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3643647-70, col_values=(('external_ids', {'iface-id': '7b6dc1a1-1a58-45bd-84bb-97328397bf1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:44:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.966 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.252 2 INFO nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Instance shutdown successfully after 14 seconds.#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.263 2 INFO nova.virt.libvirt.driver [-] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Instance destroyed successfully.#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.264 2 DEBUG nova.virt.libvirt.vif [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:43:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-716874932',display_name='tempest-ServerActionsTestOtherA-server-716874932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-716874932',id=119,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJfI3E6popMNkSBH55JIIn+lxst+AgI5WbB+1D21g23xZC45mHZNKzJ1YzOQWfrILexv9zpuq5SLJQ8J6YEjTv4RhaLBgROGziYLwwgHom1wen0CDri217As6wNRpnqZsg==',key_name='tempest-keypair-1292637923',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:43:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-cekdvg44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=173830cb-12bb-4e1a-ba80-088da01ad107,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-497044539-network", "vif_mac": "fa:16:3e:d6:35:d6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.265 2 DEBUG nova.network.os_vif_util [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-497044539-network", "vif_mac": "fa:16:3e:d6:35:d6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.266 2 DEBUG nova.network.os_vif_util [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.267 2 DEBUG os_vif [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.270 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea4a4acf-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.278 2 INFO os_vif [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33')#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.286 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.287 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.586 2 DEBUG nova.compute.manager [req-1708148c-1c63-4514-b09f-c62c2512680e req-388e1cc3-df0c-45cd-9e75-5ca55d0e9a04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-vif-unplugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.586 2 DEBUG oslo_concurrency.lockutils [req-1708148c-1c63-4514-b09f-c62c2512680e req-388e1cc3-df0c-45cd-9e75-5ca55d0e9a04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.587 2 DEBUG oslo_concurrency.lockutils [req-1708148c-1c63-4514-b09f-c62c2512680e req-388e1cc3-df0c-45cd-9e75-5ca55d0e9a04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.587 2 DEBUG oslo_concurrency.lockutils [req-1708148c-1c63-4514-b09f-c62c2512680e req-388e1cc3-df0c-45cd-9e75-5ca55d0e9a04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.587 2 DEBUG nova.compute.manager [req-1708148c-1c63-4514-b09f-c62c2512680e req-388e1cc3-df0c-45cd-9e75-5ca55d0e9a04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] No waiting events found dispatching network-vif-unplugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.587 2 WARNING nova.compute.manager [req-1708148c-1c63-4514-b09f-c62c2512680e req-388e1cc3-df0c-45cd-9e75-5ca55d0e9a04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received unexpected event network-vif-unplugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for instance with vm_state active and task_state resize_migrating.#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.671 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updating instance_info_cache with network_info: [{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.760 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:44:14 np0005466030 nova_compute[230518]: 2025-10-02 12:44:14.760 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:44:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:44:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:15.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:44:15 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:15.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:44:16 np0005466030 nova_compute[230518]: 2025-10-02 12:44:16.260 2 DEBUG neutronclient.v2_0.client [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct  2 08:44:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:16.562 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:44:16 np0005466030 nova_compute[230518]: 2025-10-02 12:44:16.800 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:44:16 np0005466030 nova_compute[230518]: 2025-10-02 12:44:16.801 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:44:16 np0005466030 nova_compute[230518]: 2025-10-02 12:44:16.801 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:44:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:44:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:44:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:17.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:17 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:17.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:44:17 np0005466030 nova_compute[230518]: 2025-10-02 12:44:17.624 2 DEBUG nova.compute.manager [req-dcf08827-691c-40c9-a605-27d44e50076a req-c07bd461-0b36-46f4-8cfa-4d8aa52d0406 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:44:17 np0005466030 nova_compute[230518]: 2025-10-02 12:44:17.625 2 DEBUG oslo_concurrency.lockutils [req-dcf08827-691c-40c9-a605-27d44e50076a req-c07bd461-0b36-46f4-8cfa-4d8aa52d0406 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:44:17 np0005466030 nova_compute[230518]: 2025-10-02 12:44:17.625 2 DEBUG oslo_concurrency.lockutils [req-dcf08827-691c-40c9-a605-27d44e50076a req-c07bd461-0b36-46f4-8cfa-4d8aa52d0406 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:44:17 np0005466030 nova_compute[230518]: 2025-10-02 12:44:17.625 2 DEBUG oslo_concurrency.lockutils [req-dcf08827-691c-40c9-a605-27d44e50076a req-c07bd461-0b36-46f4-8cfa-4d8aa52d0406 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:44:17 np0005466030 nova_compute[230518]: 2025-10-02 12:44:17.626 2 DEBUG nova.compute.manager [req-dcf08827-691c-40c9-a605-27d44e50076a req-c07bd461-0b36-46f4-8cfa-4d8aa52d0406 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] No waiting events found dispatching network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:44:17 np0005466030 nova_compute[230518]: 2025-10-02 12:44:17.626 2 WARNING nova.compute.manager [req-dcf08827-691c-40c9-a605-27d44e50076a req-c07bd461-0b36-46f4-8cfa-4d8aa52d0406 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received unexpected event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for instance with vm_state active and task_state resize_migrated.
Oct  2 08:44:18 np0005466030 nova_compute[230518]: 2025-10-02 12:44:18.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:44:19 np0005466030 nova_compute[230518]: 2025-10-02 12:44:19.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:44:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:44:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:19 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:19.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:19.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:19 np0005466030 nova_compute[230518]: 2025-10-02 12:44:19.754 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:44:20 np0005466030 podman[276319]: 2025-10-02 12:44:20.827533102 +0000 UTC m=+0.079351607 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:44:20 np0005466030 podman[276320]: 2025-10-02 12:44:20.827649976 +0000 UTC m=+0.074958249 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:44:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:44:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:21 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:21.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:21.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:23 np0005466030 nova_compute[230518]: 2025-10-02 12:44:23.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:44:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:44:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:23 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:23.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:44:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:23.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:44:24 np0005466030 nova_compute[230518]: 2025-10-02 12:44:24.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:44:24 np0005466030 nova_compute[230518]: 2025-10-02 12:44:24.536 2 DEBUG nova.compute.manager [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-changed-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:44:24 np0005466030 nova_compute[230518]: 2025-10-02 12:44:24.536 2 DEBUG nova.compute.manager [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Refreshing instance network info cache due to event network-changed-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:44:24 np0005466030 nova_compute[230518]: 2025-10-02 12:44:24.537 2 DEBUG oslo_concurrency.lockutils [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:44:24 np0005466030 nova_compute[230518]: 2025-10-02 12:44:24.537 2 DEBUG oslo_concurrency.lockutils [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:44:24 np0005466030 nova_compute[230518]: 2025-10-02 12:44:24.537 2 DEBUG nova.network.neutron [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Refreshing network info cache for port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:44:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:44:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:25 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:25.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:44:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:25.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:44:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:25.940 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:44:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:25.941 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:44:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:44:25.942 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:44:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:44:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:27.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:44:27 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:27.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:44:28 np0005466030 nova_compute[230518]: 2025-10-02 12:44:28.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:44:28 np0005466030 nova_compute[230518]: 2025-10-02 12:44:28.777 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409053.7767065, 173830cb-12bb-4e1a-ba80-088da01ad107 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:44:28 np0005466030 nova_compute[230518]: 2025-10-02 12:44:28.778 2 INFO nova.compute.manager [-] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] VM Stopped (Lifecycle Event)
Oct  2 08:44:29 np0005466030 nova_compute[230518]: 2025-10-02 12:44:29.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:44:29 np0005466030 nova_compute[230518]: 2025-10-02 12:44:29.317 2 DEBUG nova.compute.manager [None req-a2bc63e7-9cd8-43ad-bdb7-07bca1305961 - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:44:29 np0005466030 nova_compute[230518]: 2025-10-02 12:44:29.323 2 DEBUG nova.compute.manager [None req-a2bc63e7-9cd8-43ad-bdb7-07bca1305961 - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_migrated, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:44:29 np0005466030 nova_compute[230518]: 2025-10-02 12:44:29.379 2 INFO nova.compute.manager [None req-a2bc63e7-9cd8-43ad-bdb7-07bca1305961 - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com
Oct  2 08:44:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:44:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:44:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:44:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:29.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:44:29 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:29.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:44:31 np0005466030 nova_compute[230518]: 2025-10-02 12:44:31.621 2 DEBUG nova.network.neutron [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updated VIF entry in instance network info cache for port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:44:31 np0005466030 nova_compute[230518]: 2025-10-02 12:44:31.622 2 DEBUG nova.network.neutron [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating instance_info_cache with network_info: [{"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:44:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:31.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:44:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:31.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:44:31 np0005466030 nova_compute[230518]: 2025-10-02 12:44:31.676 2 DEBUG oslo_concurrency.lockutils [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:44:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e281 e281: 3 total, 3 up, 3 in
Oct  2 08:44:33 np0005466030 nova_compute[230518]: 2025-10-02 12:44:33.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:44:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:33.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:33.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:34 np0005466030 nova_compute[230518]: 2025-10-02 12:44:34.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:44:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:35.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:35.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:36 np0005466030 nova_compute[230518]: 2025-10-02 12:44:36.067 2 DEBUG nova.compute.manager [req-71240dea-0c89-4550-91d2-c210035b2590 req-761bcde2-2734-446c-92a2-04ace8c11327 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:44:36 np0005466030 nova_compute[230518]: 2025-10-02 12:44:36.067 2 DEBUG oslo_concurrency.lockutils [req-71240dea-0c89-4550-91d2-c210035b2590 req-761bcde2-2734-446c-92a2-04ace8c11327 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:44:36 np0005466030 nova_compute[230518]: 2025-10-02 12:44:36.068 2 DEBUG oslo_concurrency.lockutils [req-71240dea-0c89-4550-91d2-c210035b2590 req-761bcde2-2734-446c-92a2-04ace8c11327 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:44:36 np0005466030 nova_compute[230518]: 2025-10-02 12:44:36.068 2 DEBUG oslo_concurrency.lockutils [req-71240dea-0c89-4550-91d2-c210035b2590 req-761bcde2-2734-446c-92a2-04ace8c11327 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:44:36 np0005466030 nova_compute[230518]: 2025-10-02 12:44:36.068 2 DEBUG nova.compute.manager [req-71240dea-0c89-4550-91d2-c210035b2590 req-761bcde2-2734-446c-92a2-04ace8c11327 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] No waiting events found dispatching network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:44:36 np0005466030 nova_compute[230518]: 2025-10-02 12:44:36.068 2 WARNING nova.compute.manager [req-71240dea-0c89-4550-91d2-c210035b2590 req-761bcde2-2734-446c-92a2-04ace8c11327 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received unexpected event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for instance with vm_state active and task_state resize_finish.
Oct  2 08:44:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:44:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:37.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:44:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:37.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:38 np0005466030 nova_compute[230518]: 2025-10-02 12:44:38.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:44:39 np0005466030 nova_compute[230518]: 2025-10-02 12:44:39.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:44:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e282 e282: 3 total, 3 up, 3 in
Oct  2 08:44:39 np0005466030 nova_compute[230518]: 2025-10-02 12:44:39.604 2 DEBUG nova.compute.manager [req-446b5620-a1ae-46b2-966c-2ff0b116299d req-fa49d899-fa1a-4803-9a81-ea94cfb90621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:44:39 np0005466030 nova_compute[230518]: 2025-10-02 12:44:39.604 2 DEBUG oslo_concurrency.lockutils [req-446b5620-a1ae-46b2-966c-2ff0b116299d req-fa49d899-fa1a-4803-9a81-ea94cfb90621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:44:39 np0005466030 nova_compute[230518]: 2025-10-02 12:44:39.604 2 DEBUG oslo_concurrency.lockutils [req-446b5620-a1ae-46b2-966c-2ff0b116299d req-fa49d899-fa1a-4803-9a81-ea94cfb90621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:44:39 np0005466030 nova_compute[230518]: 2025-10-02 12:44:39.605 2 DEBUG oslo_concurrency.lockutils [req-446b5620-a1ae-46b2-966c-2ff0b116299d req-fa49d899-fa1a-4803-9a81-ea94cfb90621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:44:39 np0005466030 nova_compute[230518]: 2025-10-02 12:44:39.605 2 DEBUG nova.compute.manager [req-446b5620-a1ae-46b2-966c-2ff0b116299d req-fa49d899-fa1a-4803-9a81-ea94cfb90621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] No waiting events found dispatching network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:44:39 np0005466030 nova_compute[230518]: 2025-10-02 12:44:39.605 2 WARNING nova.compute.manager [req-446b5620-a1ae-46b2-966c-2ff0b116299d req-fa49d899-fa1a-4803-9a81-ea94cfb90621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received unexpected event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for instance with vm_state resized and task_state None.
Oct  2 08:44:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:39.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:39.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:39 np0005466030 podman[276359]: 2025-10-02 12:44:39.819146344 +0000 UTC m=+0.065215512 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:44:39 np0005466030 podman[276358]: 2025-10-02 12:44:39.852373379 +0000 UTC m=+0.101959077 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  2 08:44:39 np0005466030 nova_compute[230518]: 2025-10-02 12:44:39.916 2 DEBUG oslo_concurrency.lockutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:44:39 np0005466030 nova_compute[230518]: 2025-10-02 12:44:39.916 2 DEBUG oslo_concurrency.lockutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:44:39 np0005466030 nova_compute[230518]: 2025-10-02 12:44:39.917 2 DEBUG nova.compute.manager [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Going to confirm migration 15 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Oct  2 08:44:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e283 e283: 3 total, 3 up, 3 in
Oct  2 08:44:41 np0005466030 nova_compute[230518]: 2025-10-02 12:44:41.300 2 DEBUG neutronclient.v2_0.client [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 08:44:41 np0005466030 nova_compute[230518]: 2025-10-02 12:44:41.301 2 DEBUG oslo_concurrency.lockutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:44:41 np0005466030 nova_compute[230518]: 2025-10-02 12:44:41.301 2 DEBUG oslo_concurrency.lockutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquired lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:44:41 np0005466030 nova_compute[230518]: 2025-10-02 12:44:41.302 2 DEBUG nova.network.neutron [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:44:41 np0005466030 nova_compute[230518]: 2025-10-02 12:44:41.302 2 DEBUG nova.objects.instance [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'info_cache' on Instance uuid 173830cb-12bb-4e1a-ba80-088da01ad107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:44:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:44:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:41.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:44:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:41.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:42 np0005466030 nova_compute[230518]: 2025-10-02 12:44:42.949 2 DEBUG nova.network.neutron [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating instance_info_cache with network_info: [{"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:44:43 np0005466030 nova_compute[230518]: 2025-10-02 12:44:43.045 2 DEBUG oslo_concurrency.lockutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Releasing lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:44:43 np0005466030 nova_compute[230518]: 2025-10-02 12:44:43.046 2 DEBUG nova.objects.instance [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'migration_context' on Instance uuid 173830cb-12bb-4e1a-ba80-088da01ad107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:44:43 np0005466030 nova_compute[230518]: 2025-10-02 12:44:43.145 2 DEBUG nova.storage.rbd_utils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 173830cb-12bb-4e1a-ba80-088da01ad107_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:44:43 np0005466030 nova_compute[230518]: 2025-10-02 12:44:43.159 2 DEBUG nova.virt.libvirt.vif [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:43:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-716874932',display_name='tempest-ServerActionsTestOtherA-server-716874932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-716874932',id=119,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJfI3E6popMNkSBH55JIIn+lxst+AgI5WbB+1D21g23xZC45mHZNKzJ1YzOQWfrILexv9zpuq5SLJQ8J6YEjTv4RhaLBgROGziYLwwgHom1wen0CDri217As6wNRpnqZsg==',key_name='tempest-keypair-1292637923',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:44:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-cekdvg44',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:44:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=173830cb-12bb-4e1a-ba80-088da01ad107,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:44:43 np0005466030 nova_compute[230518]: 2025-10-02 12:44:43.159 2 DEBUG nova.network.os_vif_util [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:44:43 np0005466030 nova_compute[230518]: 2025-10-02 12:44:43.160 2 DEBUG nova.network.os_vif_util [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:44:43 np0005466030 nova_compute[230518]: 2025-10-02 12:44:43.161 2 DEBUG os_vif [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:44:43 np0005466030 nova_compute[230518]: 2025-10-02 12:44:43.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:43 np0005466030 nova_compute[230518]: 2025-10-02 12:44:43.162 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea4a4acf-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:44:43 np0005466030 nova_compute[230518]: 2025-10-02 12:44:43.163 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:44:43 np0005466030 nova_compute[230518]: 2025-10-02 12:44:43.166 2 INFO os_vif [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33')#033[00m
Oct  2 08:44:43 np0005466030 nova_compute[230518]: 2025-10-02 12:44:43.167 2 DEBUG oslo_concurrency.lockutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:44:43 np0005466030 nova_compute[230518]: 2025-10-02 12:44:43.167 2 DEBUG oslo_concurrency.lockutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:44:43 np0005466030 nova_compute[230518]: 2025-10-02 12:44:43.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:43 np0005466030 nova_compute[230518]: 2025-10-02 12:44:43.479 2 DEBUG oslo_concurrency.processutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:44:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:43.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:43.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:44:44 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1418788370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:44:44 np0005466030 nova_compute[230518]: 2025-10-02 12:44:44.062 2 DEBUG oslo_concurrency.processutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:44:44 np0005466030 nova_compute[230518]: 2025-10-02 12:44:44.070 2 DEBUG nova.compute.provider_tree [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:44:44 np0005466030 nova_compute[230518]: 2025-10-02 12:44:44.110 2 DEBUG nova.scheduler.client.report [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:44:44 np0005466030 nova_compute[230518]: 2025-10-02 12:44:44.234 2 DEBUG oslo_concurrency.lockutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 1.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:44:44 np0005466030 nova_compute[230518]: 2025-10-02 12:44:44.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:44 np0005466030 nova_compute[230518]: 2025-10-02 12:44:44.517 2 INFO nova.scheduler.client.report [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Deleted allocation for migration db4a41c6-8a3c-43b7-b0e8-4bf46490cc1d#033[00m
Oct  2 08:44:44 np0005466030 nova_compute[230518]: 2025-10-02 12:44:44.672 2 DEBUG oslo_concurrency.lockutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:44:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:45.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:44:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:45.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:44:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e284 e284: 3 total, 3 up, 3 in
Oct  2 08:44:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:44:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:47.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:44:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:47.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:48 np0005466030 ovn_controller[129257]: 2025-10-02T12:44:48Z|00492|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct  2 08:44:48 np0005466030 nova_compute[230518]: 2025-10-02 12:44:48.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:49 np0005466030 nova_compute[230518]: 2025-10-02 12:44:49.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:44:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:49.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:44:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:49.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e285 e285: 3 total, 3 up, 3 in
Oct  2 08:44:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:51.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:44:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:51.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:44:51 np0005466030 podman[276442]: 2025-10-02 12:44:51.825315674 +0000 UTC m=+0.069885618 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 08:44:51 np0005466030 podman[276443]: 2025-10-02 12:44:51.83599951 +0000 UTC m=+0.077084115 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:44:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:53 np0005466030 nova_compute[230518]: 2025-10-02 12:44:53.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:53.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:53.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:54 np0005466030 nova_compute[230518]: 2025-10-02 12:44:54.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e286 e286: 3 total, 3 up, 3 in
Oct  2 08:44:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:55.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:55.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e287 e287: 3 total, 3 up, 3 in
Oct  2 08:44:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:44:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:57.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:57.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:58 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:44:58 np0005466030 nova_compute[230518]: 2025-10-02 12:44:58.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:59 np0005466030 nova_compute[230518]: 2025-10-02 12:44:59.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:44:59 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:44:59 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:44:59 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:44:59 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:44:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:59.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:44:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:44:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:44:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:59.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:00 np0005466030 nova_compute[230518]: 2025-10-02 12:45:00.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:00 np0005466030 nova_compute[230518]: 2025-10-02 12:45:00.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:00 np0005466030 nova_compute[230518]: 2025-10-02 12:45:00.091 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:00 np0005466030 nova_compute[230518]: 2025-10-02 12:45:00.091 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:00 np0005466030 nova_compute[230518]: 2025-10-02 12:45:00.091 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:00 np0005466030 nova_compute[230518]: 2025-10-02 12:45:00.092 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:45:00 np0005466030 nova_compute[230518]: 2025-10-02 12:45:00.092 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:45:00 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2698531315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:45:00 np0005466030 nova_compute[230518]: 2025-10-02 12:45:00.700 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:00 np0005466030 nova_compute[230518]: 2025-10-02 12:45:00.990 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:45:00 np0005466030 nova_compute[230518]: 2025-10-02 12:45:00.990 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:45:00 np0005466030 nova_compute[230518]: 2025-10-02 12:45:00.994 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:45:00 np0005466030 nova_compute[230518]: 2025-10-02 12:45:00.994 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:45:01 np0005466030 nova_compute[230518]: 2025-10-02 12:45:01.166 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:45:01 np0005466030 nova_compute[230518]: 2025-10-02 12:45:01.167 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4045MB free_disk=20.695758819580078GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:45:01 np0005466030 nova_compute[230518]: 2025-10-02 12:45:01.167 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:01 np0005466030 nova_compute[230518]: 2025-10-02 12:45:01.167 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:01.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:01.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:01 np0005466030 nova_compute[230518]: 2025-10-02 12:45:01.724 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7621a774-e0bc-4f4f-b900-c3608dd6835a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:45:01 np0005466030 nova_compute[230518]: 2025-10-02 12:45:01.725 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 26db575f-26df-4e1b-b0d8-38a12df557e3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:45:01 np0005466030 nova_compute[230518]: 2025-10-02 12:45:01.725 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:45:01 np0005466030 nova_compute[230518]: 2025-10-02 12:45:01.725 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:45:01 np0005466030 nova_compute[230518]: 2025-10-02 12:45:01.774 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e288 e288: 3 total, 3 up, 3 in
Oct  2 08:45:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:45:02 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1186594829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:45:02 np0005466030 nova_compute[230518]: 2025-10-02 12:45:02.402 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:02 np0005466030 nova_compute[230518]: 2025-10-02 12:45:02.407 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:45:02 np0005466030 nova_compute[230518]: 2025-10-02 12:45:02.532 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:45:02 np0005466030 nova_compute[230518]: 2025-10-02 12:45:02.590 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:45:02 np0005466030 nova_compute[230518]: 2025-10-02 12:45:02.591 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:03 np0005466030 nova_compute[230518]: 2025-10-02 12:45:03.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:03.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:03.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:04 np0005466030 nova_compute[230518]: 2025-10-02 12:45:04.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:05 np0005466030 nova_compute[230518]: 2025-10-02 12:45:05.587 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:05 np0005466030 nova_compute[230518]: 2025-10-02 12:45:05.588 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:45:05 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:05.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:05.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:45:06 np0005466030 nova_compute[230518]: 2025-10-02 12:45:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:06 np0005466030 nova_compute[230518]: 2025-10-02 12:45:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:06 np0005466030 nova_compute[230518]: 2025-10-02 12:45:06.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:06 np0005466030 nova_compute[230518]: 2025-10-02 12:45:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:45:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:45:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:45:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e289 e289: 3 total, 3 up, 3 in
Oct  2 08:45:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:45:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:07.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:45:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:07.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:08 np0005466030 nova_compute[230518]: 2025-10-02 12:45:08.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:08 np0005466030 nova_compute[230518]: 2025-10-02 12:45:08.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:08.335 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:45:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:08.338 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:45:08 np0005466030 nova_compute[230518]: 2025-10-02 12:45:08.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:09 np0005466030 nova_compute[230518]: 2025-10-02 12:45:09.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:09.341 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:09 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:09.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:09.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:10 np0005466030 nova_compute[230518]: 2025-10-02 12:45:10.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:10 np0005466030 nova_compute[230518]: 2025-10-02 12:45:10.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:45:10 np0005466030 nova_compute[230518]: 2025-10-02 12:45:10.339 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:45:10 np0005466030 nova_compute[230518]: 2025-10-02 12:45:10.340 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:45:10 np0005466030 nova_compute[230518]: 2025-10-02 12:45:10.340 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:45:10 np0005466030 podman[276712]: 2025-10-02 12:45:10.845942921 +0000 UTC m=+0.090585860 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:45:10 np0005466030 podman[276711]: 2025-10-02 12:45:10.875315944 +0000 UTC m=+0.120678536 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:45:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:11.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:11.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.097 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Updating instance_info_cache with network_info: [{"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.118 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.119 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:45:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e290 e290: 3 total, 3 up, 3 in
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.257 2 DEBUG oslo_concurrency.lockutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "26db575f-26df-4e1b-b0d8-38a12df557e3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.257 2 DEBUG oslo_concurrency.lockutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.258 2 DEBUG oslo_concurrency.lockutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.258 2 DEBUG oslo_concurrency.lockutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.258 2 DEBUG oslo_concurrency.lockutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.259 2 INFO nova.compute.manager [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Terminating instance#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.260 2 DEBUG nova.compute.manager [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:45:12 np0005466030 kernel: tap3de79762-7d (unregistering): left promiscuous mode
Oct  2 08:45:12 np0005466030 NetworkManager[44960]: <info>  [1759409112.3359] device (tap3de79762-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:45:12 np0005466030 ovn_controller[129257]: 2025-10-02T12:45:12Z|00493|binding|INFO|Releasing lport 3de79762-7d07-45e3-b66d-38b20be62257 from this chassis (sb_readonly=0)
Oct  2 08:45:12 np0005466030 ovn_controller[129257]: 2025-10-02T12:45:12Z|00494|binding|INFO|Setting lport 3de79762-7d07-45e3-b66d-38b20be62257 down in Southbound
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:12 np0005466030 ovn_controller[129257]: 2025-10-02T12:45:12Z|00495|binding|INFO|Removing iface tap3de79762-7d ovn-installed in OVS
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.357 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:bf:18 10.100.0.11'], port_security=['fa:16:3e:bb:bf:18 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '26db575f-26df-4e1b-b0d8-38a12df557e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=3de79762-7d07-45e3-b66d-38b20be62257) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:45:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.358 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 3de79762-7d07-45e3-b66d-38b20be62257 in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b unbound from our chassis#033[00m
Oct  2 08:45:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.360 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3643647-7cd9-4c43-8aaa-9b0f3160274b#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.379 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5e8b1234-79d2-4c7c-bd41-01ba0c19219a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.416 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac38691-d430-409b-8c9e-6a082723bc1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.419 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[917e4912-84e8-4f42-b179-d3dcbe426cdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:12 np0005466030 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000073.scope: Deactivated successfully.
Oct  2 08:45:12 np0005466030 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000073.scope: Consumed 17.960s CPU time.
Oct  2 08:45:12 np0005466030 systemd-machined[188247]: Machine qemu-57-instance-00000073 terminated.
Oct  2 08:45:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.446 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8292939d-31ba-42d0-9608-2c884553198c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.468 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dca59b37-d78d-4189-be2c-b6f9eda175c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662992, 'reachable_time': 31164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276767, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.492 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9c8564c3-75c4-4ca7-858e-835fa94c0316]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3643647-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663002, 'tstamp': 663002}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276770, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3643647-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663005, 'tstamp': 663005}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276770, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.494 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.499 2 INFO nova.virt.libvirt.driver [-] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Instance destroyed successfully.#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.499 2 DEBUG nova.objects.instance [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'resources' on Instance uuid 26db575f-26df-4e1b-b0d8-38a12df557e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.502 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3643647-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.502 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:45:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.503 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3643647-70, col_values=(('external_ids', {'iface-id': '7b6dc1a1-1a58-45bd-84bb-97328397bf1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.503 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.536 2 DEBUG nova.virt.libvirt.vif [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:42:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-310646740',display_name='tempest-ServerActionsTestOtherA-server-310646740',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-310646740',id=115,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:43:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-8ebo56vt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_m
in_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:43:05Z,user_data=None,user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=26db575f-26df-4e1b-b0d8-38a12df557e3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.537 2 DEBUG nova.network.os_vif_util [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.537 2 DEBUG nova.network.os_vif_util [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:bf:18,bridge_name='br-int',has_traffic_filtering=True,id=3de79762-7d07-45e3-b66d-38b20be62257,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de79762-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.538 2 DEBUG os_vif [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:bf:18,bridge_name='br-int',has_traffic_filtering=True,id=3de79762-7d07-45e3-b66d-38b20be62257,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de79762-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.540 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3de79762-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.547 2 INFO os_vif [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:bf:18,bridge_name='br-int',has_traffic_filtering=True,id=3de79762-7d07-45e3-b66d-38b20be62257,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de79762-7d')#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.665 2 DEBUG nova.compute.manager [req-97a98caf-9d03-4eaf-bf20-27b19b4738b0 req-cd0a24c0-be38-4cf5-8ecb-3a97b1f2fc63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received event network-vif-unplugged-3de79762-7d07-45e3-b66d-38b20be62257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.666 2 DEBUG oslo_concurrency.lockutils [req-97a98caf-9d03-4eaf-bf20-27b19b4738b0 req-cd0a24c0-be38-4cf5-8ecb-3a97b1f2fc63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.666 2 DEBUG oslo_concurrency.lockutils [req-97a98caf-9d03-4eaf-bf20-27b19b4738b0 req-cd0a24c0-be38-4cf5-8ecb-3a97b1f2fc63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.667 2 DEBUG oslo_concurrency.lockutils [req-97a98caf-9d03-4eaf-bf20-27b19b4738b0 req-cd0a24c0-be38-4cf5-8ecb-3a97b1f2fc63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.667 2 DEBUG nova.compute.manager [req-97a98caf-9d03-4eaf-bf20-27b19b4738b0 req-cd0a24c0-be38-4cf5-8ecb-3a97b1f2fc63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] No waiting events found dispatching network-vif-unplugged-3de79762-7d07-45e3-b66d-38b20be62257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:45:12 np0005466030 nova_compute[230518]: 2025-10-02 12:45:12.667 2 DEBUG nova.compute.manager [req-97a98caf-9d03-4eaf-bf20-27b19b4738b0 req-cd0a24c0-be38-4cf5-8ecb-3a97b1f2fc63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received event network-vif-unplugged-3de79762-7d07-45e3-b66d-38b20be62257 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:45:13 np0005466030 nova_compute[230518]: 2025-10-02 12:45:13.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:13 np0005466030 nova_compute[230518]: 2025-10-02 12:45:13.673 2 INFO nova.virt.libvirt.driver [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Deleting instance files /var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3_del#033[00m
Oct  2 08:45:13 np0005466030 nova_compute[230518]: 2025-10-02 12:45:13.675 2 INFO nova.virt.libvirt.driver [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Deletion of /var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3_del complete#033[00m
Oct  2 08:45:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:13.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:13.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:13 np0005466030 nova_compute[230518]: 2025-10-02 12:45:13.735 2 INFO nova.compute.manager [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Took 1.47 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:45:13 np0005466030 nova_compute[230518]: 2025-10-02 12:45:13.735 2 DEBUG oslo.service.loopingcall [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:45:13 np0005466030 nova_compute[230518]: 2025-10-02 12:45:13.736 2 DEBUG nova.compute.manager [-] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:45:13 np0005466030 nova_compute[230518]: 2025-10-02 12:45:13.736 2 DEBUG nova.network.neutron [-] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:45:14 np0005466030 nova_compute[230518]: 2025-10-02 12:45:14.866 2 DEBUG nova.compute.manager [req-d626f981-517b-4dd8-8a17-1425ad2599ce req-0b68d903-6e67-4e1b-b22c-8389081909b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received event network-vif-plugged-3de79762-7d07-45e3-b66d-38b20be62257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:14 np0005466030 nova_compute[230518]: 2025-10-02 12:45:14.866 2 DEBUG oslo_concurrency.lockutils [req-d626f981-517b-4dd8-8a17-1425ad2599ce req-0b68d903-6e67-4e1b-b22c-8389081909b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:14 np0005466030 nova_compute[230518]: 2025-10-02 12:45:14.867 2 DEBUG oslo_concurrency.lockutils [req-d626f981-517b-4dd8-8a17-1425ad2599ce req-0b68d903-6e67-4e1b-b22c-8389081909b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:14 np0005466030 nova_compute[230518]: 2025-10-02 12:45:14.867 2 DEBUG oslo_concurrency.lockutils [req-d626f981-517b-4dd8-8a17-1425ad2599ce req-0b68d903-6e67-4e1b-b22c-8389081909b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:14 np0005466030 nova_compute[230518]: 2025-10-02 12:45:14.868 2 DEBUG nova.compute.manager [req-d626f981-517b-4dd8-8a17-1425ad2599ce req-0b68d903-6e67-4e1b-b22c-8389081909b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] No waiting events found dispatching network-vif-plugged-3de79762-7d07-45e3-b66d-38b20be62257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:45:14 np0005466030 nova_compute[230518]: 2025-10-02 12:45:14.868 2 WARNING nova.compute.manager [req-d626f981-517b-4dd8-8a17-1425ad2599ce req-0b68d903-6e67-4e1b-b22c-8389081909b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received unexpected event network-vif-plugged-3de79762-7d07-45e3-b66d-38b20be62257 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:45:15 np0005466030 nova_compute[230518]: 2025-10-02 12:45:15.249 2 DEBUG nova.network.neutron [-] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:45:15 np0005466030 nova_compute[230518]: 2025-10-02 12:45:15.294 2 INFO nova.compute.manager [-] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Took 1.56 seconds to deallocate network for instance.#033[00m
Oct  2 08:45:15 np0005466030 nova_compute[230518]: 2025-10-02 12:45:15.340 2 DEBUG oslo_concurrency.lockutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:15 np0005466030 nova_compute[230518]: 2025-10-02 12:45:15.340 2 DEBUG oslo_concurrency.lockutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:15 np0005466030 nova_compute[230518]: 2025-10-02 12:45:15.410 2 DEBUG oslo_concurrency.processutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:15 np0005466030 nova_compute[230518]: 2025-10-02 12:45:15.552 2 DEBUG nova.compute.manager [req-0c331114-ae09-4cbf-ba69-749ad88c4e48 req-3e101f11-99f7-4d51-b89b-08772969a05e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received event network-vif-deleted-3de79762-7d07-45e3-b66d-38b20be62257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:15.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:45:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:15.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:45:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:45:15 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/889393379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:45:15 np0005466030 nova_compute[230518]: 2025-10-02 12:45:15.869 2 DEBUG oslo_concurrency.processutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:15 np0005466030 nova_compute[230518]: 2025-10-02 12:45:15.876 2 DEBUG nova.compute.provider_tree [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:45:15 np0005466030 nova_compute[230518]: 2025-10-02 12:45:15.900 2 DEBUG nova.scheduler.client.report [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:45:15 np0005466030 nova_compute[230518]: 2025-10-02 12:45:15.944 2 DEBUG oslo_concurrency.lockutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:15 np0005466030 nova_compute[230518]: 2025-10-02 12:45:15.988 2 INFO nova.scheduler.client.report [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Deleted allocations for instance 26db575f-26df-4e1b-b0d8-38a12df557e3#033[00m
Oct  2 08:45:16 np0005466030 nova_compute[230518]: 2025-10-02 12:45:16.083 2 DEBUG oslo_concurrency.lockutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:17 np0005466030 nova_compute[230518]: 2025-10-02 12:45:17.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:17.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:17 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:17.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:18 np0005466030 nova_compute[230518]: 2025-10-02 12:45:18.337 2 DEBUG oslo_concurrency.lockutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "7621a774-e0bc-4f4f-b900-c3608dd6835a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:18 np0005466030 nova_compute[230518]: 2025-10-02 12:45:18.338 2 DEBUG oslo_concurrency.lockutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:18 np0005466030 nova_compute[230518]: 2025-10-02 12:45:18.338 2 DEBUG oslo_concurrency.lockutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:18 np0005466030 nova_compute[230518]: 2025-10-02 12:45:18.338 2 DEBUG oslo_concurrency.lockutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:18 np0005466030 nova_compute[230518]: 2025-10-02 12:45:18.338 2 DEBUG oslo_concurrency.lockutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:18 np0005466030 nova_compute[230518]: 2025-10-02 12:45:18.339 2 INFO nova.compute.manager [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Terminating instance#033[00m
Oct  2 08:45:18 np0005466030 nova_compute[230518]: 2025-10-02 12:45:18.340 2 DEBUG nova.compute.manager [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:45:18 np0005466030 nova_compute[230518]: 2025-10-02 12:45:18.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:18 np0005466030 nova_compute[230518]: 2025-10-02 12:45:18.509 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:18 np0005466030 nova_compute[230518]: 2025-10-02 12:45:18.509 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:18 np0005466030 nova_compute[230518]: 2025-10-02 12:45:18.541 2 DEBUG nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:45:18 np0005466030 nova_compute[230518]: 2025-10-02 12:45:18.638 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:18 np0005466030 nova_compute[230518]: 2025-10-02 12:45:18.639 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:18 np0005466030 nova_compute[230518]: 2025-10-02 12:45:18.648 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:45:18 np0005466030 nova_compute[230518]: 2025-10-02 12:45:18.648 2 INFO nova.compute.claims [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:45:18 np0005466030 nova_compute[230518]: 2025-10-02 12:45:18.866 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:18 np0005466030 kernel: tap8d9cc17a-78 (unregistering): left promiscuous mode
Oct  2 08:45:18 np0005466030 NetworkManager[44960]: <info>  [1759409118.9012] device (tap8d9cc17a-78): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:45:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:45:18Z|00496|binding|INFO|Releasing lport 8d9cc17a-7804-4743-925a-496d9fe78c73 from this chassis (sb_readonly=0)
Oct  2 08:45:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:45:18Z|00497|binding|INFO|Setting lport 8d9cc17a-7804-4743-925a-496d9fe78c73 down in Southbound
Oct  2 08:45:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:45:18Z|00498|binding|INFO|Removing iface tap8d9cc17a-78 ovn-installed in OVS
Oct  2 08:45:18 np0005466030 nova_compute[230518]: 2025-10-02 12:45:18.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:18.923 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:d9:d3 10.100.0.14'], port_security=['fa:16:3e:c4:d9:d3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7621a774-e0bc-4f4f-b900-c3608dd6835a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'da6daf73-7b18-4ff6-8a16-e2a94d642e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8d9cc17a-7804-4743-925a-496d9fe78c73) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:45:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:18.925 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8d9cc17a-7804-4743-925a-496d9fe78c73 in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b unbound from our chassis#033[00m
Oct  2 08:45:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:18.927 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3643647-7cd9-4c43-8aaa-9b0f3160274b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:45:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:18.928 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7efece6d-e10b-40b1-9acc-adada4cdfdd2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:18.928 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b namespace which is not needed anymore#033[00m
Oct  2 08:45:18 np0005466030 nova_compute[230518]: 2025-10-02 12:45:18.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:18 np0005466030 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000069.scope: Deactivated successfully.
Oct  2 08:45:18 np0005466030 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000069.scope: Consumed 28.734s CPU time.
Oct  2 08:45:18 np0005466030 systemd-machined[188247]: Machine qemu-52-instance-00000069 terminated.
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.180 2 DEBUG nova.compute.manager [req-c8eeff4b-7e4f-4566-9bba-e93a71ec9e54 req-a1646599-990c-4c13-92ae-4d26cce858cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received event network-vif-unplugged-8d9cc17a-7804-4743-925a-496d9fe78c73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.180 2 DEBUG oslo_concurrency.lockutils [req-c8eeff4b-7e4f-4566-9bba-e93a71ec9e54 req-a1646599-990c-4c13-92ae-4d26cce858cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.180 2 DEBUG oslo_concurrency.lockutils [req-c8eeff4b-7e4f-4566-9bba-e93a71ec9e54 req-a1646599-990c-4c13-92ae-4d26cce858cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.181 2 DEBUG oslo_concurrency.lockutils [req-c8eeff4b-7e4f-4566-9bba-e93a71ec9e54 req-a1646599-990c-4c13-92ae-4d26cce858cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.181 2 DEBUG nova.compute.manager [req-c8eeff4b-7e4f-4566-9bba-e93a71ec9e54 req-a1646599-990c-4c13-92ae-4d26cce858cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] No waiting events found dispatching network-vif-unplugged-8d9cc17a-7804-4743-925a-496d9fe78c73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.181 2 DEBUG nova.compute.manager [req-c8eeff4b-7e4f-4566-9bba-e93a71ec9e54 req-a1646599-990c-4c13-92ae-4d26cce858cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received event network-vif-unplugged-8d9cc17a-7804-4743-925a-496d9fe78c73 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.184 2 INFO nova.virt.libvirt.driver [-] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Instance destroyed successfully.#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.184 2 DEBUG nova.objects.instance [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'resources' on Instance uuid 7621a774-e0bc-4f4f-b900-c3608dd6835a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.207 2 DEBUG nova.virt.libvirt.vif [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:39:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1525238782',display_name='tempest-ServerActionsTestOtherA-server-1525238782',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1525238782',id=105,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJfI3E6popMNkSBH55JIIn+lxst+AgI5WbB+1D21g23xZC45mHZNKzJ1YzOQWfrILexv9zpuq5SLJQ8J6YEjTv4RhaLBgROGziYLwwgHom1wen0CDri217As6wNRpnqZsg==',key_name='tempest-keypair-1292637923',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:39:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-uk3eghdh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:39:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=7621a774-e0bc-4f4f-b900-c3608dd6835a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.207 2 DEBUG nova.network.os_vif_util [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.208 2 DEBUG nova.network.os_vif_util [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c4:d9:d3,bridge_name='br-int',has_traffic_filtering=True,id=8d9cc17a-7804-4743-925a-496d9fe78c73,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d9cc17a-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.208 2 DEBUG os_vif [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:d9:d3,bridge_name='br-int',has_traffic_filtering=True,id=8d9cc17a-7804-4743-925a-496d9fe78c73,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d9cc17a-78') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.211 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d9cc17a-78, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.216 2 INFO os_vif [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:d9:d3,bridge_name='br-int',has_traffic_filtering=True,id=8d9cc17a-7804-4743-925a-496d9fe78c73,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d9cc17a-78')#033[00m
Oct  2 08:45:19 np0005466030 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[271015]: [NOTICE]   (271019) : haproxy version is 2.8.14-c23fe91
Oct  2 08:45:19 np0005466030 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[271015]: [NOTICE]   (271019) : path to executable is /usr/sbin/haproxy
Oct  2 08:45:19 np0005466030 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[271015]: [WARNING]  (271019) : Exiting Master process...
Oct  2 08:45:19 np0005466030 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[271015]: [ALERT]    (271019) : Current worker (271021) exited with code 143 (Terminated)
Oct  2 08:45:19 np0005466030 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[271015]: [WARNING]  (271019) : All workers exited. Exiting... (0)
Oct  2 08:45:19 np0005466030 systemd[1]: libpod-3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d.scope: Deactivated successfully.
Oct  2 08:45:19 np0005466030 podman[276848]: 2025-10-02 12:45:19.398136293 +0000 UTC m=+0.367280881 container died 3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:45:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:45:19 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3012165453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.575 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.708s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.581 2 DEBUG nova.compute.provider_tree [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.603 2 DEBUG nova.scheduler.client.report [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.627 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.988s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.628 2 DEBUG nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.674 2 DEBUG nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.675 2 DEBUG nova.network.neutron [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.695 2 INFO nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.714 2 DEBUG nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:45:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:19.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:19 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:19.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:19 np0005466030 systemd[1]: var-lib-containers-storage-overlay-e6ac23ed22d2d8032f0cbc3084da7e2a93ca5dafb47c834f98bbf243d6c598e7-merged.mount: Deactivated successfully.
Oct  2 08:45:19 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d-userdata-shm.mount: Deactivated successfully.
Oct  2 08:45:19 np0005466030 podman[276848]: 2025-10-02 12:45:19.793002958 +0000 UTC m=+0.762147596 container cleanup 3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:45:19 np0005466030 systemd[1]: libpod-conmon-3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d.scope: Deactivated successfully.
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.816 2 DEBUG nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.818 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.818 2 INFO nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Creating image(s)#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.852 2 DEBUG nova.storage.rbd_utils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.888 2 DEBUG nova.storage.rbd_utils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.919 2 DEBUG nova.storage.rbd_utils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:19 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.924 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:20 np0005466030 nova_compute[230518]: 2025-10-02 12:45:19.999 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:20 np0005466030 nova_compute[230518]: 2025-10-02 12:45:20.000 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:20 np0005466030 nova_compute[230518]: 2025-10-02 12:45:20.001 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:20 np0005466030 nova_compute[230518]: 2025-10-02 12:45:20.001 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:20 np0005466030 nova_compute[230518]: 2025-10-02 12:45:20.028 2 DEBUG nova.storage.rbd_utils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:20 np0005466030 nova_compute[230518]: 2025-10-02 12:45:20.032 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:20 np0005466030 nova_compute[230518]: 2025-10-02 12:45:20.384 2 DEBUG nova.policy [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b168e90f7c0c414ba26c576fb8706a80', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:45:20 np0005466030 podman[276927]: 2025-10-02 12:45:20.646040913 +0000 UTC m=+0.827559344 container remove 3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 08:45:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:20.652 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bf5688db-68fd-4dca-97a3-53573dffec9f]: (4, ('Thu Oct  2 12:45:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b (3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d)\n3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d\nThu Oct  2 12:45:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b (3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d)\n3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:20.654 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[11de863f-d61d-45df-8e07-59d6e660f186]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:20.655 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:20 np0005466030 nova_compute[230518]: 2025-10-02 12:45:20.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:20 np0005466030 kernel: tapf3643647-70: left promiscuous mode
Oct  2 08:45:20 np0005466030 nova_compute[230518]: 2025-10-02 12:45:20.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:20.676 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e98c51c7-a3c9-43ee-a93b-385e30919554]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:20.707 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[905d93d8-ff7b-4bb7-a2ed-722015207ca9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:20.709 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[91267aeb-ec27-4eac-8d6c-05d50ddb5287]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:20.724 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c86fac-dba6-4cd8-bcd1-5f47ab7a7f83]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662984, 'reachable_time': 34933, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277032, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:20 np0005466030 systemd[1]: run-netns-ovnmeta\x2df3643647\x2d7cd9\x2d4c43\x2d8aaa\x2d9b0f3160274b.mount: Deactivated successfully.
Oct  2 08:45:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:20.729 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:45:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:20.729 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[99ae6db7-29c6-4fdd-9d23-0a11ac01f4e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:21 np0005466030 nova_compute[230518]: 2025-10-02 12:45:21.273 2 DEBUG nova.compute.manager [req-08ae3044-aeb0-458d-a594-b16f6034fd46 req-47170fd7-7ea3-4cf0-bb53-c25cbaa0e4fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received event network-vif-plugged-8d9cc17a-7804-4743-925a-496d9fe78c73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:21 np0005466030 nova_compute[230518]: 2025-10-02 12:45:21.274 2 DEBUG oslo_concurrency.lockutils [req-08ae3044-aeb0-458d-a594-b16f6034fd46 req-47170fd7-7ea3-4cf0-bb53-c25cbaa0e4fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:21 np0005466030 nova_compute[230518]: 2025-10-02 12:45:21.274 2 DEBUG oslo_concurrency.lockutils [req-08ae3044-aeb0-458d-a594-b16f6034fd46 req-47170fd7-7ea3-4cf0-bb53-c25cbaa0e4fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:21 np0005466030 nova_compute[230518]: 2025-10-02 12:45:21.275 2 DEBUG oslo_concurrency.lockutils [req-08ae3044-aeb0-458d-a594-b16f6034fd46 req-47170fd7-7ea3-4cf0-bb53-c25cbaa0e4fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:21 np0005466030 nova_compute[230518]: 2025-10-02 12:45:21.275 2 DEBUG nova.compute.manager [req-08ae3044-aeb0-458d-a594-b16f6034fd46 req-47170fd7-7ea3-4cf0-bb53-c25cbaa0e4fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] No waiting events found dispatching network-vif-plugged-8d9cc17a-7804-4743-925a-496d9fe78c73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:45:21 np0005466030 nova_compute[230518]: 2025-10-02 12:45:21.275 2 WARNING nova.compute.manager [req-08ae3044-aeb0-458d-a594-b16f6034fd46 req-47170fd7-7ea3-4cf0-bb53-c25cbaa0e4fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received unexpected event network-vif-plugged-8d9cc17a-7804-4743-925a-496d9fe78c73 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:45:21 np0005466030 nova_compute[230518]: 2025-10-02 12:45:21.415 2 DEBUG nova.network.neutron [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Successfully created port: 241d570e-8eb4-4d2a-986b-b37fbcb780a9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:45:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:21.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:21 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:21.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:22 np0005466030 nova_compute[230518]: 2025-10-02 12:45:22.111 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:22 np0005466030 nova_compute[230518]: 2025-10-02 12:45:22.183 2 DEBUG nova.storage.rbd_utils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] resizing rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:45:22 np0005466030 nova_compute[230518]: 2025-10-02 12:45:22.280 2 DEBUG nova.objects.instance [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'migration_context' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:22 np0005466030 nova_compute[230518]: 2025-10-02 12:45:22.297 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:45:22 np0005466030 nova_compute[230518]: 2025-10-02 12:45:22.297 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Ensure instance console log exists: /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:45:22 np0005466030 nova_compute[230518]: 2025-10-02 12:45:22.298 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:22 np0005466030 nova_compute[230518]: 2025-10-02 12:45:22.298 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:22 np0005466030 nova_compute[230518]: 2025-10-02 12:45:22.298 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:22 np0005466030 podman[277109]: 2025-10-02 12:45:22.806075655 +0000 UTC m=+0.057245811 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.schema-version=1.0)
Oct  2 08:45:22 np0005466030 podman[277110]: 2025-10-02 12:45:22.810315369 +0000 UTC m=+0.055120565 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 08:45:23 np0005466030 nova_compute[230518]: 2025-10-02 12:45:23.088 2 DEBUG nova.network.neutron [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Successfully updated port: 241d570e-8eb4-4d2a-986b-b37fbcb780a9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:45:23 np0005466030 nova_compute[230518]: 2025-10-02 12:45:23.104 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:45:23 np0005466030 nova_compute[230518]: 2025-10-02 12:45:23.104 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquired lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:45:23 np0005466030 nova_compute[230518]: 2025-10-02 12:45:23.105 2 DEBUG nova.network.neutron [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:45:23 np0005466030 nova_compute[230518]: 2025-10-02 12:45:23.164 2 DEBUG nova.compute.manager [req-0ba674e5-aab7-4fff-a1a0-fe02ed9f0b8a req-a9f045ec-4ff3-4189-a701-21e84d14ceb2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-changed-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:23 np0005466030 nova_compute[230518]: 2025-10-02 12:45:23.164 2 DEBUG nova.compute.manager [req-0ba674e5-aab7-4fff-a1a0-fe02ed9f0b8a req-a9f045ec-4ff3-4189-a701-21e84d14ceb2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Refreshing instance network info cache due to event network-changed-241d570e-8eb4-4d2a-986b-b37fbcb780a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:45:23 np0005466030 nova_compute[230518]: 2025-10-02 12:45:23.165 2 DEBUG oslo_concurrency.lockutils [req-0ba674e5-aab7-4fff-a1a0-fe02ed9f0b8a req-a9f045ec-4ff3-4189-a701-21e84d14ceb2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:45:23 np0005466030 nova_compute[230518]: 2025-10-02 12:45:23.239 2 DEBUG nova.network.neutron [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:45:23 np0005466030 nova_compute[230518]: 2025-10-02 12:45:23.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:45:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:23.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:45:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:23.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:24 np0005466030 nova_compute[230518]: 2025-10-02 12:45:24.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.353 2 DEBUG nova.network.neutron [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updating instance_info_cache with network_info: [{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.373 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Releasing lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.374 2 DEBUG nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance network_info: |[{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.374 2 DEBUG oslo_concurrency.lockutils [req-0ba674e5-aab7-4fff-a1a0-fe02ed9f0b8a req-a9f045ec-4ff3-4189-a701-21e84d14ceb2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.374 2 DEBUG nova.network.neutron [req-0ba674e5-aab7-4fff-a1a0-fe02ed9f0b8a req-a9f045ec-4ff3-4189-a701-21e84d14ceb2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Refreshing network info cache for port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.378 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Start _get_guest_xml network_info=[{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.384 2 WARNING nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.390 2 DEBUG nova.virt.libvirt.host [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.392 2 DEBUG nova.virt.libvirt.host [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.399 2 DEBUG nova.virt.libvirt.host [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.400 2 DEBUG nova.virt.libvirt.host [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.401 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.401 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.401 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.401 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.402 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.402 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.402 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.402 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.402 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.403 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.403 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.403 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.405 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:25.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:45:25 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:25.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:45:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:45:25 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3855621067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.856 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.881 2 DEBUG nova.storage.rbd_utils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:25 np0005466030 nova_compute[230518]: 2025-10-02 12:45:25.885 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:25.941 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:25.942 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:25.942 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.053 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.053 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.054 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.054 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.054 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.054 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.092 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.112 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.112 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Image id 423b8b5f-aab8-418b-8fad-d82c90818bdd yields fingerprint 472c3cad2e339908bc4a8cea12fc22c04fcd93b6 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.113 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] image 423b8b5f-aab8-418b-8fad-d82c90818bdd at (/var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6): checking#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.113 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] image 423b8b5f-aab8-418b-8fad-d82c90818bdd at (/var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.114 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.115 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] 7621a774-e0bc-4f4f-b900-c3608dd6835a is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.115 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.115 2 WARNING nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.115 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Active base files: /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.115 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Removable base files: /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.116 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.116 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.116 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.116 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.116 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.945 2 INFO nova.virt.libvirt.driver [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Deleting instance files /var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a_del#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.946 2 INFO nova.virt.libvirt.driver [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Deletion of /var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a_del complete#033[00m
Oct  2 08:45:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:45:26 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3442325023' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.984 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.986 2 DEBUG nova.virt.libvirt.vif [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1940292897',display_name='tempest-ServerRescueNegativeTestJSON-server-1940292897',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1940292897',id=123,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE3MP/X9GW3AkvsXC34rGYeE9R6n5owPQ7q879Zh3KlhVfQDQGNR9SPxUIek8XO1dhRvBM+bbuljUGfLUZmn0JV9ekbocSGkiHwYeMyp832Egmcx2kY1B+audZxDye556Q==',key_name='tempest-keypair-866203463',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c87621e5c0ba4f13abfff528143c1c00',ramdisk_id='',reservation_id='r-n6jrurpd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-488939839',owner_user_name='tempest-ServerRescueNegativeTestJSON-488939839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:45:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b168e90f7c0c414ba26c576fb8706a80',uuid=c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.987 2 DEBUG nova.network.os_vif_util [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converting VIF {"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.988 2 DEBUG nova.network.os_vif_util [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:9b:0c,bridge_name='br-int',has_traffic_filtering=True,id=241d570e-8eb4-4d2a-986b-b37fbcb780a9,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241d570e-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:45:26 np0005466030 nova_compute[230518]: 2025-10-02 12:45:26.989 2 DEBUG nova.objects.instance [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'pci_devices' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.041 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:45:27 np0005466030 nova_compute[230518]:  <uuid>c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f</uuid>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:  <name>instance-0000007b</name>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1940292897</nova:name>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:45:25</nova:creationTime>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:45:27 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:        <nova:user uuid="b168e90f7c0c414ba26c576fb8706a80">tempest-ServerRescueNegativeTestJSON-488939839-project-member</nova:user>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:        <nova:project uuid="c87621e5c0ba4f13abfff528143c1c00">tempest-ServerRescueNegativeTestJSON-488939839</nova:project>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:        <nova:port uuid="241d570e-8eb4-4d2a-986b-b37fbcb780a9">
Oct  2 08:45:27 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <entry name="serial">c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f</entry>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <entry name="uuid">c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f</entry>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk">
Oct  2 08:45:27 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:45:27 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config">
Oct  2 08:45:27 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:45:27 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:2d:9b:0c"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <target dev="tap241d570e-8e"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/console.log" append="off"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:45:27 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:45:27 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:45:27 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:45:27 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.042 2 DEBUG nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Preparing to wait for external event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.042 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.042 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.042 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.043 2 DEBUG nova.virt.libvirt.vif [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1940292897',display_name='tempest-ServerRescueNegativeTestJSON-server-1940292897',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1940292897',id=123,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE3MP/X9GW3AkvsXC34rGYeE9R6n5owPQ7q879Zh3KlhVfQDQGNR9SPxUIek8XO1dhRvBM+bbuljUGfLUZmn0JV9ekbocSGkiHwYeMyp832Egmcx2kY1B+audZxDye556Q==',key_name='tempest-keypair-866203463',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c87621e5c0ba4f13abfff528143c1c00',ramdisk_id='',reservation_id='r-n6jrurpd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-488939839',owner_user_name='tempest-ServerRescueNegativeTestJSON-488939839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:45:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b168e90f7c0c414ba26c576fb8706a80',uuid=c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.043 2 DEBUG nova.network.os_vif_util [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converting VIF {"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.044 2 DEBUG nova.network.os_vif_util [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:9b:0c,bridge_name='br-int',has_traffic_filtering=True,id=241d570e-8eb4-4d2a-986b-b37fbcb780a9,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241d570e-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.044 2 DEBUG os_vif [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:9b:0c,bridge_name='br-int',has_traffic_filtering=True,id=241d570e-8eb4-4d2a-986b-b37fbcb780a9,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241d570e-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.045 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.045 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap241d570e-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.049 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap241d570e-8e, col_values=(('external_ids', {'iface-id': '241d570e-8eb4-4d2a-986b-b37fbcb780a9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:9b:0c', 'vm-uuid': 'c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:27 np0005466030 NetworkManager[44960]: <info>  [1759409127.0515] manager: (tap241d570e-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.058 2 INFO os_vif [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:9b:0c,bridge_name='br-int',has_traffic_filtering=True,id=241d570e-8eb4-4d2a-986b-b37fbcb780a9,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241d570e-8e')#033[00m
Oct  2 08:45:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.092 2 INFO nova.compute.manager [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Took 8.75 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.093 2 DEBUG oslo.service.loopingcall [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.093 2 DEBUG nova.compute.manager [-] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.094 2 DEBUG nova.network.neutron [-] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.276 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.276 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.276 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No VIF found with MAC fa:16:3e:2d:9b:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.277 2 INFO nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Using config drive#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.297 2 DEBUG nova.storage.rbd_utils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.498 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409112.4961956, 26db575f-26df-4e1b-b0d8-38a12df557e3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.499 2 INFO nova.compute.manager [-] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.621 2 DEBUG nova.compute.manager [None req-d8345a56-da49-44d6-94c8-0be86c203185 - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.678 2 INFO nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Creating config drive at /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.683 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptw3gwjbt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.709 2 DEBUG nova.network.neutron [req-0ba674e5-aab7-4fff-a1a0-fe02ed9f0b8a req-a9f045ec-4ff3-4189-a701-21e84d14ceb2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updated VIF entry in instance network info cache for port 241d570e-8eb4-4d2a-986b-b37fbcb780a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.709 2 DEBUG nova.network.neutron [req-0ba674e5-aab7-4fff-a1a0-fe02ed9f0b8a req-a9f045ec-4ff3-4189-a701-21e84d14ceb2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updating instance_info_cache with network_info: [{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:45:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:27.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:27.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.793 2 DEBUG oslo_concurrency.lockutils [req-0ba674e5-aab7-4fff-a1a0-fe02ed9f0b8a req-a9f045ec-4ff3-4189-a701-21e84d14ceb2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.815 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptw3gwjbt" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.840 2 DEBUG nova.storage.rbd_utils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:27 np0005466030 nova_compute[230518]: 2025-10-02 12:45:27.844 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e291 e291: 3 total, 3 up, 3 in
Oct  2 08:45:28 np0005466030 nova_compute[230518]: 2025-10-02 12:45:28.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:28 np0005466030 nova_compute[230518]: 2025-10-02 12:45:28.484 2 DEBUG nova.network.neutron [-] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:45:28 np0005466030 nova_compute[230518]: 2025-10-02 12:45:28.519 2 INFO nova.compute.manager [-] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Took 1.43 seconds to deallocate network for instance.#033[00m
Oct  2 08:45:28 np0005466030 nova_compute[230518]: 2025-10-02 12:45:28.567 2 DEBUG oslo_concurrency.lockutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:28 np0005466030 nova_compute[230518]: 2025-10-02 12:45:28.568 2 DEBUG oslo_concurrency.lockutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:28 np0005466030 nova_compute[230518]: 2025-10-02 12:45:28.633 2 DEBUG oslo_concurrency.processutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:28 np0005466030 nova_compute[230518]: 2025-10-02 12:45:28.875 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:28 np0005466030 nova_compute[230518]: 2025-10-02 12:45:28.877 2 INFO nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Deleting local config drive /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config because it was imported into RBD.#033[00m
Oct  2 08:45:28 np0005466030 kernel: tap241d570e-8e: entered promiscuous mode
Oct  2 08:45:28 np0005466030 NetworkManager[44960]: <info>  [1759409128.9330] manager: (tap241d570e-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/233)
Oct  2 08:45:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:45:28Z|00499|binding|INFO|Claiming lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 for this chassis.
Oct  2 08:45:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:45:28Z|00500|binding|INFO|241d570e-8eb4-4d2a-986b-b37fbcb780a9: Claiming fa:16:3e:2d:9b:0c 10.100.0.11
Oct  2 08:45:28 np0005466030 nova_compute[230518]: 2025-10-02 12:45:28.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:28.945 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:9b:0c 10.100.0.11'], port_security=['fa:16:3e:2d:9b:0c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e07b5251-78cf-4560-8d41-5dc3daef96ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=241d570e-8eb4-4d2a-986b-b37fbcb780a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:45:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:28.946 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 bound to our chassis#033[00m
Oct  2 08:45:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:28.947 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3934261-ba19-494f-8d9f-23360c5b30b9#033[00m
Oct  2 08:45:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:45:28Z|00501|binding|INFO|Setting lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 ovn-installed in OVS
Oct  2 08:45:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:45:28Z|00502|binding|INFO|Setting lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 up in Southbound
Oct  2 08:45:28 np0005466030 nova_compute[230518]: 2025-10-02 12:45:28.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:28 np0005466030 nova_compute[230518]: 2025-10-02 12:45:28.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:28.969 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[79133cd9-539d-4cd3-81d5-ca5e3f88a372]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:28.970 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3934261-b1 in ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:45:28 np0005466030 systemd-udevd[277306]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:45:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:28.973 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3934261-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:45:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:28.973 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c59dfd8b-d351-4504-8457-21e574e5edfe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:28.974 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fb686e15-190b-4e80-99e1-d25ddc7bc50d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:28 np0005466030 systemd-machined[188247]: New machine qemu-59-instance-0000007b.
Oct  2 08:45:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:28.987 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[e63eeda1-4f3c-4945-af0d-4d230208c31a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:28 np0005466030 NetworkManager[44960]: <info>  [1759409128.9910] device (tap241d570e-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:45:28 np0005466030 NetworkManager[44960]: <info>  [1759409128.9925] device (tap241d570e-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:45:28 np0005466030 systemd[1]: Started Virtual Machine qemu-59-instance-0000007b.
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.014 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[80229201-eafb-4983-ac7f-cd0d0797256a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.056 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d172a3-ad1b-40b6-bab8-614b15e4401b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:29 np0005466030 NetworkManager[44960]: <info>  [1759409129.0680] manager: (tapf3934261-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/234)
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.070 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8d12cee5-c072-46cd-acf5-6c62256df4d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.112 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca7b83d-4a05-47ee-b3ce-b29498a261cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.115 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a52b4197-74b5-4a1f-a8cf-a4e1035538fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:29 np0005466030 NetworkManager[44960]: <info>  [1759409129.1440] device (tapf3934261-b0): carrier: link connected
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.153 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[9425eca5-4c12-4a23-bf0f-c8fdb8cdd153]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:45:29 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3675489831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.176 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b779b26d-9a0c-43ab-9d74-effa826084c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699270, 'reachable_time': 29481, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277342, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.191 2 DEBUG nova.compute.manager [req-ccf56dc8-a984-4c0c-9cd6-f682bc50d9cf req-049511d6-60a4-4426-a9ed-aeed17f70e09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.192 2 DEBUG oslo_concurrency.lockutils [req-ccf56dc8-a984-4c0c-9cd6-f682bc50d9cf req-049511d6-60a4-4426-a9ed-aeed17f70e09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.193 2 DEBUG oslo_concurrency.lockutils [req-ccf56dc8-a984-4c0c-9cd6-f682bc50d9cf req-049511d6-60a4-4426-a9ed-aeed17f70e09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.193 2 DEBUG oslo_concurrency.lockutils [req-ccf56dc8-a984-4c0c-9cd6-f682bc50d9cf req-049511d6-60a4-4426-a9ed-aeed17f70e09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.194 2 DEBUG nova.compute.manager [req-ccf56dc8-a984-4c0c-9cd6-f682bc50d9cf req-049511d6-60a4-4426-a9ed-aeed17f70e09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Processing event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.195 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[11a1c233-5a4c-4f1c-b128-a4b06c27b970]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:9fbb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699270, 'tstamp': 699270}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277357, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.207 2 DEBUG oslo_concurrency.processutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.213 2 DEBUG nova.compute.provider_tree [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.219 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3897a1a5-65bf-4ca2-93e1-70e0daf9a2f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699270, 'reachable_time': 29481, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277359, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.253 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1e3026-080d-4461-b880-86dc32567c30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.273 2 DEBUG nova.scheduler.client.report [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.306 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[07f43dc9-7b53-49d0-8273-37be13cc0799]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.308 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.308 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.308 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3934261-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.309 2 DEBUG oslo_concurrency.lockutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:29 np0005466030 NetworkManager[44960]: <info>  [1759409129.3112] manager: (tapf3934261-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Oct  2 08:45:29 np0005466030 kernel: tapf3934261-b0: entered promiscuous mode
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.318 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3934261-b0, col_values=(('external_ids', {'iface-id': '3890f7a6-6cc9-4237-a2a2-3c43818c1748'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.320 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:45:29 np0005466030 ovn_controller[129257]: 2025-10-02T12:45:29Z|00503|binding|INFO|Releasing lport 3890f7a6-6cc9-4237-a2a2-3c43818c1748 from this chassis (sb_readonly=0)
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.322 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b7066670-db1d-4a12-825c-d2b417d47fcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.323 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-f3934261-ba19-494f-8d9f-23360c5b30b9
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID f3934261-ba19-494f-8d9f-23360c5b30b9
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:45:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.324 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'env', 'PROCESS_TAG=haproxy-f3934261-ba19-494f-8d9f-23360c5b30b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3934261-ba19-494f-8d9f-23360c5b30b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.356 2 INFO nova.scheduler.client.report [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Deleted allocations for instance 7621a774-e0bc-4f4f-b900-c3608dd6835a#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.498 2 DEBUG nova.compute.manager [req-f8302356-0969-4975-ad45-cc8df68875f1 req-66c9cf76-c79c-4ea8-bf83-e6afbee318dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received event network-vif-deleted-8d9cc17a-7804-4743-925a-496d9fe78c73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.506 2 DEBUG oslo_concurrency.lockutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:29.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:29 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:29.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.789 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409129.7892838, c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.790 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] VM Started (Lifecycle Event)#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.791 2 DEBUG nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.794 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:45:29 np0005466030 podman[277416]: 2025-10-02 12:45:29.70786606 +0000 UTC m=+0.029881981 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.798 2 INFO nova.virt.libvirt.driver [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance spawned successfully.#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.798 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.817 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.823 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.827 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.828 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.828 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.828 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.829 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.829 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.871 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.871 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409129.7894123, c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.871 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.897 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.900 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409129.793948, c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.900 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.904 2 INFO nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Took 10.09 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.904 2 DEBUG nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.930 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.933 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.955 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.970 2 INFO nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Took 11.36 seconds to build instance.#033[00m
Oct  2 08:45:29 np0005466030 nova_compute[230518]: 2025-10-02 12:45:29.998 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.489s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:30 np0005466030 podman[277416]: 2025-10-02 12:45:30.275451077 +0000 UTC m=+0.597467008 container create 95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:45:30 np0005466030 systemd[1]: Started libpod-conmon-95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3.scope.
Oct  2 08:45:30 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:45:30 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0cc289efa4928f4df71ab8c55274a5960f97d161eaafb88ba0444aad755767/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:45:30 np0005466030 podman[277416]: 2025-10-02 12:45:30.631234764 +0000 UTC m=+0.953250695 container init 95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:45:30 np0005466030 podman[277416]: 2025-10-02 12:45:30.637721599 +0000 UTC m=+0.959737500 container start 95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:45:30 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[277431]: [NOTICE]   (277435) : New worker (277437) forked
Oct  2 08:45:30 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[277431]: [NOTICE]   (277435) : Loading success.
Oct  2 08:45:31 np0005466030 nova_compute[230518]: 2025-10-02 12:45:31.293 2 DEBUG nova.compute.manager [req-6af84cec-0b13-428b-9211-cf792d5bba4f req-1c962b6a-e893-4981-9c52-65ddccb7b9fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:31 np0005466030 nova_compute[230518]: 2025-10-02 12:45:31.294 2 DEBUG oslo_concurrency.lockutils [req-6af84cec-0b13-428b-9211-cf792d5bba4f req-1c962b6a-e893-4981-9c52-65ddccb7b9fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:31 np0005466030 nova_compute[230518]: 2025-10-02 12:45:31.294 2 DEBUG oslo_concurrency.lockutils [req-6af84cec-0b13-428b-9211-cf792d5bba4f req-1c962b6a-e893-4981-9c52-65ddccb7b9fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:31 np0005466030 nova_compute[230518]: 2025-10-02 12:45:31.294 2 DEBUG oslo_concurrency.lockutils [req-6af84cec-0b13-428b-9211-cf792d5bba4f req-1c962b6a-e893-4981-9c52-65ddccb7b9fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:31 np0005466030 nova_compute[230518]: 2025-10-02 12:45:31.295 2 DEBUG nova.compute.manager [req-6af84cec-0b13-428b-9211-cf792d5bba4f req-1c962b6a-e893-4981-9c52-65ddccb7b9fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:45:31 np0005466030 nova_compute[230518]: 2025-10-02 12:45:31.295 2 WARNING nova.compute.manager [req-6af84cec-0b13-428b-9211-cf792d5bba4f req-1c962b6a-e893-4981-9c52-65ddccb7b9fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:45:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:31 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:31.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:31.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:32 np0005466030 nova_compute[230518]: 2025-10-02 12:45:32.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:33 np0005466030 nova_compute[230518]: 2025-10-02 12:45:33.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:33.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:33 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:33.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:34 np0005466030 nova_compute[230518]: 2025-10-02 12:45:34.182 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409119.1768508, 7621a774-e0bc-4f4f-b900-c3608dd6835a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:45:34 np0005466030 nova_compute[230518]: 2025-10-02 12:45:34.183 2 INFO nova.compute.manager [-] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:45:34 np0005466030 nova_compute[230518]: 2025-10-02 12:45:34.211 2 DEBUG nova.compute.manager [None req-71181f9b-aca8-4b1f-b9f3-9bc5b9968ea5 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:45:34 np0005466030 nova_compute[230518]: 2025-10-02 12:45:34.262 2 DEBUG nova.compute.manager [req-574db175-3788-4744-98e8-75fe4f6bded2 req-477a27ce-4d13-4245-b89f-25eaef86c1af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-changed-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:45:34 np0005466030 nova_compute[230518]: 2025-10-02 12:45:34.262 2 DEBUG nova.compute.manager [req-574db175-3788-4744-98e8-75fe4f6bded2 req-477a27ce-4d13-4245-b89f-25eaef86c1af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Refreshing instance network info cache due to event network-changed-241d570e-8eb4-4d2a-986b-b37fbcb780a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:45:34 np0005466030 nova_compute[230518]: 2025-10-02 12:45:34.262 2 DEBUG oslo_concurrency.lockutils [req-574db175-3788-4744-98e8-75fe4f6bded2 req-477a27ce-4d13-4245-b89f-25eaef86c1af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:45:34 np0005466030 nova_compute[230518]: 2025-10-02 12:45:34.263 2 DEBUG oslo_concurrency.lockutils [req-574db175-3788-4744-98e8-75fe4f6bded2 req-477a27ce-4d13-4245-b89f-25eaef86c1af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:45:34 np0005466030 nova_compute[230518]: 2025-10-02 12:45:34.263 2 DEBUG nova.network.neutron [req-574db175-3788-4744-98e8-75fe4f6bded2 req-477a27ce-4d13-4245-b89f-25eaef86c1af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Refreshing network info cache for port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:45:34 np0005466030 ovn_controller[129257]: 2025-10-02T12:45:34Z|00504|binding|INFO|Releasing lport 3890f7a6-6cc9-4237-a2a2-3c43818c1748 from this chassis (sb_readonly=0)
Oct  2 08:45:34 np0005466030 nova_compute[230518]: 2025-10-02 12:45:34.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:35.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:35 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:35.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:35 np0005466030 nova_compute[230518]: 2025-10-02 12:45:35.873 2 DEBUG nova.network.neutron [req-574db175-3788-4744-98e8-75fe4f6bded2 req-477a27ce-4d13-4245-b89f-25eaef86c1af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updated VIF entry in instance network info cache for port 241d570e-8eb4-4d2a-986b-b37fbcb780a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:45:35 np0005466030 nova_compute[230518]: 2025-10-02 12:45:35.874 2 DEBUG nova.network.neutron [req-574db175-3788-4744-98e8-75fe4f6bded2 req-477a27ce-4d13-4245-b89f-25eaef86c1af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updating instance_info_cache with network_info: [{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:45:35 np0005466030 nova_compute[230518]: 2025-10-02 12:45:35.894 2 DEBUG oslo_concurrency.lockutils [req-574db175-3788-4744-98e8-75fe4f6bded2 req-477a27ce-4d13-4245-b89f-25eaef86c1af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:45:37 np0005466030 nova_compute[230518]: 2025-10-02 12:45:37.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 e292: 3 total, 3 up, 3 in
Oct  2 08:45:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:37 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:37.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:37.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:38 np0005466030 nova_compute[230518]: 2025-10-02 12:45:38.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 39K writes, 154K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s#012Cumulative WAL: 39K writes, 14K syncs, 2.81 writes per sync, written: 0.15 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9106 writes, 36K keys, 9106 commit groups, 1.0 writes per commit group, ingest: 36.49 MB, 0.06 MB/s#012Interval WAL: 9105 writes, 3449 syncs, 2.64 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 08:45:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:39.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:39 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:39.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:45:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:41.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:45:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:41 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:41.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:41 np0005466030 podman[277448]: 2025-10-02 12:45:41.884656646 +0000 UTC m=+0.115173523 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 08:45:41 np0005466030 podman[277447]: 2025-10-02 12:45:41.893247696 +0000 UTC m=+0.135028206 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct  2 08:45:42 np0005466030 nova_compute[230518]: 2025-10-02 12:45:42.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:43 np0005466030 nova_compute[230518]: 2025-10-02 12:45:43.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:43 np0005466030 nova_compute[230518]: 2025-10-02 12:45:43.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:43 np0005466030 ovn_controller[129257]: 2025-10-02T12:45:43Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:9b:0c 10.100.0.11
Oct  2 08:45:43 np0005466030 ovn_controller[129257]: 2025-10-02T12:45:43Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:9b:0c 10.100.0.11
Oct  2 08:45:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:43.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:43.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:45.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:45 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:45.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:47 np0005466030 nova_compute[230518]: 2025-10-02 12:45:47.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:47 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:47.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:45:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:47.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:45:48 np0005466030 nova_compute[230518]: 2025-10-02 12:45:48.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:48 np0005466030 nova_compute[230518]: 2025-10-02 12:45:48.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:49.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:49 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:49.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:49 np0005466030 nova_compute[230518]: 2025-10-02 12:45:49.984 2 DEBUG oslo_concurrency.lockutils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:49 np0005466030 nova_compute[230518]: 2025-10-02 12:45:49.985 2 DEBUG oslo_concurrency.lockutils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.005 2 DEBUG nova.objects.instance [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'flavor' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.048 2 DEBUG oslo_concurrency.lockutils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.232 2 DEBUG oslo_concurrency.lockutils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.233 2 DEBUG oslo_concurrency.lockutils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.234 2 INFO nova.compute.manager [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Attaching volume 9405efbb-874d-467a-93c3-bbd76870d422 to /dev/vdb#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.359 2 DEBUG os_brick.utils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.362 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.379 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.379 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[2275a5af-1526-4dfe-91cb-27374444f029]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.380 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.392 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.392 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[ac6c413a-5869-43c6-9064-c85ea98f1942]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.396 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.407 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.407 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[5d080bf8-d121-4c44-bdd1-53060063c600]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.409 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[614bccfe-ef19-4ea5-9f6f-3f8fe7389d52]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.411 2 DEBUG oslo_concurrency.processutils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.464 2 DEBUG oslo_concurrency.processutils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "nvme version" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.469 2 DEBUG os_brick.initiator.connectors.lightos [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.470 2 DEBUG os_brick.initiator.connectors.lightos [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.471 2 DEBUG os_brick.initiator.connectors.lightos [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.471 2 DEBUG os_brick.utils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] <== get_connector_properties: return (111ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
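The connector-properties record above carries a Python dict literal (single quotes, `True`/`False`), so it parses with `ast.literal_eval` rather than `json.loads`. A minimal sketch, using an abbreviated copy of the logged payload:

```python
import ast

# Abbreviated from the get_connector_properties return value logged above.
# Single-quoted dict literal -> ast.literal_eval, not json.loads.
line = ("<== get_connector_properties: return (111ms) "
        "{'platform': 'x86_64', 'os_type': 'linux', "
        "'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', "
        "'multipath': True, "
        "'nqn': 'nqn.2014-08.org.nvmexpress:uuid:"
        "2f7d2450-18ac-43a6-80ee-9caa4a7736e0'}")

# Grab the dict literal between the first '{' and the last '}'.
props = ast.literal_eval(line[line.index("{"): line.rindex("}") + 1])

print(props["initiator"])   # iqn.1994-05.com.redhat:d783e47ecf
print(props["multipath"])   # True
```

This is only a log-scraping aid; os-brick itself returns the dict directly to Nova, which hands it to Cinder when creating the volume attachment.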
Oct  2 08:45:50 np0005466030 nova_compute[230518]: 2025-10-02 12:45:50.472 2 DEBUG nova.virt.block_device [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updating existing volume attachment record: 8795165d-376d-4129-82b5-208bafc1acc5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:45:51 np0005466030 nova_compute[230518]: 2025-10-02 12:45:51.235 2 DEBUG nova.objects.instance [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'flavor' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:51 np0005466030 nova_compute[230518]: 2025-10-02 12:45:51.256 2 DEBUG nova.virt.libvirt.driver [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Attempting to attach volume 9405efbb-874d-467a-93c3-bbd76870d422 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 08:45:51 np0005466030 nova_compute[230518]: 2025-10-02 12:45:51.260 2 DEBUG nova.virt.libvirt.guest [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 08:45:51 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:45:51 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-9405efbb-874d-467a-93c3-bbd76870d422">
Oct  2 08:45:51 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:45:51 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:45:51 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:45:51 np0005466030 nova_compute[230518]:  </source>
Oct  2 08:45:51 np0005466030 nova_compute[230518]:  <auth username="openstack">
Oct  2 08:45:51 np0005466030 nova_compute[230518]:    <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:45:51 np0005466030 nova_compute[230518]:  </auth>
Oct  2 08:45:51 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:45:51 np0005466030 nova_compute[230518]:  <serial>9405efbb-874d-467a-93c3-bbd76870d422</serial>
Oct  2 08:45:51 np0005466030 nova_compute[230518]: </disk>
Oct  2 08:45:51 np0005466030 nova_compute[230518]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
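The disk element logged above can be rebuilt programmatically to inspect or replay the attach. A sketch with Python's stdlib `xml.etree`; the RBD pool/volume name, monitor addresses, and secret UUID are copied from the logged snippet, and the final libvirt call is shown only as a comment (it assumes libvirt-python and a live domain handle):

```python
import xml.etree.ElementTree as ET

# Rebuild the <disk> element from the attach_device log entry above.
disk = ET.Element("disk", type="network", device="disk")
ET.SubElement(disk, "driver", name="qemu", type="raw",
              cache="none", discard="unmap")
source = ET.SubElement(
    disk, "source", protocol="rbd",
    name="volumes/volume-9405efbb-874d-467a-93c3-bbd76870d422")
for mon in ("192.168.122.100", "192.168.122.102", "192.168.122.101"):
    ET.SubElement(source, "host", name=mon, port="6789")
auth = ET.SubElement(disk, "auth", username="openstack")
ET.SubElement(auth, "secret", type="ceph",
              uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3")
ET.SubElement(disk, "target", dev="vdb", bus="virtio")
ET.SubElement(disk, "serial").text = "9405efbb-874d-467a-93c3-bbd76870d422"

xml = ET.tostring(disk, encoding="unicode")
# With libvirt-python and a running libvirtd, Nova's guest.attach_device
# boils down to something like:
#   dom.attachDeviceFlags(xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
```

Note the warning two lines earlier: with `target_bus = virtio` the discard="unmap" hint is logged as unsupported, so trim commands will not reach the RBD volume despite the driver attribute.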
Oct  2 08:45:51 np0005466030 nova_compute[230518]: 2025-10-02 12:45:51.463 2 DEBUG nova.virt.libvirt.driver [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:45:51 np0005466030 nova_compute[230518]: 2025-10-02 12:45:51.464 2 DEBUG nova.virt.libvirt.driver [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:45:51 np0005466030 nova_compute[230518]: 2025-10-02 12:45:51.464 2 DEBUG nova.virt.libvirt.driver [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:45:51 np0005466030 nova_compute[230518]: 2025-10-02 12:45:51.464 2 DEBUG nova.virt.libvirt.driver [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No VIF found with MAC fa:16:3e:2d:9b:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:45:51 np0005466030 nova_compute[230518]: 2025-10-02 12:45:51.664 2 DEBUG oslo_concurrency.lockutils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:45:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:51.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:51.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:52 np0005466030 nova_compute[230518]: 2025-10-02 12:45:52.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:52 np0005466030 nova_compute[230518]: 2025-10-02 12:45:52.918 2 INFO nova.compute.manager [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Rescuing#033[00m
Oct  2 08:45:52 np0005466030 nova_compute[230518]: 2025-10-02 12:45:52.918 2 DEBUG oslo_concurrency.lockutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:45:52 np0005466030 nova_compute[230518]: 2025-10-02 12:45:52.918 2 DEBUG oslo_concurrency.lockutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquired lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:45:52 np0005466030 nova_compute[230518]: 2025-10-02 12:45:52.918 2 DEBUG nova.network.neutron [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:45:53 np0005466030 nova_compute[230518]: 2025-10-02 12:45:53.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:53 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:53.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:53.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:53 np0005466030 podman[277519]: 2025-10-02 12:45:53.831310972 +0000 UTC m=+0.065300625 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  2 08:45:53 np0005466030 podman[277518]: 2025-10-02 12:45:53.840136699 +0000 UTC m=+0.073409759 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:45:54 np0005466030 nova_compute[230518]: 2025-10-02 12:45:54.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:45:54 np0005466030 nova_compute[230518]: 2025-10-02 12:45:54.863 2 DEBUG nova.network.neutron [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updating instance_info_cache with network_info: [{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
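Unlike the connector properties, the `network_info` payload in the cache update above is valid JSON (double quotes, `true`/`null`), so it can be lifted straight out of the log and parsed. A sketch on an abbreviated copy of the logged VIF:

```python
import json

# Abbreviated network_info from the instance_info_cache update above.
payload = '''[{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9",
  "address": "fa:16:3e:2d:9b:0c",
  "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9",
              "bridge": "br-int",
              "subnets": [{"cidr": "10.100.0.0/28",
                           "ips": [{"address": "10.100.0.11",
                                    "floating_ips": [
                                        {"address": "192.168.122.188"}]}]}]},
  "devname": "tap241d570e-8e",
  "active": true}]'''

vifs = json.loads(payload)
print(vifs[0]["devname"])                        # tap241d570e-8e
print(vifs[0]["network"]["subnets"][0]["cidr"])  # 10.100.0.0/28
```

The `devname` here (`tap241d570e-8e`) is the same tap interface that the kernel, NetworkManager, and ovn_controller report tearing down a few seconds later when the instance is shut down for rescue.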
Oct  2 08:45:54 np0005466030 nova_compute[230518]: 2025-10-02 12:45:54.880 2 DEBUG oslo_concurrency.lockutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Releasing lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:45:55 np0005466030 nova_compute[230518]: 2025-10-02 12:45:55.132 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:45:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:45:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:55.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:45:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:55 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:55.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:57 np0005466030 nova_compute[230518]: 2025-10-02 12:45:57.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:45:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:57 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:57.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:45:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:57.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
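The recurring radosgw `beast:` lines (here and earlier in this excerpt) are haproxy-style access records: client, user, timestamp, request, status, bytes, latency. A hypothetical regex for pulling the interesting fields out of one such line:

```python
import re

# Field layout assumed from the beast lines in this log; the trailing
# "- - -" columns are skipped with a lazy catch-all before "latency=".
BEAST = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous '
        '[02/Oct/2025:12:45:57.787 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.000999991s')

m = BEAST.search(line)
print(m["client"], m["status"], m["latency"])
# 192.168.122.102 200 0.000999991
```

The anonymous `HEAD /` probes arriving every ~2 seconds from 192.168.122.100 and .102 look like load-balancer health checks against the RGW endpoint, which is why they repeat throughout the excerpt.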
Oct  2 08:45:57 np0005466030 kernel: tap241d570e-8e (unregistering): left promiscuous mode
Oct  2 08:45:57 np0005466030 NetworkManager[44960]: <info>  [1759409157.8590] device (tap241d570e-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:45:57 np0005466030 ovn_controller[129257]: 2025-10-02T12:45:57Z|00505|binding|INFO|Releasing lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 from this chassis (sb_readonly=0)
Oct  2 08:45:57 np0005466030 ovn_controller[129257]: 2025-10-02T12:45:57Z|00506|binding|INFO|Setting lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 down in Southbound
Oct  2 08:45:57 np0005466030 ovn_controller[129257]: 2025-10-02T12:45:57Z|00507|binding|INFO|Removing iface tap241d570e-8e ovn-installed in OVS
Oct  2 08:45:57 np0005466030 nova_compute[230518]: 2025-10-02 12:45:57.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:57 np0005466030 nova_compute[230518]: 2025-10-02 12:45:57.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:57.879 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:9b:0c 10.100.0.11'], port_security=['fa:16:3e:2d:9b:0c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e07b5251-78cf-4560-8d41-5dc3daef96ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=241d570e-8eb4-4d2a-986b-b37fbcb780a9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:45:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:57.881 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 unbound from our chassis#033[00m
Oct  2 08:45:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:57.884 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3934261-ba19-494f-8d9f-23360c5b30b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:45:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:57.886 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fcad731b-9944-4bf3-a099-2c924be1e448]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:45:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:45:57.887 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace which is not needed anymore#033[00m
Oct  2 08:45:57 np0005466030 nova_compute[230518]: 2025-10-02 12:45:57.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:57 np0005466030 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Oct  2 08:45:57 np0005466030 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000007b.scope: Consumed 14.416s CPU time.
Oct  2 08:45:57 np0005466030 systemd-machined[188247]: Machine qemu-59-instance-0000007b terminated.
Oct  2 08:45:58 np0005466030 nova_compute[230518]: 2025-10-02 12:45:58.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:58 np0005466030 nova_compute[230518]: 2025-10-02 12:45:58.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:45:58 np0005466030 nova_compute[230518]: 2025-10-02 12:45:58.148 2 INFO nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 08:45:58 np0005466030 nova_compute[230518]: 2025-10-02 12:45:58.155 2 INFO nova.virt.libvirt.driver [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance destroyed successfully.#033[00m
Oct  2 08:45:58 np0005466030 nova_compute[230518]: 2025-10-02 12:45:58.156 2 DEBUG nova.objects.instance [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'numa_topology' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:45:58 np0005466030 nova_compute[230518]: 2025-10-02 12:45:58.176 2 INFO nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Attempting rescue#033[00m
Oct  2 08:45:58 np0005466030 nova_compute[230518]: 2025-10-02 12:45:58.177 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Oct  2 08:45:58 np0005466030 nova_compute[230518]: 2025-10-02 12:45:58.182 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:45:58 np0005466030 nova_compute[230518]: 2025-10-02 12:45:58.183 2 INFO nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Creating image(s)#033[00m
Oct  2 08:45:58 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[277431]: [NOTICE]   (277435) : haproxy version is 2.8.14-c23fe91
Oct  2 08:45:58 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[277431]: [NOTICE]   (277435) : path to executable is /usr/sbin/haproxy
Oct  2 08:45:58 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[277431]: [WARNING]  (277435) : Exiting Master process...
Oct  2 08:45:58 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[277431]: [WARNING]  (277435) : Exiting Master process...
Oct  2 08:45:58 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[277431]: [ALERT]    (277435) : Current worker (277437) exited with code 143 (Terminated)
Oct  2 08:45:58 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[277431]: [WARNING]  (277435) : All workers exited. Exiting... (0)
Oct  2 08:45:58 np0005466030 systemd[1]: libpod-95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3.scope: Deactivated successfully.
Oct  2 08:45:58 np0005466030 podman[277580]: 2025-10-02 12:45:58.536698771 +0000 UTC m=+0.556609174 container died 95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 08:45:58 np0005466030 nova_compute[230518]: 2025-10-02 12:45:58.866 2 DEBUG nova.storage.rbd_utils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:45:58 np0005466030 nova_compute[230518]: 2025-10-02 12:45:58.871 2 DEBUG nova.objects.instance [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:45:58 np0005466030 nova_compute[230518]: 2025-10-02 12:45:58.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:45:58 np0005466030 nova_compute[230518]: 2025-10-02 12:45:58.881 2 DEBUG nova.compute.manager [req-99232f9d-0a3d-438a-a00c-0b08a49babc0 req-b6aec4cb-625c-465c-82b5-0024e8ee8265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-unplugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:45:58 np0005466030 nova_compute[230518]: 2025-10-02 12:45:58.881 2 DEBUG oslo_concurrency.lockutils [req-99232f9d-0a3d-438a-a00c-0b08a49babc0 req-b6aec4cb-625c-465c-82b5-0024e8ee8265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:45:58 np0005466030 nova_compute[230518]: 2025-10-02 12:45:58.882 2 DEBUG oslo_concurrency.lockutils [req-99232f9d-0a3d-438a-a00c-0b08a49babc0 req-b6aec4cb-625c-465c-82b5-0024e8ee8265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:45:58 np0005466030 nova_compute[230518]: 2025-10-02 12:45:58.882 2 DEBUG oslo_concurrency.lockutils [req-99232f9d-0a3d-438a-a00c-0b08a49babc0 req-b6aec4cb-625c-465c-82b5-0024e8ee8265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:45:58 np0005466030 nova_compute[230518]: 2025-10-02 12:45:58.883 2 DEBUG nova.compute.manager [req-99232f9d-0a3d-438a-a00c-0b08a49babc0 req-b6aec4cb-625c-465c-82b5-0024e8ee8265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-unplugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:45:58 np0005466030 nova_compute[230518]: 2025-10-02 12:45:58.883 2 WARNING nova.compute.manager [req-99232f9d-0a3d-438a-a00c-0b08a49babc0 req-b6aec4cb-625c-465c-82b5-0024e8ee8265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-unplugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state active and task_state rescuing.
Oct  2 08:45:59 np0005466030 nova_compute[230518]: 2025-10-02 12:45:59.088 2 DEBUG nova.storage.rbd_utils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:45:59 np0005466030 nova_compute[230518]: 2025-10-02 12:45:59.112 2 DEBUG nova.storage.rbd_utils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:45:59 np0005466030 nova_compute[230518]: 2025-10-02 12:45:59.116 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:45:59 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3-userdata-shm.mount: Deactivated successfully.
Oct  2 08:45:59 np0005466030 systemd[1]: var-lib-containers-storage-overlay-df0cc289efa4928f4df71ab8c55274a5960f97d161eaafb88ba0444aad755767-merged.mount: Deactivated successfully.
Oct  2 08:45:59 np0005466030 nova_compute[230518]: 2025-10-02 12:45:59.197 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:45:59 np0005466030 nova_compute[230518]: 2025-10-02 12:45:59.199 2 DEBUG oslo_concurrency.lockutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:45:59 np0005466030 nova_compute[230518]: 2025-10-02 12:45:59.200 2 DEBUG oslo_concurrency.lockutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:45:59 np0005466030 nova_compute[230518]: 2025-10-02 12:45:59.200 2 DEBUG oslo_concurrency.lockutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:45:59 np0005466030 podman[277580]: 2025-10-02 12:45:59.326084933 +0000 UTC m=+1.345995346 container cleanup 95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:45:59 np0005466030 systemd[1]: libpod-conmon-95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3.scope: Deactivated successfully.
Oct  2 08:45:59 np0005466030 nova_compute[230518]: 2025-10-02 12:45:59.359 2 DEBUG nova.storage.rbd_utils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:45:59 np0005466030 nova_compute[230518]: 2025-10-02 12:45:59.365 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:45:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:45:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:45:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:59.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:45:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:45:59 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:59.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:00 np0005466030 podman[277690]: 2025-10-02 12:46:00.206329782 +0000 UTC m=+0.837962231 container remove 95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:46:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:00.217 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3dae729a-a293-44bf-a8bd-de0f7303109a]: (4, ('Thu Oct  2 12:45:57 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3)\n95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3\nThu Oct  2 12:45:59 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3)\n95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:46:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:00.219 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[50eca9af-2f5d-4701-b3fb-a34f94314950]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:46:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:00.221 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:46:00 np0005466030 kernel: tapf3934261-b0: left promiscuous mode
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:46:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:00.257 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b0d2d6-5d08-4134-9d8e-2ae798dce630]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:46:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:00.282 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[75da7018-e895-43a7-aee8-7b662084aded]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:46:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:00.283 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d98974af-7fc7-4aca-a4f7-2e6fc15e02f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:46:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:00.300 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e5ce9866-5207-4c87-a1ca-2374fe09ef54]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699259, 'reachable_time': 35117, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277733, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:46:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:00.302 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct  2 08:46:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:00.303 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[fe28ea92-085d-4673-b465-a354184aea58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:46:00 np0005466030 systemd[1]: run-netns-ovnmeta\x2df3934261\x2dba19\x2d494f\x2d8d9f\x2d23360c5b30b9.mount: Deactivated successfully.
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.542 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.543 2 DEBUG nova.objects.instance [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'migration_context' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.559 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.560 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Start _get_guest_xml network_info=[{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "vif_mac": "fa:16:3e:2d:9b:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.560 2 DEBUG nova.objects.instance [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'resources' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.579 2 DEBUG nova.compute.manager [req-f1ae5818-4dfa-4f00-aeba-fac24e358f16 req-ca24ca8e-66d6-458b-94f3-e61486100e67 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.580 2 DEBUG oslo_concurrency.lockutils [req-f1ae5818-4dfa-4f00-aeba-fac24e358f16 req-ca24ca8e-66d6-458b-94f3-e61486100e67 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.580 2 DEBUG oslo_concurrency.lockutils [req-f1ae5818-4dfa-4f00-aeba-fac24e358f16 req-ca24ca8e-66d6-458b-94f3-e61486100e67 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.580 2 DEBUG oslo_concurrency.lockutils [req-f1ae5818-4dfa-4f00-aeba-fac24e358f16 req-ca24ca8e-66d6-458b-94f3-e61486100e67 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.580 2 DEBUG nova.compute.manager [req-f1ae5818-4dfa-4f00-aeba-fac24e358f16 req-ca24ca8e-66d6-458b-94f3-e61486100e67 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.581 2 WARNING nova.compute.manager [req-f1ae5818-4dfa-4f00-aeba-fac24e358f16 req-ca24ca8e-66d6-458b-94f3-e61486100e67 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state active and task_state rescuing.
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.581 2 WARNING nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.586 2 DEBUG nova.virt.libvirt.host [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.587 2 DEBUG nova.virt.libvirt.host [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.589 2 DEBUG nova.virt.libvirt.host [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.590 2 DEBUG nova.virt.libvirt.host [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.590 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.591 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.591 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.591 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.591 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.591 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.592 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.592 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.592 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.592 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.592 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.592 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.592 2 DEBUG nova.objects.instance [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:46:00 np0005466030 nova_compute[230518]: 2025-10-02 12:46:00.612 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.065 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.066 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:46:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:46:01 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/199098813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.135 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.137 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.167 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.168 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.168 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.169 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.169 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:46:01 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3014393884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.596 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.597 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:46:01 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/459826867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.630 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.697 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.698 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.698 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:46:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:01.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:01 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:01.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.858 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.860 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4363MB free_disk=20.781219482421875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.860 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.861 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.981 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.981 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:46:01 np0005466030 nova_compute[230518]: 2025-10-02 12:46:01.982 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:46:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:46:02 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/626934885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.130 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.132 2 DEBUG nova.virt.libvirt.vif [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1940292897',display_name='tempest-ServerRescueNegativeTestJSON-server-1940292897',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1940292897',id=123,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE3MP/X9GW3AkvsXC34rGYeE9R6n5owPQ7q879Zh3KlhVfQDQGNR9SPxUIek8XO1dhRvBM+bbuljUGfLUZmn0JV9ekbocSGkiHwYeMyp832Egmcx2kY1B+audZxDye556Q==',key_name='tempest-keypair-866203463',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:45:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c87621e5c0ba4f13abfff528143c1c00',ramdisk_id='',reservation_id='r-n6jrurpd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-488939839',owner_user_name='tempest-ServerRescueNegativeTestJSON-488939839-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:45:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b168e90f7c0c414ba26c576fb8706a80',uuid=c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "vif_mac": "fa:16:3e:2d:9b:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.132 2 DEBUG nova.network.os_vif_util [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converting VIF {"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "vif_mac": "fa:16:3e:2d:9b:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.133 2 DEBUG nova.network.os_vif_util [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:9b:0c,bridge_name='br-int',has_traffic_filtering=True,id=241d570e-8eb4-4d2a-986b-b37fbcb780a9,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241d570e-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.134 2 DEBUG nova.objects.instance [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'pci_devices' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.135 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:46:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.165 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:  <uuid>c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f</uuid>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:  <name>instance-0000007b</name>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1940292897</nova:name>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:46:00</nova:creationTime>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <nova:user uuid="b168e90f7c0c414ba26c576fb8706a80">tempest-ServerRescueNegativeTestJSON-488939839-project-member</nova:user>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <nova:project uuid="c87621e5c0ba4f13abfff528143c1c00">tempest-ServerRescueNegativeTestJSON-488939839</nova:project>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <nova:port uuid="241d570e-8eb4-4d2a-986b-b37fbcb780a9">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <entry name="serial">c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f</entry>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <entry name="uuid">c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f</entry>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.rescue">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <target dev="vdb" bus="virtio"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config.rescue">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:2d:9b:0c"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <target dev="tap241d570e-8e"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/console.log" append="off"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:46:02 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:46:02 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:46:02 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:46:02 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.172 2 INFO nova.virt.libvirt.driver [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance destroyed successfully.#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.185 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.185 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.199 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.222 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.245 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.245 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.246 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.246 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No VIF found with MAC fa:16:3e:2d:9b:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.246 2 INFO nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Using config drive#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.269 2 DEBUG nova.storage.rbd_utils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.274 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.314 2 DEBUG nova.objects.instance [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.347 2 DEBUG nova.objects.instance [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'keypairs' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:46:02 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1811187702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.783 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.789 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.818 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.859 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.860 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.918 2 INFO nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Creating config drive at /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config.rescue#033[00m
Oct  2 08:46:02 np0005466030 nova_compute[230518]: 2025-10-02 12:46:02.931 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpylgsn0hh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:03 np0005466030 nova_compute[230518]: 2025-10-02 12:46:03.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:03 np0005466030 nova_compute[230518]: 2025-10-02 12:46:03.094 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpylgsn0hh" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:03 np0005466030 nova_compute[230518]: 2025-10-02 12:46:03.126 2 DEBUG nova.storage.rbd_utils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:03 np0005466030 nova_compute[230518]: 2025-10-02 12:46:03.130 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config.rescue c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:03 np0005466030 nova_compute[230518]: 2025-10-02 12:46:03.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:03 np0005466030 nova_compute[230518]: 2025-10-02 12:46:03.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:46:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:03.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:46:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:03.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:03 np0005466030 nova_compute[230518]: 2025-10-02 12:46:03.960 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config.rescue c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.830s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:03 np0005466030 nova_compute[230518]: 2025-10-02 12:46:03.961 2 INFO nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Deleting local config drive /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config.rescue because it was imported into RBD.#033[00m
Oct  2 08:46:04 np0005466030 kernel: tap241d570e-8e: entered promiscuous mode
Oct  2 08:46:04 np0005466030 NetworkManager[44960]: <info>  [1759409164.0131] manager: (tap241d570e-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/236)
Oct  2 08:46:04 np0005466030 ovn_controller[129257]: 2025-10-02T12:46:04Z|00508|binding|INFO|Claiming lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 for this chassis.
Oct  2 08:46:04 np0005466030 ovn_controller[129257]: 2025-10-02T12:46:04Z|00509|binding|INFO|241d570e-8eb4-4d2a-986b-b37fbcb780a9: Claiming fa:16:3e:2d:9b:0c 10.100.0.11
Oct  2 08:46:04 np0005466030 nova_compute[230518]: 2025-10-02 12:46:04.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:04 np0005466030 nova_compute[230518]: 2025-10-02 12:46:04.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:04 np0005466030 systemd-machined[188247]: New machine qemu-60-instance-0000007b.
Oct  2 08:46:04 np0005466030 nova_compute[230518]: 2025-10-02 12:46:04.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:04 np0005466030 NetworkManager[44960]: <info>  [1759409164.0443] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Oct  2 08:46:04 np0005466030 NetworkManager[44960]: <info>  [1759409164.0447] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.045 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:9b:0c 10.100.0.11'], port_security=['fa:16:3e:2d:9b:0c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e07b5251-78cf-4560-8d41-5dc3daef96ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=241d570e-8eb4-4d2a-986b-b37fbcb780a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.046 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 bound to our chassis#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.047 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3934261-ba19-494f-8d9f-23360c5b30b9#033[00m
Oct  2 08:46:04 np0005466030 systemd[1]: Started Virtual Machine qemu-60-instance-0000007b.
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.058 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ceac6211-c60d-4153-8d73-c1db091e3a23]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.059 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3934261-b1 in ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.063 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3934261-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.063 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c439f6f2-d4b7-43c8-99f4-3c705256ead5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466030 systemd-udevd[277918]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.066 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2165e13e-3279-40d8-82fd-2d137c2e246b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466030 NetworkManager[44960]: <info>  [1759409164.0786] device (tap241d570e-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.077 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[026e9c50-05f0-4cc1-be68-0949fabc473b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466030 NetworkManager[44960]: <info>  [1759409164.0803] device (tap241d570e-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.104 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b71b21-814a-4c34-9509-f5bdabaabc3c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.136 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[48946bf0-7434-4bb9-a692-3aaca26ba811]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.142 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dfcc12a2-c051-45e3-8f5b-ae0149626abd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466030 NetworkManager[44960]: <info>  [1759409164.1438] manager: (tapf3934261-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/239)
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.174 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b7bfa173-4350-4c31-b73e-fe3aeb30a2b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.177 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[86b32976-8c44-49e6-af7c-6443d50fa501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466030 NetworkManager[44960]: <info>  [1759409164.2069] device (tapf3934261-b0): carrier: link connected
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.212 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[42e8b0d0-1a05-44a0-bd01-1a4b8a55beac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.235 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[baf8e688-ad7c-405b-8d83-6028700623a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 156], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702776, 'reachable_time': 30594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277950, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466030 nova_compute[230518]: 2025-10-02 12:46:04.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:04 np0005466030 nova_compute[230518]: 2025-10-02 12:46:04.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.255 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2b2f0354-844e-4d49-b14b-17cd20d39531]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:9fbb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702776, 'tstamp': 702776}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277951, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466030 ovn_controller[129257]: 2025-10-02T12:46:04Z|00510|binding|INFO|Setting lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 ovn-installed in OVS
Oct  2 08:46:04 np0005466030 ovn_controller[129257]: 2025-10-02T12:46:04Z|00511|binding|INFO|Setting lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 up in Southbound
Oct  2 08:46:04 np0005466030 nova_compute[230518]: 2025-10-02 12:46:04.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.274 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[13d31c08-afa8-4b3f-a4c3-90fd175a3b20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 156], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702776, 'reachable_time': 30594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277952, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.310 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[84455169-edae-4a0d-9b22-716765a95926]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.372 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f10dcb16-08f1-4e7d-90f7-38a11169c81e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.374 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.374 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.375 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3934261-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:04 np0005466030 nova_compute[230518]: 2025-10-02 12:46:04.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:04 np0005466030 kernel: tapf3934261-b0: entered promiscuous mode
Oct  2 08:46:04 np0005466030 NetworkManager[44960]: <info>  [1759409164.3773] manager: (tapf3934261-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Oct  2 08:46:04 np0005466030 nova_compute[230518]: 2025-10-02 12:46:04.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.379 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3934261-b0, col_values=(('external_ids', {'iface-id': '3890f7a6-6cc9-4237-a2a2-3c43818c1748'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:04 np0005466030 nova_compute[230518]: 2025-10-02 12:46:04.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:04 np0005466030 ovn_controller[129257]: 2025-10-02T12:46:04Z|00512|binding|INFO|Releasing lport 3890f7a6-6cc9-4237-a2a2-3c43818c1748 from this chassis (sb_readonly=0)
Oct  2 08:46:04 np0005466030 nova_compute[230518]: 2025-10-02 12:46:04.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.393 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.394 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[12a2ae3a-a795-44c3-ae71-b2bb5a78ea2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.395 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-f3934261-ba19-494f-8d9f-23360c5b30b9
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID f3934261-ba19-494f-8d9f-23360c5b30b9
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:46:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.396 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'env', 'PROCESS_TAG=haproxy-f3934261-ba19-494f-8d9f-23360c5b30b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3934261-ba19-494f-8d9f-23360c5b30b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:46:04 np0005466030 nova_compute[230518]: 2025-10-02 12:46:04.532 2 DEBUG nova.compute.manager [req-2ca67147-0219-43ef-8e83-23ec0f3f8eaa req-00bb7f1e-accb-4736-b6c4-804eeb57a8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:04 np0005466030 nova_compute[230518]: 2025-10-02 12:46:04.533 2 DEBUG oslo_concurrency.lockutils [req-2ca67147-0219-43ef-8e83-23ec0f3f8eaa req-00bb7f1e-accb-4736-b6c4-804eeb57a8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:04 np0005466030 nova_compute[230518]: 2025-10-02 12:46:04.533 2 DEBUG oslo_concurrency.lockutils [req-2ca67147-0219-43ef-8e83-23ec0f3f8eaa req-00bb7f1e-accb-4736-b6c4-804eeb57a8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:04 np0005466030 nova_compute[230518]: 2025-10-02 12:46:04.534 2 DEBUG oslo_concurrency.lockutils [req-2ca67147-0219-43ef-8e83-23ec0f3f8eaa req-00bb7f1e-accb-4736-b6c4-804eeb57a8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:04 np0005466030 nova_compute[230518]: 2025-10-02 12:46:04.534 2 DEBUG nova.compute.manager [req-2ca67147-0219-43ef-8e83-23ec0f3f8eaa req-00bb7f1e-accb-4736-b6c4-804eeb57a8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:46:04 np0005466030 nova_compute[230518]: 2025-10-02 12:46:04.534 2 WARNING nova.compute.manager [req-2ca67147-0219-43ef-8e83-23ec0f3f8eaa req-00bb7f1e-accb-4736-b6c4-804eeb57a8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:46:04 np0005466030 nova_compute[230518]: 2025-10-02 12:46:04.842 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:04 np0005466030 nova_compute[230518]: 2025-10-02 12:46:04.843 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:04 np0005466030 podman[277991]: 2025-10-02 12:46:04.819771779 +0000 UTC m=+0.034020120 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:46:05 np0005466030 podman[277991]: 2025-10-02 12:46:05.506497653 +0000 UTC m=+0.720746004 container create eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:46:05 np0005466030 nova_compute[230518]: 2025-10-02 12:46:05.700 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:05 np0005466030 nova_compute[230518]: 2025-10-02 12:46:05.755 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Triggering sync for uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 08:46:05 np0005466030 nova_compute[230518]: 2025-10-02 12:46:05.756 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:05 np0005466030 nova_compute[230518]: 2025-10-02 12:46:05.756 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:05 np0005466030 nova_compute[230518]: 2025-10-02 12:46:05.756 2 INFO nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Oct  2 08:46:05 np0005466030 nova_compute[230518]: 2025-10-02 12:46:05.756 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:05.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:05 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:05.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:05 np0005466030 systemd[1]: Started libpod-conmon-eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9.scope.
Oct  2 08:46:05 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:46:05 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0fe032c883f01cbe2c0fc7a6c92461065f29d5017d7d4b7bb4ad0ea07ca86a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:46:05 np0005466030 podman[277991]: 2025-10-02 12:46:05.977635438 +0000 UTC m=+1.191883809 container init eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:46:05 np0005466030 podman[277991]: 2025-10-02 12:46:05.983129221 +0000 UTC m=+1.197377562 container start eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 08:46:06 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278059]: [NOTICE]   (278063) : New worker (278065) forked
Oct  2 08:46:06 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278059]: [NOTICE]   (278063) : Loading success.
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.181 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.181 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409166.1806834, c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.182 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.187 2 DEBUG nova.compute.manager [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.210 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.213 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.306 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.306 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409166.1821876, c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.307 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] VM Started (Lifecycle Event)#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.359 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.362 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.752 2 DEBUG nova.compute.manager [req-90fd006f-45c0-4eeb-839d-13b3bed4d735 req-6a545d27-bf49-48fc-81aa-e3e77beb2fe7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.753 2 DEBUG oslo_concurrency.lockutils [req-90fd006f-45c0-4eeb-839d-13b3bed4d735 req-6a545d27-bf49-48fc-81aa-e3e77beb2fe7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.753 2 DEBUG oslo_concurrency.lockutils [req-90fd006f-45c0-4eeb-839d-13b3bed4d735 req-6a545d27-bf49-48fc-81aa-e3e77beb2fe7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.754 2 DEBUG oslo_concurrency.lockutils [req-90fd006f-45c0-4eeb-839d-13b3bed4d735 req-6a545d27-bf49-48fc-81aa-e3e77beb2fe7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.754 2 DEBUG nova.compute.manager [req-90fd006f-45c0-4eeb-839d-13b3bed4d735 req-6a545d27-bf49-48fc-81aa-e3e77beb2fe7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:46:06 np0005466030 nova_compute[230518]: 2025-10-02 12:46:06.755 2 WARNING nova.compute.manager [req-90fd006f-45c0-4eeb-839d-13b3bed4d735 req-6a545d27-bf49-48fc-81aa-e3e77beb2fe7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state rescued and task_state None.#033[00m
Oct  2 08:46:07 np0005466030 nova_compute[230518]: 2025-10-02 12:46:07.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:46:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:07.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:46:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:07 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:07.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:08 np0005466030 nova_compute[230518]: 2025-10-02 12:46:08.080 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:08 np0005466030 nova_compute[230518]: 2025-10-02 12:46:08.081 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:08 np0005466030 nova_compute[230518]: 2025-10-02 12:46:08.081 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:08 np0005466030 nova_compute[230518]: 2025-10-02 12:46:08.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:46:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:46:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:46:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:46:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:46:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:46:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:46:08 np0005466030 nova_compute[230518]: 2025-10-02 12:46:08.925 2 INFO nova.compute.manager [None req-7da7c6bc-27e3-4549-b5dd-d97607b562bd b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Unrescuing#033[00m
Oct  2 08:46:08 np0005466030 nova_compute[230518]: 2025-10-02 12:46:08.926 2 DEBUG oslo_concurrency.lockutils [None req-7da7c6bc-27e3-4549-b5dd-d97607b562bd b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:46:08 np0005466030 nova_compute[230518]: 2025-10-02 12:46:08.927 2 DEBUG oslo_concurrency.lockutils [None req-7da7c6bc-27e3-4549-b5dd-d97607b562bd b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquired lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:46:08 np0005466030 nova_compute[230518]: 2025-10-02 12:46:08.927 2 DEBUG nova.network.neutron [None req-7da7c6bc-27e3-4549-b5dd-d97607b562bd b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:46:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:09.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:09.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.071 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.097 2 DEBUG nova.network.neutron [None req-7da7c6bc-27e3-4549-b5dd-d97607b562bd b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updating instance_info_cache with network_info: [{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.114 2 DEBUG oslo_concurrency.lockutils [None req-7da7c6bc-27e3-4549-b5dd-d97607b562bd b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Releasing lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.115 2 DEBUG nova.objects.instance [None req-7da7c6bc-27e3-4549-b5dd-d97607b562bd b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'flavor' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:10 np0005466030 kernel: tap241d570e-8e (unregistering): left promiscuous mode
Oct  2 08:46:10 np0005466030 NetworkManager[44960]: <info>  [1759409170.1871] device (tap241d570e-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:46:10 np0005466030 ovn_controller[129257]: 2025-10-02T12:46:10Z|00513|binding|INFO|Releasing lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 from this chassis (sb_readonly=0)
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:10 np0005466030 ovn_controller[129257]: 2025-10-02T12:46:10Z|00514|binding|INFO|Setting lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 down in Southbound
Oct  2 08:46:10 np0005466030 ovn_controller[129257]: 2025-10-02T12:46:10Z|00515|binding|INFO|Removing iface tap241d570e-8e ovn-installed in OVS
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.205 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:9b:0c 10.100.0.11'], port_security=['fa:16:3e:2d:9b:0c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e07b5251-78cf-4560-8d41-5dc3daef96ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=241d570e-8eb4-4d2a-986b-b37fbcb780a9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.207 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 unbound from our chassis#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.209 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3934261-ba19-494f-8d9f-23360c5b30b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.211 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5de888fa-505f-470e-a06f-b64a794e4504]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.211 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace which is not needed anymore#033[00m
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:10 np0005466030 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Oct  2 08:46:10 np0005466030 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d0000007b.scope: Consumed 5.382s CPU time.
Oct  2 08:46:10 np0005466030 systemd-machined[188247]: Machine qemu-60-instance-0000007b terminated.
Oct  2 08:46:10 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278059]: [NOTICE]   (278063) : haproxy version is 2.8.14-c23fe91
Oct  2 08:46:10 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278059]: [NOTICE]   (278063) : path to executable is /usr/sbin/haproxy
Oct  2 08:46:10 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278059]: [WARNING]  (278063) : Exiting Master process...
Oct  2 08:46:10 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278059]: [ALERT]    (278063) : Current worker (278065) exited with code 143 (Terminated)
Oct  2 08:46:10 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278059]: [WARNING]  (278063) : All workers exited. Exiting... (0)
Oct  2 08:46:10 np0005466030 systemd[1]: libpod-eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9.scope: Deactivated successfully.
Oct  2 08:46:10 np0005466030 podman[278348]: 2025-10-02 12:46:10.363331033 +0000 UTC m=+0.050725355 container died eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.381 2 INFO nova.virt.libvirt.driver [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance destroyed successfully.#033[00m
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.381 2 DEBUG nova.objects.instance [None req-7da7c6bc-27e3-4549-b5dd-d97607b562bd b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'numa_topology' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:10 np0005466030 systemd[1]: var-lib-containers-storage-overlay-4b0fe032c883f01cbe2c0fc7a6c92461065f29d5017d7d4b7bb4ad0ea07ca86a-merged.mount: Deactivated successfully.
Oct  2 08:46:10 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9-userdata-shm.mount: Deactivated successfully.
Oct  2 08:46:10 np0005466030 podman[278348]: 2025-10-02 12:46:10.419729987 +0000 UTC m=+0.107124309 container cleanup eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 08:46:10 np0005466030 systemd[1]: libpod-conmon-eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9.scope: Deactivated successfully.
Oct  2 08:46:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:46:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 9959 writes, 51K keys, 9959 commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s#012Cumulative WAL: 9959 writes, 9959 syncs, 1.00 writes per sync, written: 0.10 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1653 writes, 8307 keys, 1653 commit groups, 1.0 writes per commit group, ingest: 16.61 MB, 0.03 MB/s#012Interval WAL: 1653 writes, 1653 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     72.5      0.84              0.17        30    0.028       0      0       0.0       0.0#012  L6      1/0    8.83 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.4    131.5    110.2      2.47              0.78        29    0.085    172K    16K       0.0       0.0#012 Sum      1/0    8.83 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.4     98.0    100.6      3.31              0.96        59    0.056    172K    16K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.7     97.1     97.1      0.75              0.25        12    0.063     45K   3138       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    131.5    110.2      2.47              0.78        29    0.085    172K    16K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     72.7      0.84              0.17        29    0.029       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.060, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.33 GB write, 0.09 MB/s write, 0.32 GB read, 0.09 MB/s read, 3.3 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 35.76 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000259 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2075,34.45 MB,11.3313%) FilterBlock(59,493.48 KB,0.158526%) IndexBlock(59,849.36 KB,0.272846%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 08:46:10 np0005466030 podman[278389]: 2025-10-02 12:46:10.487970723 +0000 UTC m=+0.039990228 container remove eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:46:10 np0005466030 kernel: tap241d570e-8e: entered promiscuous mode
Oct  2 08:46:10 np0005466030 systemd-udevd[278328]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:46:10 np0005466030 NetworkManager[44960]: <info>  [1759409170.4952] manager: (tap241d570e-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/241)
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:10 np0005466030 ovn_controller[129257]: 2025-10-02T12:46:10Z|00516|binding|INFO|Claiming lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 for this chassis.
Oct  2 08:46:10 np0005466030 ovn_controller[129257]: 2025-10-02T12:46:10Z|00517|binding|INFO|241d570e-8eb4-4d2a-986b-b37fbcb780a9: Claiming fa:16:3e:2d:9b:0c 10.100.0.11
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.497 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d8440667-92e9-46d6-8efd-cd8a1cbb61e3]: (4, ('Thu Oct  2 12:46:10 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9)\neb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9\nThu Oct  2 12:46:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9)\neb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.500 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[439ab5c7-ee61-4959-b5da-c5331ef98e1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.501 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.504 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:9b:0c 10.100.0.11'], port_security=['fa:16:3e:2d:9b:0c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e07b5251-78cf-4560-8d41-5dc3daef96ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=241d570e-8eb4-4d2a-986b-b37fbcb780a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:10 np0005466030 NetworkManager[44960]: <info>  [1759409170.5129] device (tap241d570e-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:46:10 np0005466030 NetworkManager[44960]: <info>  [1759409170.5143] device (tap241d570e-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:46:10 np0005466030 ovn_controller[129257]: 2025-10-02T12:46:10Z|00518|binding|INFO|Setting lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 ovn-installed in OVS
Oct  2 08:46:10 np0005466030 ovn_controller[129257]: 2025-10-02T12:46:10Z|00519|binding|INFO|Setting lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 up in Southbound
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:10 np0005466030 kernel: tapf3934261-b0: left promiscuous mode
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.529 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d8a76a03-759d-47fd-8085-ce9793ea2801]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 systemd-machined[188247]: New machine qemu-61-instance-0000007b.
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.550 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d388f3b3-6a0b-44a5-a0bf-7bb4e3df4870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.555 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7877c6-8162-4e5b-bd7e-5609cd870054]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 systemd[1]: Started Virtual Machine qemu-61-instance-0000007b.
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.572 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e6bcb38c-88d9-462a-9084-d2f305517cb2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702768, 'reachable_time': 24456, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278415, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 systemd[1]: run-netns-ovnmeta\x2df3934261\x2dba19\x2d494f\x2d8d9f\x2d23360c5b30b9.mount: Deactivated successfully.
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.575 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.575 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[d74ac5de-13b1-48f7-9462-b0dcd854bcf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.576 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 unbound from our chassis#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.577 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3934261-ba19-494f-8d9f-23360c5b30b9#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.591 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6b63b904-6bd6-450e-b409-e10c9349b154]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.592 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3934261-b1 in ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.594 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3934261-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.594 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cf500269-b555-4736-bb8c-5b64300cc482]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.595 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[46b5b39e-1c13-41d3-9771-63263909aacf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.612 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[28ac9eb6-a916-4c9e-b05f-48320d205bd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.640 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[87323c76-db44-4c27-a7f0-0e5895ee3fba]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.651 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.684 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8110b395-b8e5-4a34-a4e9-8e18a5751182]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.691 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe03ed5-0ab5-4769-b12d-cf0b54f3c5a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 NetworkManager[44960]: <info>  [1759409170.6940] manager: (tapf3934261-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/242)
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.726 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad1932d-10b7-401c-a1a9-b52b1c0dd734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.730 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[3c109a81-6dec-4f93-b243-bf952f038eab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 NetworkManager[44960]: <info>  [1759409170.7624] device (tapf3934261-b0): carrier: link connected
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.771 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[660fec1f-cf41-4d76-9023-d5f30f0071a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.789 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[865b264d-d8db-4198-a1ee-2dc7c578e9e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703432, 'reachable_time': 27439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278446, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.805 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c2405b55-4a55-46b1-8290-048eff0e8a9e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:9fbb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703432, 'tstamp': 703432}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278447, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.825 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[acc89527-d6a3-4feb-b359-cc22773f2f39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703432, 'reachable_time': 27439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278448, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.855 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd5c536-7e7c-43f0-a2da-5c9b05756f31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.903 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[55bb4263-4c94-46a7-aaa5-13c002185ba0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.904 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.904 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.905 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3934261-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:10 np0005466030 kernel: tapf3934261-b0: entered promiscuous mode
Oct  2 08:46:10 np0005466030 NetworkManager[44960]: <info>  [1759409170.9071] manager: (tapf3934261-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.911 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3934261-b0, col_values=(('external_ids', {'iface-id': '3890f7a6-6cc9-4237-a2a2-3c43818c1748'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:10 np0005466030 ovn_controller[129257]: 2025-10-02T12:46:10Z|00520|binding|INFO|Releasing lport 3890f7a6-6cc9-4237-a2a2-3c43818c1748 from this chassis (sb_readonly=0)
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.913 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.914 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a52853c8-70f1-4e43-9917-0877c73891db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.915 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-f3934261-ba19-494f-8d9f-23360c5b30b9
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID f3934261-ba19-494f-8d9f-23360c5b30b9
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:46:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.915 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'env', 'PROCESS_TAG=haproxy-f3934261-ba19-494f-8d9f-23360c5b30b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3934261-ba19-494f-8d9f-23360c5b30b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:46:10 np0005466030 nova_compute[230518]: 2025-10-02 12:46:10.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:11 np0005466030 nova_compute[230518]: 2025-10-02 12:46:11.034 2 DEBUG nova.compute.manager [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-unplugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:11 np0005466030 nova_compute[230518]: 2025-10-02 12:46:11.035 2 DEBUG oslo_concurrency.lockutils [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:11 np0005466030 nova_compute[230518]: 2025-10-02 12:46:11.035 2 DEBUG oslo_concurrency.lockutils [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:11 np0005466030 nova_compute[230518]: 2025-10-02 12:46:11.036 2 DEBUG oslo_concurrency.lockutils [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:11 np0005466030 nova_compute[230518]: 2025-10-02 12:46:11.036 2 DEBUG nova.compute.manager [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-unplugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:46:11 np0005466030 nova_compute[230518]: 2025-10-02 12:46:11.036 2 WARNING nova.compute.manager [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-unplugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:46:11 np0005466030 nova_compute[230518]: 2025-10-02 12:46:11.037 2 DEBUG nova.compute.manager [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:11 np0005466030 nova_compute[230518]: 2025-10-02 12:46:11.037 2 DEBUG oslo_concurrency.lockutils [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:11 np0005466030 nova_compute[230518]: 2025-10-02 12:46:11.037 2 DEBUG oslo_concurrency.lockutils [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:11 np0005466030 nova_compute[230518]: 2025-10-02 12:46:11.037 2 DEBUG oslo_concurrency.lockutils [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:11 np0005466030 nova_compute[230518]: 2025-10-02 12:46:11.038 2 DEBUG nova.compute.manager [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:46:11 np0005466030 nova_compute[230518]: 2025-10-02 12:46:11.038 2 WARNING nova.compute.manager [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:46:11 np0005466030 nova_compute[230518]: 2025-10-02 12:46:11.038 2 DEBUG nova.compute.manager [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:11 np0005466030 nova_compute[230518]: 2025-10-02 12:46:11.038 2 DEBUG oslo_concurrency.lockutils [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:11 np0005466030 nova_compute[230518]: 2025-10-02 12:46:11.039 2 DEBUG oslo_concurrency.lockutils [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:11 np0005466030 nova_compute[230518]: 2025-10-02 12:46:11.039 2 DEBUG oslo_concurrency.lockutils [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:11 np0005466030 nova_compute[230518]: 2025-10-02 12:46:11.039 2 DEBUG nova.compute.manager [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:46:11 np0005466030 nova_compute[230518]: 2025-10-02 12:46:11.039 2 WARNING nova.compute.manager [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:46:11 np0005466030 podman[278481]: 2025-10-02 12:46:11.263387926 +0000 UTC m=+0.045202753 container create 9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 08:46:11 np0005466030 systemd[1]: Started libpod-conmon-9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d.scope.
Oct  2 08:46:11 np0005466030 podman[278481]: 2025-10-02 12:46:11.241030293 +0000 UTC m=+0.022845140 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:46:11 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:46:11 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382483586f98dc9c7e63f8af29bc5c60af48f2c266cb3884274ad336cebfb902/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:46:11 np0005466030 podman[278481]: 2025-10-02 12:46:11.367996135 +0000 UTC m=+0.149810982 container init 9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:46:11 np0005466030 podman[278481]: 2025-10-02 12:46:11.373104656 +0000 UTC m=+0.154919483 container start 9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 08:46:11 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278496]: [NOTICE]   (278500) : New worker (278502) forked
Oct  2 08:46:11 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278496]: [NOTICE]   (278500) : Loading success.
Oct  2 08:46:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:11.553 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:46:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:11.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:46:11 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:11.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:46:12 np0005466030 nova_compute[230518]: 2025-10-02 12:46:12.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:12 np0005466030 nova_compute[230518]: 2025-10-02 12:46:12.270 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:46:12 np0005466030 nova_compute[230518]: 2025-10-02 12:46:12.271 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409172.269763, c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:46:12 np0005466030 nova_compute[230518]: 2025-10-02 12:46:12.271 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:46:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:12 np0005466030 nova_compute[230518]: 2025-10-02 12:46:12.342 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:12 np0005466030 nova_compute[230518]: 2025-10-02 12:46:12.345 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:46:12 np0005466030 nova_compute[230518]: 2025-10-02 12:46:12.374 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Oct  2 08:46:12 np0005466030 nova_compute[230518]: 2025-10-02 12:46:12.374 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409172.2734544, c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:46:12 np0005466030 nova_compute[230518]: 2025-10-02 12:46:12.375 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] VM Started (Lifecycle Event)#033[00m
Oct  2 08:46:12 np0005466030 nova_compute[230518]: 2025-10-02 12:46:12.405 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:12 np0005466030 nova_compute[230518]: 2025-10-02 12:46:12.409 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:46:12 np0005466030 nova_compute[230518]: 2025-10-02 12:46:12.431 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Oct  2 08:46:12 np0005466030 podman[278590]: 2025-10-02 12:46:12.828729187 +0000 UTC m=+0.077050044 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 08:46:12 np0005466030 podman[278589]: 2025-10-02 12:46:12.830479103 +0000 UTC m=+0.083348653 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:46:13 np0005466030 nova_compute[230518]: 2025-10-02 12:46:13.115 2 DEBUG nova.compute.manager [req-5e0b1ff4-607d-4291-b160-be1f7a4baf26 req-fe1247b2-7002-4d2d-b5c2-0f2bdcb1edd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:13 np0005466030 nova_compute[230518]: 2025-10-02 12:46:13.116 2 DEBUG oslo_concurrency.lockutils [req-5e0b1ff4-607d-4291-b160-be1f7a4baf26 req-fe1247b2-7002-4d2d-b5c2-0f2bdcb1edd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:13 np0005466030 nova_compute[230518]: 2025-10-02 12:46:13.116 2 DEBUG oslo_concurrency.lockutils [req-5e0b1ff4-607d-4291-b160-be1f7a4baf26 req-fe1247b2-7002-4d2d-b5c2-0f2bdcb1edd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:13 np0005466030 nova_compute[230518]: 2025-10-02 12:46:13.116 2 DEBUG oslo_concurrency.lockutils [req-5e0b1ff4-607d-4291-b160-be1f7a4baf26 req-fe1247b2-7002-4d2d-b5c2-0f2bdcb1edd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:13 np0005466030 nova_compute[230518]: 2025-10-02 12:46:13.117 2 DEBUG nova.compute.manager [req-5e0b1ff4-607d-4291-b160-be1f7a4baf26 req-fe1247b2-7002-4d2d-b5c2-0f2bdcb1edd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:46:13 np0005466030 nova_compute[230518]: 2025-10-02 12:46:13.117 2 WARNING nova.compute.manager [req-5e0b1ff4-607d-4291-b160-be1f7a4baf26 req-fe1247b2-7002-4d2d-b5c2-0f2bdcb1edd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state rescued and task_state unrescuing.#033[00m
Oct  2 08:46:13 np0005466030 nova_compute[230518]: 2025-10-02 12:46:13.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:13.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:13.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:14 np0005466030 nova_compute[230518]: 2025-10-02 12:46:14.254 2 DEBUG nova.compute.manager [None req-7da7c6bc-27e3-4549-b5dd-d97607b562bd b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:46:15 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1099713528' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:46:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:46:15 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1099713528' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:46:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:15.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:15.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:46:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:46:17 np0005466030 nova_compute[230518]: 2025-10-02 12:46:17.043 2 DEBUG nova.compute.manager [req-c854d1e7-061d-457b-ac16-24de344e29f2 req-8bfe2436-6b43-4b8b-bf4b-7f2feb9eeced 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-changed-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:17 np0005466030 nova_compute[230518]: 2025-10-02 12:46:17.043 2 DEBUG nova.compute.manager [req-c854d1e7-061d-457b-ac16-24de344e29f2 req-8bfe2436-6b43-4b8b-bf4b-7f2feb9eeced 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Refreshing instance network info cache due to event network-changed-241d570e-8eb4-4d2a-986b-b37fbcb780a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:46:17 np0005466030 nova_compute[230518]: 2025-10-02 12:46:17.043 2 DEBUG oslo_concurrency.lockutils [req-c854d1e7-061d-457b-ac16-24de344e29f2 req-8bfe2436-6b43-4b8b-bf4b-7f2feb9eeced 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:46:17 np0005466030 nova_compute[230518]: 2025-10-02 12:46:17.043 2 DEBUG oslo_concurrency.lockutils [req-c854d1e7-061d-457b-ac16-24de344e29f2 req-8bfe2436-6b43-4b8b-bf4b-7f2feb9eeced 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:46:17 np0005466030 nova_compute[230518]: 2025-10-02 12:46:17.044 2 DEBUG nova.network.neutron [req-c854d1e7-061d-457b-ac16-24de344e29f2 req-8bfe2436-6b43-4b8b-bf4b-7f2feb9eeced 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Refreshing network info cache for port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:46:17 np0005466030 nova_compute[230518]: 2025-10-02 12:46:17.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:46:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1934683106' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:46:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:46:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1934683106' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:46:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:17.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:17.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:18 np0005466030 nova_compute[230518]: 2025-10-02 12:46:18.065 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:18 np0005466030 nova_compute[230518]: 2025-10-02 12:46:18.546 2 DEBUG nova.network.neutron [req-c854d1e7-061d-457b-ac16-24de344e29f2 req-8bfe2436-6b43-4b8b-bf4b-7f2feb9eeced 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updated VIF entry in instance network info cache for port 241d570e-8eb4-4d2a-986b-b37fbcb780a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:46:18 np0005466030 nova_compute[230518]: 2025-10-02 12:46:18.546 2 DEBUG nova.network.neutron [req-c854d1e7-061d-457b-ac16-24de344e29f2 req-8bfe2436-6b43-4b8b-bf4b-7f2feb9eeced 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updating instance_info_cache with network_info: [{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:46:18 np0005466030 nova_compute[230518]: 2025-10-02 12:46:18.563 2 DEBUG oslo_concurrency.lockutils [req-c854d1e7-061d-457b-ac16-24de344e29f2 req-8bfe2436-6b43-4b8b-bf4b-7f2feb9eeced 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:46:18 np0005466030 nova_compute[230518]: 2025-10-02 12:46:18.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:19 np0005466030 nova_compute[230518]: 2025-10-02 12:46:19.151 2 DEBUG nova.compute.manager [req-93ee4050-6374-4b37-b695-9725e3745650 req-7da77f16-81e3-4ad7-b7db-8fee73e25f74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-changed-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:19 np0005466030 nova_compute[230518]: 2025-10-02 12:46:19.151 2 DEBUG nova.compute.manager [req-93ee4050-6374-4b37-b695-9725e3745650 req-7da77f16-81e3-4ad7-b7db-8fee73e25f74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Refreshing instance network info cache due to event network-changed-241d570e-8eb4-4d2a-986b-b37fbcb780a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:46:19 np0005466030 nova_compute[230518]: 2025-10-02 12:46:19.152 2 DEBUG oslo_concurrency.lockutils [req-93ee4050-6374-4b37-b695-9725e3745650 req-7da77f16-81e3-4ad7-b7db-8fee73e25f74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:46:19 np0005466030 nova_compute[230518]: 2025-10-02 12:46:19.152 2 DEBUG oslo_concurrency.lockutils [req-93ee4050-6374-4b37-b695-9725e3745650 req-7da77f16-81e3-4ad7-b7db-8fee73e25f74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:46:19 np0005466030 nova_compute[230518]: 2025-10-02 12:46:19.152 2 DEBUG nova.network.neutron [req-93ee4050-6374-4b37-b695-9725e3745650 req-7da77f16-81e3-4ad7-b7db-8fee73e25f74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Refreshing network info cache for port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:46:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:19.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:19 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:19.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:20.555 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:21 np0005466030 nova_compute[230518]: 2025-10-02 12:46:21.153 2 DEBUG nova.network.neutron [req-93ee4050-6374-4b37-b695-9725e3745650 req-7da77f16-81e3-4ad7-b7db-8fee73e25f74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updated VIF entry in instance network info cache for port 241d570e-8eb4-4d2a-986b-b37fbcb780a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:46:21 np0005466030 nova_compute[230518]: 2025-10-02 12:46:21.153 2 DEBUG nova.network.neutron [req-93ee4050-6374-4b37-b695-9725e3745650 req-7da77f16-81e3-4ad7-b7db-8fee73e25f74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updating instance_info_cache with network_info: [{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:46:21 np0005466030 nova_compute[230518]: 2025-10-02 12:46:21.168 2 DEBUG oslo_concurrency.lockutils [req-93ee4050-6374-4b37-b695-9725e3745650 req-7da77f16-81e3-4ad7-b7db-8fee73e25f74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:46:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:46:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:21.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:46:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:21 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:21.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:22 np0005466030 nova_compute[230518]: 2025-10-02 12:46:22.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:23 np0005466030 nova_compute[230518]: 2025-10-02 12:46:23.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:23.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:23.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:24 np0005466030 podman[278686]: 2025-10-02 12:46:24.808801665 +0000 UTC m=+0.058561282 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:46:24 np0005466030 podman[278687]: 2025-10-02 12:46:24.838319864 +0000 UTC m=+0.086662626 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3)
Oct  2 08:46:25 np0005466030 ovn_controller[129257]: 2025-10-02T12:46:25Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:9b:0c 10.100.0.11
Oct  2 08:46:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:25.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:25.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:25.942 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:25.943 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:25.945 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:27 np0005466030 nova_compute[230518]: 2025-10-02 12:46:27.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:27 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:27.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:27.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:28 np0005466030 nova_compute[230518]: 2025-10-02 12:46:28.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e293 e293: 3 total, 3 up, 3 in
Oct  2 08:46:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:46:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:29.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:46:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:29 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:29.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e294 e294: 3 total, 3 up, 3 in
Oct  2 08:46:31 np0005466030 nova_compute[230518]: 2025-10-02 12:46:31.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e295 e295: 3 total, 3 up, 3 in
Oct  2 08:46:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:31.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:31.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:32 np0005466030 nova_compute[230518]: 2025-10-02 12:46:32.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:33 np0005466030 nova_compute[230518]: 2025-10-02 12:46:33.362 2 DEBUG oslo_concurrency.lockutils [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:33 np0005466030 nova_compute[230518]: 2025-10-02 12:46:33.362 2 DEBUG oslo_concurrency.lockutils [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:33 np0005466030 nova_compute[230518]: 2025-10-02 12:46:33.377 2 INFO nova.compute.manager [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Detaching volume 9405efbb-874d-467a-93c3-bbd76870d422#033[00m
Oct  2 08:46:33 np0005466030 nova_compute[230518]: 2025-10-02 12:46:33.582 2 INFO nova.virt.block_device [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Attempting to driver detach volume 9405efbb-874d-467a-93c3-bbd76870d422 from mountpoint /dev/vdb#033[00m
Oct  2 08:46:33 np0005466030 nova_compute[230518]: 2025-10-02 12:46:33.590 2 DEBUG nova.virt.libvirt.driver [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Attempting to detach device vdb from instance c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:46:33 np0005466030 nova_compute[230518]: 2025-10-02 12:46:33.591 2 DEBUG nova.virt.libvirt.guest [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:46:33 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:46:33 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-9405efbb-874d-467a-93c3-bbd76870d422">
Oct  2 08:46:33 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:46:33 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:46:33 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:46:33 np0005466030 nova_compute[230518]:  </source>
Oct  2 08:46:33 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:46:33 np0005466030 nova_compute[230518]:  <serial>9405efbb-874d-467a-93c3-bbd76870d422</serial>
Oct  2 08:46:33 np0005466030 nova_compute[230518]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 08:46:33 np0005466030 nova_compute[230518]: </disk>
Oct  2 08:46:33 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:46:33 np0005466030 nova_compute[230518]: 2025-10-02 12:46:33.599 2 INFO nova.virt.libvirt.driver [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Successfully detached device vdb from instance c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f from the persistent domain config.#033[00m
Oct  2 08:46:33 np0005466030 nova_compute[230518]: 2025-10-02 12:46:33.600 2 DEBUG nova.virt.libvirt.driver [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 08:46:33 np0005466030 nova_compute[230518]: 2025-10-02 12:46:33.600 2 DEBUG nova.virt.libvirt.guest [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:46:33 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:46:33 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-9405efbb-874d-467a-93c3-bbd76870d422">
Oct  2 08:46:33 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:46:33 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:46:33 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:46:33 np0005466030 nova_compute[230518]:  </source>
Oct  2 08:46:33 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:46:33 np0005466030 nova_compute[230518]:  <serial>9405efbb-874d-467a-93c3-bbd76870d422</serial>
Oct  2 08:46:33 np0005466030 nova_compute[230518]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 08:46:33 np0005466030 nova_compute[230518]: </disk>
Oct  2 08:46:33 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:46:33 np0005466030 nova_compute[230518]: 2025-10-02 12:46:33.707 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759409193.7073429, c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 08:46:33 np0005466030 nova_compute[230518]: 2025-10-02 12:46:33.709 2 DEBUG nova.virt.libvirt.driver [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 08:46:33 np0005466030 nova_compute[230518]: 2025-10-02 12:46:33.710 2 INFO nova.virt.libvirt.driver [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Successfully detached device vdb from instance c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f from the live domain config.#033[00m
Oct  2 08:46:33 np0005466030 nova_compute[230518]: 2025-10-02 12:46:33.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:33.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:46:33 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:33.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:46:33 np0005466030 nova_compute[230518]: 2025-10-02 12:46:33.964 2 DEBUG nova.objects.instance [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'flavor' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:34 np0005466030 nova_compute[230518]: 2025-10-02 12:46:34.023 2 DEBUG oslo_concurrency.lockutils [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:35.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:35 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:35.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.079 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.122 2 DEBUG oslo_concurrency.lockutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.122 2 DEBUG oslo_concurrency.lockutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.123 2 DEBUG oslo_concurrency.lockutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.123 2 DEBUG oslo_concurrency.lockutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.123 2 DEBUG oslo_concurrency.lockutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.124 2 INFO nova.compute.manager [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Terminating instance#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.126 2 DEBUG nova.compute.manager [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:46:36 np0005466030 kernel: tap241d570e-8e (unregistering): left promiscuous mode
Oct  2 08:46:36 np0005466030 NetworkManager[44960]: <info>  [1759409196.1906] device (tap241d570e-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:46:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:46:36Z|00521|binding|INFO|Releasing lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 from this chassis (sb_readonly=0)
Oct  2 08:46:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:46:36Z|00522|binding|INFO|Setting lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 down in Southbound
Oct  2 08:46:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:46:36Z|00523|binding|INFO|Removing iface tap241d570e-8e ovn-installed in OVS
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.250 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:9b:0c 10.100.0.11'], port_security=['fa:16:3e:2d:9b:0c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e07b5251-78cf-4560-8d41-5dc3daef96ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=241d570e-8eb4-4d2a-986b-b37fbcb780a9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:46:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.252 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 unbound from our chassis#033[00m
Oct  2 08:46:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.254 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3934261-ba19-494f-8d9f-23360c5b30b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:46:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.255 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[72d30dd8-47dd-4305-aab1-279e98429396]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.256 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace which is not needed anymore#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:36 np0005466030 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Oct  2 08:46:36 np0005466030 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d0000007b.scope: Consumed 14.588s CPU time.
Oct  2 08:46:36 np0005466030 systemd-machined[188247]: Machine qemu-61-instance-0000007b terminated.
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.360 2 INFO nova.virt.libvirt.driver [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance destroyed successfully.#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.362 2 DEBUG nova.objects.instance [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'resources' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.379 2 DEBUG nova.virt.libvirt.vif [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1940292897',display_name='tempest-ServerRescueNegativeTestJSON-server-1940292897',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1940292897',id=123,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE3MP/X9GW3AkvsXC34rGYeE9R6n5owPQ7q879Zh3KlhVfQDQGNR9SPxUIek8XO1dhRvBM+bbuljUGfLUZmn0JV9ekbocSGkiHwYeMyp832Egmcx2kY1B+audZxDye556Q==',key_name='tempest-keypair-866203463',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:46:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c87621e5c0ba4f13abfff528143c1c00',ramdisk_id='',reservation_id='r-n6jrurpd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-488939839',owner_user_name='tempest-ServerRescueNegativeTestJSON-488939839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:46:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b168e90f7c0c414ba26c576fb8706a80',uuid=c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.380 2 DEBUG nova.network.os_vif_util [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converting VIF {"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.380 2 DEBUG nova.network.os_vif_util [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:9b:0c,bridge_name='br-int',has_traffic_filtering=True,id=241d570e-8eb4-4d2a-986b-b37fbcb780a9,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241d570e-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.381 2 DEBUG os_vif [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:9b:0c,bridge_name='br-int',has_traffic_filtering=True,id=241d570e-8eb4-4d2a-986b-b37fbcb780a9,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241d570e-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.383 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap241d570e-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.390 2 INFO os_vif [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:9b:0c,bridge_name='br-int',has_traffic_filtering=True,id=241d570e-8eb4-4d2a-986b-b37fbcb780a9,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241d570e-8e')#033[00m
Oct  2 08:46:36 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278496]: [NOTICE]   (278500) : haproxy version is 2.8.14-c23fe91
Oct  2 08:46:36 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278496]: [NOTICE]   (278500) : path to executable is /usr/sbin/haproxy
Oct  2 08:46:36 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278496]: [WARNING]  (278500) : Exiting Master process...
Oct  2 08:46:36 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278496]: [WARNING]  (278500) : Exiting Master process...
Oct  2 08:46:36 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278496]: [ALERT]    (278500) : Current worker (278502) exited with code 143 (Terminated)
Oct  2 08:46:36 np0005466030 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278496]: [WARNING]  (278500) : All workers exited. Exiting... (0)
Oct  2 08:46:36 np0005466030 systemd[1]: libpod-9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d.scope: Deactivated successfully.
Oct  2 08:46:36 np0005466030 podman[278754]: 2025-10-02 12:46:36.416438513 +0000 UTC m=+0.060199703 container died 9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:46:36 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d-userdata-shm.mount: Deactivated successfully.
Oct  2 08:46:36 np0005466030 systemd[1]: var-lib-containers-storage-overlay-382483586f98dc9c7e63f8af29bc5c60af48f2c266cb3884274ad336cebfb902-merged.mount: Deactivated successfully.
Oct  2 08:46:36 np0005466030 podman[278754]: 2025-10-02 12:46:36.463653758 +0000 UTC m=+0.107414938 container cleanup 9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:46:36 np0005466030 systemd[1]: libpod-conmon-9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d.scope: Deactivated successfully.
Oct  2 08:46:36 np0005466030 podman[278811]: 2025-10-02 12:46:36.540144864 +0000 UTC m=+0.056676243 container remove 9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 08:46:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.550 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca3f6a8-558f-4f66-82e8-05544935f211]: (4, ('Thu Oct  2 12:46:36 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d)\n9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d\nThu Oct  2 12:46:36 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d)\n9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.552 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[68dde93a-ff10-4dbd-a7e3-8b21c96e809b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.554 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:36 np0005466030 kernel: tapf3934261-b0: left promiscuous mode
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.574 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[908f8615-6d05-4618-a7bc-0b61a917b792]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.605 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3c100fb7-17b4-4c87-a38f-31df9569d29a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.606 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[44363207-5c8d-41c4-a39a-7c0a92b2d539]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.622 2 DEBUG nova.compute.manager [req-35366a5a-848c-4363-bd4d-ad9b3205d886 req-28cb909c-fe7d-40e5-807a-382e688af1cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-unplugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.622 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd8dbe5-73e4-40e4-9719-2d2916d68c30]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703423, 'reachable_time': 24634, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278826, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.623 2 DEBUG oslo_concurrency.lockutils [req-35366a5a-848c-4363-bd4d-ad9b3205d886 req-28cb909c-fe7d-40e5-807a-382e688af1cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.623 2 DEBUG oslo_concurrency.lockutils [req-35366a5a-848c-4363-bd4d-ad9b3205d886 req-28cb909c-fe7d-40e5-807a-382e688af1cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.623 2 DEBUG oslo_concurrency.lockutils [req-35366a5a-848c-4363-bd4d-ad9b3205d886 req-28cb909c-fe7d-40e5-807a-382e688af1cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.624 2 DEBUG nova.compute.manager [req-35366a5a-848c-4363-bd4d-ad9b3205d886 req-28cb909c-fe7d-40e5-807a-382e688af1cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-unplugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:46:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.624 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:46:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.624 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[f7aeb43b-8b8f-4d1a-8524-76140f32df8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.624 2 DEBUG nova.compute.manager [req-35366a5a-848c-4363-bd4d-ad9b3205d886 req-28cb909c-fe7d-40e5-807a-382e688af1cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-unplugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:46:36 np0005466030 systemd[1]: run-netns-ovnmeta\x2df3934261\x2dba19\x2d494f\x2d8d9f\x2d23360c5b30b9.mount: Deactivated successfully.
Oct  2 08:46:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e296 e296: 3 total, 3 up, 3 in
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.948 2 INFO nova.virt.libvirt.driver [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Deleting instance files /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_del#033[00m
Oct  2 08:46:36 np0005466030 nova_compute[230518]: 2025-10-02 12:46:36.949 2 INFO nova.virt.libvirt.driver [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Deletion of /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_del complete#033[00m
Oct  2 08:46:37 np0005466030 nova_compute[230518]: 2025-10-02 12:46:37.217 2 INFO nova.compute.manager [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Took 1.09 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:46:37 np0005466030 nova_compute[230518]: 2025-10-02 12:46:37.218 2 DEBUG oslo.service.loopingcall [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:46:37 np0005466030 nova_compute[230518]: 2025-10-02 12:46:37.218 2 DEBUG nova.compute.manager [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:46:37 np0005466030 nova_compute[230518]: 2025-10-02 12:46:37.219 2 DEBUG nova.network.neutron [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:46:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e297 e297: 3 total, 3 up, 3 in
Oct  2 08:46:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:37 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:37.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:46:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:37.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:46:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e298 e298: 3 total, 3 up, 3 in
Oct  2 08:46:38 np0005466030 nova_compute[230518]: 2025-10-02 12:46:38.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:38 np0005466030 nova_compute[230518]: 2025-10-02 12:46:38.828 2 DEBUG nova.compute.manager [req-947d5bb9-c416-411a-94af-caeec34fadea req-6320be56-9c62-4a43-8f86-87711f117536 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:38 np0005466030 nova_compute[230518]: 2025-10-02 12:46:38.828 2 DEBUG oslo_concurrency.lockutils [req-947d5bb9-c416-411a-94af-caeec34fadea req-6320be56-9c62-4a43-8f86-87711f117536 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:38 np0005466030 nova_compute[230518]: 2025-10-02 12:46:38.828 2 DEBUG oslo_concurrency.lockutils [req-947d5bb9-c416-411a-94af-caeec34fadea req-6320be56-9c62-4a43-8f86-87711f117536 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:38 np0005466030 nova_compute[230518]: 2025-10-02 12:46:38.829 2 DEBUG oslo_concurrency.lockutils [req-947d5bb9-c416-411a-94af-caeec34fadea req-6320be56-9c62-4a43-8f86-87711f117536 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:38 np0005466030 nova_compute[230518]: 2025-10-02 12:46:38.829 2 DEBUG nova.compute.manager [req-947d5bb9-c416-411a-94af-caeec34fadea req-6320be56-9c62-4a43-8f86-87711f117536 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:46:38 np0005466030 nova_compute[230518]: 2025-10-02 12:46:38.829 2 WARNING nova.compute.manager [req-947d5bb9-c416-411a-94af-caeec34fadea req-6320be56-9c62-4a43-8f86-87711f117536 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:46:39 np0005466030 nova_compute[230518]: 2025-10-02 12:46:39.258 2 DEBUG nova.network.neutron [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:46:39 np0005466030 nova_compute[230518]: 2025-10-02 12:46:39.294 2 INFO nova.compute.manager [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Took 2.08 seconds to deallocate network for instance.#033[00m
Oct  2 08:46:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e299 e299: 3 total, 3 up, 3 in
Oct  2 08:46:39 np0005466030 nova_compute[230518]: 2025-10-02 12:46:39.376 2 DEBUG oslo_concurrency.lockutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:39 np0005466030 nova_compute[230518]: 2025-10-02 12:46:39.377 2 DEBUG oslo_concurrency.lockutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:39 np0005466030 nova_compute[230518]: 2025-10-02 12:46:39.463 2 DEBUG oslo_concurrency.processutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:39 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:39.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:39.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:46:39 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2370722550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:46:39 np0005466030 nova_compute[230518]: 2025-10-02 12:46:39.894 2 DEBUG oslo_concurrency.processutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:39 np0005466030 nova_compute[230518]: 2025-10-02 12:46:39.903 2 DEBUG nova.compute.provider_tree [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:46:40 np0005466030 nova_compute[230518]: 2025-10-02 12:46:40.653 2 DEBUG nova.scheduler.client.report [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:46:40 np0005466030 nova_compute[230518]: 2025-10-02 12:46:40.710 2 DEBUG oslo_concurrency.lockutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:40 np0005466030 nova_compute[230518]: 2025-10-02 12:46:40.748 2 INFO nova.scheduler.client.report [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Deleted allocations for instance c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f#033[00m
Oct  2 08:46:40 np0005466030 nova_compute[230518]: 2025-10-02 12:46:40.974 2 DEBUG oslo_concurrency.lockutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:41 np0005466030 nova_compute[230518]: 2025-10-02 12:46:41.077 2 DEBUG nova.compute.manager [req-6626f24a-be3e-4581-b096-23a44a6c1a3c req-75992d98-5c77-4b5f-8138-77cf1844033a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-deleted-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:46:41 np0005466030 nova_compute[230518]: 2025-10-02 12:46:41.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:41 np0005466030 nova_compute[230518]: 2025-10-02 12:46:41.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:41.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:41 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:41.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:43 np0005466030 nova_compute[230518]: 2025-10-02 12:46:43.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:43 np0005466030 podman[278852]: 2025-10-02 12:46:43.83917759 +0000 UTC m=+0.086447300 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:46:43 np0005466030 podman[278851]: 2025-10-02 12:46:43.839265322 +0000 UTC m=+0.093571463 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:46:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:43.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:43 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:43.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:45 np0005466030 ceph-mgr[81282]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct  2 08:46:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:45.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:46:45 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:45.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:46:46 np0005466030 nova_compute[230518]: 2025-10-02 12:46:46.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e300 e300: 3 total, 3 up, 3 in
Oct  2 08:46:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:47.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:46:47 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:47.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:46:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e301 e301: 3 total, 3 up, 3 in
Oct  2 08:46:48 np0005466030 nova_compute[230518]: 2025-10-02 12:46:48.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:46:49 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1577299259' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:46:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:46:49 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1577299259' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:46:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:49.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:49.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e302 e302: 3 total, 3 up, 3 in
Oct  2 08:46:51 np0005466030 nova_compute[230518]: 2025-10-02 12:46:51.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:51 np0005466030 nova_compute[230518]: 2025-10-02 12:46:51.360 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409196.358669, c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:46:51 np0005466030 nova_compute[230518]: 2025-10-02 12:46:51.360 2 INFO nova.compute.manager [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:46:51 np0005466030 nova_compute[230518]: 2025-10-02 12:46:51.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:51 np0005466030 nova_compute[230518]: 2025-10-02 12:46:51.409 2 DEBUG nova.compute.manager [None req-9e74bb75-2830-48e7-8edf-840a74090827 - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:46:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:51.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:51 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:51.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e303 e303: 3 total, 3 up, 3 in
Oct  2 08:46:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:53 np0005466030 nova_compute[230518]: 2025-10-02 12:46:53.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:46:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:53.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:46:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:53 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:53.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:54 np0005466030 nova_compute[230518]: 2025-10-02 12:46:54.901 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:54 np0005466030 nova_compute[230518]: 2025-10-02 12:46:54.902 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:54 np0005466030 nova_compute[230518]: 2025-10-02 12:46:54.928 2 DEBUG nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:46:55 np0005466030 nova_compute[230518]: 2025-10-02 12:46:55.080 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:55 np0005466030 nova_compute[230518]: 2025-10-02 12:46:55.081 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:55 np0005466030 nova_compute[230518]: 2025-10-02 12:46:55.088 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:46:55 np0005466030 nova_compute[230518]: 2025-10-02 12:46:55.088 2 INFO nova.compute.claims [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:46:55 np0005466030 nova_compute[230518]: 2025-10-02 12:46:55.337 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:46:55 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2657385976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:46:55 np0005466030 podman[278917]: 2025-10-02 12:46:55.806223168 +0000 UTC m=+0.056120006 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:46:55 np0005466030 nova_compute[230518]: 2025-10-02 12:46:55.810 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:55 np0005466030 podman[278918]: 2025-10-02 12:46:55.815647675 +0000 UTC m=+0.060296637 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 08:46:55 np0005466030 nova_compute[230518]: 2025-10-02 12:46:55.817 2 DEBUG nova.compute.provider_tree [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:46:55 np0005466030 nova_compute[230518]: 2025-10-02 12:46:55.839 2 DEBUG nova.scheduler.client.report [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:46:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:55.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:55.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:55 np0005466030 nova_compute[230518]: 2025-10-02 12:46:55.878 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:55 np0005466030 nova_compute[230518]: 2025-10-02 12:46:55.879 2 DEBUG nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:46:56 np0005466030 nova_compute[230518]: 2025-10-02 12:46:56.111 2 DEBUG nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:46:56 np0005466030 nova_compute[230518]: 2025-10-02 12:46:56.111 2 DEBUG nova.network.neutron [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:46:56 np0005466030 nova_compute[230518]: 2025-10-02 12:46:56.133 2 INFO nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:46:56 np0005466030 nova_compute[230518]: 2025-10-02 12:46:56.159 2 DEBUG nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:46:56 np0005466030 nova_compute[230518]: 2025-10-02 12:46:56.261 2 DEBUG nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:46:56 np0005466030 nova_compute[230518]: 2025-10-02 12:46:56.263 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:46:56 np0005466030 nova_compute[230518]: 2025-10-02 12:46:56.263 2 INFO nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Creating image(s)#033[00m
Oct  2 08:46:56 np0005466030 nova_compute[230518]: 2025-10-02 12:46:56.291 2 DEBUG nova.storage.rbd_utils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1c4025f8-834f-474c-87ee-59600e6ffb96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:56 np0005466030 nova_compute[230518]: 2025-10-02 12:46:56.322 2 DEBUG nova.storage.rbd_utils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1c4025f8-834f-474c-87ee-59600e6ffb96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:56 np0005466030 nova_compute[230518]: 2025-10-02 12:46:56.351 2 DEBUG nova.storage.rbd_utils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1c4025f8-834f-474c-87ee-59600e6ffb96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:56 np0005466030 nova_compute[230518]: 2025-10-02 12:46:56.354 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:56 np0005466030 nova_compute[230518]: 2025-10-02 12:46:56.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:56 np0005466030 nova_compute[230518]: 2025-10-02 12:46:56.419 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:56 np0005466030 nova_compute[230518]: 2025-10-02 12:46:56.420 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:56 np0005466030 nova_compute[230518]: 2025-10-02 12:46:56.420 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:56 np0005466030 nova_compute[230518]: 2025-10-02 12:46:56.421 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:56 np0005466030 nova_compute[230518]: 2025-10-02 12:46:56.446 2 DEBUG nova.storage.rbd_utils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1c4025f8-834f-474c-87ee-59600e6ffb96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:46:56 np0005466030 nova_compute[230518]: 2025-10-02 12:46:56.450 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1c4025f8-834f-474c-87ee-59600e6ffb96_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:46:56 np0005466030 nova_compute[230518]: 2025-10-02 12:46:56.536 2 DEBUG nova.policy [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ae7bcf1e6a3b4132a7068b0f863ca79c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:46:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e304 e304: 3 total, 3 up, 3 in
Oct  2 08:46:57 np0005466030 nova_compute[230518]: 2025-10-02 12:46:57.185 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1c4025f8-834f-474c-87ee-59600e6ffb96_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.735s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:46:57 np0005466030 nova_compute[230518]: 2025-10-02 12:46:57.255 2 DEBUG nova.storage.rbd_utils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] resizing rbd image 1c4025f8-834f-474c-87ee-59600e6ffb96_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:46:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:46:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:46:57 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:57.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:57.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:46:57 np0005466030 nova_compute[230518]: 2025-10-02 12:46:57.964 2 DEBUG nova.objects.instance [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'migration_context' on Instance uuid 1c4025f8-834f-474c-87ee-59600e6ffb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:46:57 np0005466030 nova_compute[230518]: 2025-10-02 12:46:57.996 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:46:57 np0005466030 nova_compute[230518]: 2025-10-02 12:46:57.996 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Ensure instance console log exists: /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:46:57 np0005466030 nova_compute[230518]: 2025-10-02 12:46:57.997 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:46:57 np0005466030 nova_compute[230518]: 2025-10-02 12:46:57.997 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:46:57 np0005466030 nova_compute[230518]: 2025-10-02 12:46:57.997 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:46:58 np0005466030 nova_compute[230518]: 2025-10-02 12:46:58.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:58 np0005466030 nova_compute[230518]: 2025-10-02 12:46:58.829 2 DEBUG nova.network.neutron [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Successfully created port: 8001e1a0-d1c2-49ac-8630-690ed8ac9801 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:46:59 np0005466030 nova_compute[230518]: 2025-10-02 12:46:59.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:46:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:46:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:46:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:46:59 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:59.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:46:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:46:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:59.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:47:00 np0005466030 nova_compute[230518]: 2025-10-02 12:47:00.505 2 DEBUG nova.network.neutron [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Successfully updated port: 8001e1a0-d1c2-49ac-8630-690ed8ac9801 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:47:00 np0005466030 nova_compute[230518]: 2025-10-02 12:47:00.533 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:47:00 np0005466030 nova_compute[230518]: 2025-10-02 12:47:00.533 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquired lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:47:00 np0005466030 nova_compute[230518]: 2025-10-02 12:47:00.533 2 DEBUG nova.network.neutron [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:47:00 np0005466030 nova_compute[230518]: 2025-10-02 12:47:00.651 2 DEBUG nova.compute.manager [req-be54f3d0-03b8-4f68-ba2a-cc98aae5a48b req-711943b1-0547-4a46-bdda-5c45b82137ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-changed-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:00 np0005466030 nova_compute[230518]: 2025-10-02 12:47:00.652 2 DEBUG nova.compute.manager [req-be54f3d0-03b8-4f68-ba2a-cc98aae5a48b req-711943b1-0547-4a46-bdda-5c45b82137ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Refreshing instance network info cache due to event network-changed-8001e1a0-d1c2-49ac-8630-690ed8ac9801. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:47:00 np0005466030 nova_compute[230518]: 2025-10-02 12:47:00.652 2 DEBUG oslo_concurrency.lockutils [req-be54f3d0-03b8-4f68-ba2a-cc98aae5a48b req-711943b1-0547-4a46-bdda-5c45b82137ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:47:00 np0005466030 nova_compute[230518]: 2025-10-02 12:47:00.768 2 DEBUG nova.network.neutron [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:47:01 np0005466030 nova_compute[230518]: 2025-10-02 12:47:01.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:01.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:01.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.079 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.344 2 DEBUG nova.network.neutron [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updating instance_info_cache with network_info: [{"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.377 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Releasing lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.377 2 DEBUG nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Instance network_info: |[{"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.378 2 DEBUG oslo_concurrency.lockutils [req-be54f3d0-03b8-4f68-ba2a-cc98aae5a48b req-711943b1-0547-4a46-bdda-5c45b82137ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.378 2 DEBUG nova.network.neutron [req-be54f3d0-03b8-4f68-ba2a-cc98aae5a48b req-711943b1-0547-4a46-bdda-5c45b82137ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Refreshing network info cache for port 8001e1a0-d1c2-49ac-8630-690ed8ac9801 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.381 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Start _get_guest_xml network_info=[{"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.386 2 WARNING nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.390 2 DEBUG nova.virt.libvirt.host [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.391 2 DEBUG nova.virt.libvirt.host [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.394 2 DEBUG nova.virt.libvirt.host [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.394 2 DEBUG nova.virt.libvirt.host [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.396 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.396 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.396 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.397 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.397 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.397 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.397 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.398 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.398 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.398 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.398 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.399 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:47:02 np0005466030 nova_compute[230518]: 2025-10-02 12:47:02.402 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e305 e305: 3 total, 3 up, 3 in
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:47:03 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/773088728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.131 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.729s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.163 2 DEBUG nova.storage.rbd_utils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1c4025f8-834f-474c-87ee-59600e6ffb96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.167 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.231 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.231 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.232 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.232 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.232 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:47:03 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/604551742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.599 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.600 2 DEBUG nova.virt.libvirt.vif [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:46:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1232591881',display_name='tempest-DeleteServersTestJSON-server-1232591881',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1232591881',id=126,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-fwmxnmnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1
740298646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:46:56Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=1c4025f8-834f-474c-87ee-59600e6ffb96,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.601 2 DEBUG nova.network.os_vif_util [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.602 2 DEBUG nova.network.os_vif_util [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.603 2 DEBUG nova.objects.instance [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c4025f8-834f-474c-87ee-59600e6ffb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:47:03 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/887990936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:47:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e306 e306: 3 total, 3 up, 3 in
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.669 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.810 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:47:03 np0005466030 nova_compute[230518]:  <uuid>1c4025f8-834f-474c-87ee-59600e6ffb96</uuid>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:  <name>instance-0000007e</name>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <nova:name>tempest-DeleteServersTestJSON-server-1232591881</nova:name>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:47:02</nova:creationTime>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:47:03 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:        <nova:user uuid="ae7bcf1e6a3b4132a7068b0f863ca79c">tempest-DeleteServersTestJSON-1740298646-project-member</nova:user>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:        <nova:project uuid="58b2fa4ee0cd4b97be1b303c203be14f">tempest-DeleteServersTestJSON-1740298646</nova:project>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:        <nova:port uuid="8001e1a0-d1c2-49ac-8630-690ed8ac9801">
Oct  2 08:47:03 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <entry name="serial">1c4025f8-834f-474c-87ee-59600e6ffb96</entry>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <entry name="uuid">1c4025f8-834f-474c-87ee-59600e6ffb96</entry>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/1c4025f8-834f-474c-87ee-59600e6ffb96_disk">
Oct  2 08:47:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:47:03 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/1c4025f8-834f-474c-87ee-59600e6ffb96_disk.config">
Oct  2 08:47:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:47:03 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:48:3d:b5"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <target dev="tap8001e1a0-d1"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/console.log" append="off"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:47:03 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:47:03 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:47:03 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:47:03 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.812 2 DEBUG nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Preparing to wait for external event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.812 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.813 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.813 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.814 2 DEBUG nova.virt.libvirt.vif [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:46:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1232591881',display_name='tempest-DeleteServersTestJSON-server-1232591881',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1232591881',id=126,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-fwmxnmnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServers
TestJSON-1740298646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:46:56Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=1c4025f8-834f-474c-87ee-59600e6ffb96,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.814 2 DEBUG nova.network.os_vif_util [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.815 2 DEBUG nova.network.os_vif_util [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.815 2 DEBUG os_vif [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.816 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.817 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.822 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8001e1a0-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.823 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8001e1a0-d1, col_values=(('external_ids', {'iface-id': '8001e1a0-d1c2-49ac-8630-690ed8ac9801', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:3d:b5', 'vm-uuid': '1c4025f8-834f-474c-87ee-59600e6ffb96'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:47:03 np0005466030 NetworkManager[44960]: <info>  [1759409223.8267] manager: (tap8001e1a0-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.833 2 INFO os_vif [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1')#033[00m
Oct  2 08:47:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:47:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:03.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:47:03 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:03.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.887 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.889 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4436MB free_disk=20.909137725830078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.889 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:03 np0005466030 nova_compute[230518]: 2025-10-02 12:47:03.889 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.005 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.005 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.006 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No VIF found with MAC fa:16:3e:48:3d:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.006 2 INFO nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Using config drive#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.034 2 DEBUG nova.storage.rbd_utils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1c4025f8-834f-474c-87ee-59600e6ffb96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.072 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 1c4025f8-834f-474c-87ee-59600e6ffb96 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.072 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.072 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.124 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:47:04 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1011230374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.574 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.581 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.604 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.641 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.641 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.711 2 INFO nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Creating config drive at /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/disk.config#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.716 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfdsqtdw7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.755 2 DEBUG nova.network.neutron [req-be54f3d0-03b8-4f68-ba2a-cc98aae5a48b req-711943b1-0547-4a46-bdda-5c45b82137ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updated VIF entry in instance network info cache for port 8001e1a0-d1c2-49ac-8630-690ed8ac9801. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.756 2 DEBUG nova.network.neutron [req-be54f3d0-03b8-4f68-ba2a-cc98aae5a48b req-711943b1-0547-4a46-bdda-5c45b82137ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updating instance_info_cache with network_info: [{"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.790 2 DEBUG oslo_concurrency.lockutils [req-be54f3d0-03b8-4f68-ba2a-cc98aae5a48b req-711943b1-0547-4a46-bdda-5c45b82137ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.869 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfdsqtdw7" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.898 2 DEBUG nova.storage.rbd_utils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1c4025f8-834f-474c-87ee-59600e6ffb96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:04 np0005466030 nova_compute[230518]: 2025-10-02 12:47:04.901 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/disk.config 1c4025f8-834f-474c-87ee-59600e6ffb96_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:47:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3728934185' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:47:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:47:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3728934185' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:47:05 np0005466030 nova_compute[230518]: 2025-10-02 12:47:05.247 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/disk.config 1c4025f8-834f-474c-87ee-59600e6ffb96_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:05 np0005466030 nova_compute[230518]: 2025-10-02 12:47:05.248 2 INFO nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Deleting local config drive /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/disk.config because it was imported into RBD.#033[00m
Oct  2 08:47:05 np0005466030 NetworkManager[44960]: <info>  [1759409225.3046] manager: (tap8001e1a0-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/245)
Oct  2 08:47:05 np0005466030 kernel: tap8001e1a0-d1: entered promiscuous mode
Oct  2 08:47:05 np0005466030 nova_compute[230518]: 2025-10-02 12:47:05.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:05 np0005466030 ovn_controller[129257]: 2025-10-02T12:47:05Z|00524|binding|INFO|Claiming lport 8001e1a0-d1c2-49ac-8630-690ed8ac9801 for this chassis.
Oct  2 08:47:05 np0005466030 ovn_controller[129257]: 2025-10-02T12:47:05Z|00525|binding|INFO|8001e1a0-d1c2-49ac-8630-690ed8ac9801: Claiming fa:16:3e:48:3d:b5 10.100.0.5
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.318 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:3d:b5 10.100.0.5'], port_security=['fa:16:3e:48:3d:b5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1c4025f8-834f-474c-87ee-59600e6ffb96', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8001e1a0-d1c2-49ac-8630-690ed8ac9801) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.319 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8001e1a0-d1c2-49ac-8630-690ed8ac9801 in datapath fd4432c5-b907-49af-a666-2128c4085e24 bound to our chassis#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.320 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fd4432c5-b907-49af-a666-2128c4085e24#033[00m
Oct  2 08:47:05 np0005466030 ovn_controller[129257]: 2025-10-02T12:47:05Z|00526|binding|INFO|Setting lport 8001e1a0-d1c2-49ac-8630-690ed8ac9801 ovn-installed in OVS
Oct  2 08:47:05 np0005466030 ovn_controller[129257]: 2025-10-02T12:47:05Z|00527|binding|INFO|Setting lport 8001e1a0-d1c2-49ac-8630-690ed8ac9801 up in Southbound
Oct  2 08:47:05 np0005466030 nova_compute[230518]: 2025-10-02 12:47:05.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:05 np0005466030 nova_compute[230518]: 2025-10-02 12:47:05.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.336 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f0c66893-9e12-490f-bb38-89fefdb9d9a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.337 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfd4432c5-b1 in ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.341 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfd4432c5-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.341 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1b604c94-8f69-47c9-80fb-eb18f96785a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.342 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[13be1164-384a-4603-a9ab-7cb219a92dc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:05 np0005466030 systemd-machined[188247]: New machine qemu-62-instance-0000007e.
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.355 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1c6fec6e-c6e6-469b-9aba-42cf6806f376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:05 np0005466030 systemd[1]: Started Virtual Machine qemu-62-instance-0000007e.
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.371 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c35842ee-582c-41d4-8ebf-65dbd31c67b9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:05 np0005466030 systemd-udevd[279305]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:47:05 np0005466030 NetworkManager[44960]: <info>  [1759409225.3849] device (tap8001e1a0-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:47:05 np0005466030 NetworkManager[44960]: <info>  [1759409225.3861] device (tap8001e1a0-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.400 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4780e9a2-c74e-4de4-942e-c2a2c68edd76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:05 np0005466030 systemd-udevd[279309]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.405 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[456d6d76-dcaf-4340-b4bc-ac90e2af84fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:05 np0005466030 NetworkManager[44960]: <info>  [1759409225.4069] manager: (tapfd4432c5-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/246)
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.441 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[1d2675d3-b5ad-494b-b385-54628c39a494]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.444 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2fcd57d1-8476-415d-9088-32a4578f57c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:05 np0005466030 NetworkManager[44960]: <info>  [1759409225.4703] device (tapfd4432c5-b0): carrier: link connected
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.475 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[cd60817c-52f3-4619-b044-15fc840978b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.492 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9e088b01-8427-4a99-8b2c-f6be1780f992]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708902, 'reachable_time': 34511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279335, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.508 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[134bc215-7dd1-45d7-a4d4-1916c386b951]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:b3ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 708902, 'tstamp': 708902}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279336, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.526 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9524dfe0-6650-462e-8436-ddb5bbc5b6ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708902, 'reachable_time': 34511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279337, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.554 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5841e1-ee72-4454-984d-ffb942c0db6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.608 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e4fc703a-7ec5-43c9-a11f-f67ba96c01b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.610 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.610 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.611 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd4432c5-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:05 np0005466030 nova_compute[230518]: 2025-10-02 12:47:05.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:05 np0005466030 kernel: tapfd4432c5-b0: entered promiscuous mode
Oct  2 08:47:05 np0005466030 NetworkManager[44960]: <info>  [1759409225.6139] manager: (tapfd4432c5-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.618 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfd4432c5-b0, col_values=(('external_ids', {'iface-id': 'd2e0cd82-7c1f-4194-aaaf-514fe24ec2a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:05 np0005466030 ovn_controller[129257]: 2025-10-02T12:47:05Z|00528|binding|INFO|Releasing lport d2e0cd82-7c1f-4194-aaaf-514fe24ec2a7 from this chassis (sb_readonly=0)
Oct  2 08:47:05 np0005466030 nova_compute[230518]: 2025-10-02 12:47:05.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.623 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.624 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5796828b-ae5a-42b0-a6cb-a4e99503bf68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.625 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-fd4432c5-b907-49af-a666-2128c4085e24
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID fd4432c5-b907-49af-a666-2128c4085e24
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:47:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.627 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'env', 'PROCESS_TAG=haproxy-fd4432c5-b907-49af-a666-2128c4085e24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fd4432c5-b907-49af-a666-2128c4085e24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:47:05 np0005466030 nova_compute[230518]: 2025-10-02 12:47:05.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:05 np0005466030 nova_compute[230518]: 2025-10-02 12:47:05.642 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:05 np0005466030 nova_compute[230518]: 2025-10-02 12:47:05.682 2 DEBUG nova.compute.manager [req-c771aca4-899c-44df-847c-233ecf3001bb req-81ae9dd2-68c5-4c85-a6e2-eea4f0238f73 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:05 np0005466030 nova_compute[230518]: 2025-10-02 12:47:05.683 2 DEBUG oslo_concurrency.lockutils [req-c771aca4-899c-44df-847c-233ecf3001bb req-81ae9dd2-68c5-4c85-a6e2-eea4f0238f73 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:05 np0005466030 nova_compute[230518]: 2025-10-02 12:47:05.683 2 DEBUG oslo_concurrency.lockutils [req-c771aca4-899c-44df-847c-233ecf3001bb req-81ae9dd2-68c5-4c85-a6e2-eea4f0238f73 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:05 np0005466030 nova_compute[230518]: 2025-10-02 12:47:05.683 2 DEBUG oslo_concurrency.lockutils [req-c771aca4-899c-44df-847c-233ecf3001bb req-81ae9dd2-68c5-4c85-a6e2-eea4f0238f73 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:05 np0005466030 nova_compute[230518]: 2025-10-02 12:47:05.684 2 DEBUG nova.compute.manager [req-c771aca4-899c-44df-847c-233ecf3001bb req-81ae9dd2-68c5-4c85-a6e2-eea4f0238f73 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Processing event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:47:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e307 e307: 3 total, 3 up, 3 in
Oct  2 08:47:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:47:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:05.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:05 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:05.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:06 np0005466030 podman[279369]: 2025-10-02 12:47:06.000727562 +0000 UTC m=+0.060340568 container create 8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:47:06 np0005466030 systemd[1]: Started libpod-conmon-8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5.scope.
Oct  2 08:47:06 np0005466030 nova_compute[230518]: 2025-10-02 12:47:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:06 np0005466030 nova_compute[230518]: 2025-10-02 12:47:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:47:06 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:47:06 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fed3ce3b5bf83c329c5a1cdc2d4894017038ce5e7f2b6cb2909063ada580d1f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:47:06 np0005466030 podman[279369]: 2025-10-02 12:47:05.963556634 +0000 UTC m=+0.023169650 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:47:06 np0005466030 podman[279369]: 2025-10-02 12:47:06.073687337 +0000 UTC m=+0.133300343 container init 8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 08:47:06 np0005466030 podman[279369]: 2025-10-02 12:47:06.07920862 +0000 UTC m=+0.138821606 container start 8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:47:06 np0005466030 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[279385]: [NOTICE]   (279389) : New worker (279391) forked
Oct  2 08:47:06 np0005466030 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[279385]: [NOTICE]   (279389) : Loading success.
Oct  2 08:47:06 np0005466030 nova_compute[230518]: 2025-10-02 12:47:06.857 2 DEBUG nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:47:06 np0005466030 nova_compute[230518]: 2025-10-02 12:47:06.858 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409226.8571339, 1c4025f8-834f-474c-87ee-59600e6ffb96 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:06 np0005466030 nova_compute[230518]: 2025-10-02 12:47:06.859 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] VM Started (Lifecycle Event)#033[00m
Oct  2 08:47:06 np0005466030 nova_compute[230518]: 2025-10-02 12:47:06.862 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:47:06 np0005466030 nova_compute[230518]: 2025-10-02 12:47:06.865 2 INFO nova.virt.libvirt.driver [-] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Instance spawned successfully.#033[00m
Oct  2 08:47:06 np0005466030 nova_compute[230518]: 2025-10-02 12:47:06.865 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:47:06 np0005466030 nova_compute[230518]: 2025-10-02 12:47:06.909 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:06 np0005466030 nova_compute[230518]: 2025-10-02 12:47:06.910 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:06 np0005466030 nova_compute[230518]: 2025-10-02 12:47:06.910 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:06 np0005466030 nova_compute[230518]: 2025-10-02 12:47:06.911 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:06 np0005466030 nova_compute[230518]: 2025-10-02 12:47:06.911 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:06 np0005466030 nova_compute[230518]: 2025-10-02 12:47:06.912 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:06 np0005466030 nova_compute[230518]: 2025-10-02 12:47:06.960 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:06 np0005466030 nova_compute[230518]: 2025-10-02 12:47:06.963 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:47:07 np0005466030 nova_compute[230518]: 2025-10-02 12:47:07.003 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:47:07 np0005466030 nova_compute[230518]: 2025-10-02 12:47:07.004 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409226.858197, 1c4025f8-834f-474c-87ee-59600e6ffb96 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:07 np0005466030 nova_compute[230518]: 2025-10-02 12:47:07.004 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:47:07 np0005466030 nova_compute[230518]: 2025-10-02 12:47:07.026 2 INFO nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Took 10.76 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:47:07 np0005466030 nova_compute[230518]: 2025-10-02 12:47:07.027 2 DEBUG nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:07 np0005466030 nova_compute[230518]: 2025-10-02 12:47:07.041 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:07 np0005466030 nova_compute[230518]: 2025-10-02 12:47:07.046 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409226.8616016, 1c4025f8-834f-474c-87ee-59600e6ffb96 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:07 np0005466030 nova_compute[230518]: 2025-10-02 12:47:07.047 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:47:07 np0005466030 nova_compute[230518]: 2025-10-02 12:47:07.086 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:07 np0005466030 nova_compute[230518]: 2025-10-02 12:47:07.090 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:47:07 np0005466030 nova_compute[230518]: 2025-10-02 12:47:07.130 2 INFO nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Took 12.14 seconds to build instance.#033[00m
Oct  2 08:47:07 np0005466030 nova_compute[230518]: 2025-10-02 12:47:07.163 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e308 e308: 3 total, 3 up, 3 in
Oct  2 08:47:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:07 np0005466030 nova_compute[230518]: 2025-10-02 12:47:07.885 2 DEBUG nova.compute.manager [req-e7be5edc-eebc-4ef3-bcd2-4499027eb084 req-ecc1972c-4c56-42de-b61b-e6643d1388c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:07 np0005466030 nova_compute[230518]: 2025-10-02 12:47:07.886 2 DEBUG oslo_concurrency.lockutils [req-e7be5edc-eebc-4ef3-bcd2-4499027eb084 req-ecc1972c-4c56-42de-b61b-e6643d1388c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:07 np0005466030 nova_compute[230518]: 2025-10-02 12:47:07.886 2 DEBUG oslo_concurrency.lockutils [req-e7be5edc-eebc-4ef3-bcd2-4499027eb084 req-ecc1972c-4c56-42de-b61b-e6643d1388c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:07 np0005466030 nova_compute[230518]: 2025-10-02 12:47:07.886 2 DEBUG oslo_concurrency.lockutils [req-e7be5edc-eebc-4ef3-bcd2-4499027eb084 req-ecc1972c-4c56-42de-b61b-e6643d1388c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:07 np0005466030 nova_compute[230518]: 2025-10-02 12:47:07.886 2 DEBUG nova.compute.manager [req-e7be5edc-eebc-4ef3-bcd2-4499027eb084 req-ecc1972c-4c56-42de-b61b-e6643d1388c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] No waiting events found dispatching network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:47:07 np0005466030 nova_compute[230518]: 2025-10-02 12:47:07.886 2 WARNING nova.compute.manager [req-e7be5edc-eebc-4ef3-bcd2-4499027eb084 req-ecc1972c-4c56-42de-b61b-e6643d1388c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received unexpected event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:47:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct  2 08:47:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:47:07 np0005466030 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:07.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:47:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:47:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:07.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:47:08 np0005466030 nova_compute[230518]: 2025-10-02 12:47:08.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:08 np0005466030 ovn_controller[129257]: 2025-10-02T12:47:08Z|00529|binding|INFO|Releasing lport d2e0cd82-7c1f-4194-aaaf-514fe24ec2a7 from this chassis (sb_readonly=0)
Oct  2 08:47:08 np0005466030 nova_compute[230518]: 2025-10-02 12:47:08.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:08 np0005466030 ovn_controller[129257]: 2025-10-02T12:47:08Z|00530|binding|INFO|Releasing lport d2e0cd82-7c1f-4194-aaaf-514fe24ec2a7 from this chassis (sb_readonly=0)
Oct  2 08:47:08 np0005466030 nova_compute[230518]: 2025-10-02 12:47:08.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:08 np0005466030 nova_compute[230518]: 2025-10-02 12:47:08.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:08 np0005466030 nova_compute[230518]: 2025-10-02 12:47:08.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:09 np0005466030 nova_compute[230518]: 2025-10-02 12:47:09.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:09.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:09.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:10 np0005466030 nova_compute[230518]: 2025-10-02 12:47:10.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:10 np0005466030 nova_compute[230518]: 2025-10-02 12:47:10.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:47:10 np0005466030 nova_compute[230518]: 2025-10-02 12:47:10.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:47:10 np0005466030 nova_compute[230518]: 2025-10-02 12:47:10.126 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:47:10 np0005466030 nova_compute[230518]: 2025-10-02 12:47:10.127 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:47:10 np0005466030 nova_compute[230518]: 2025-10-02 12:47:10.127 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:47:10 np0005466030 nova_compute[230518]: 2025-10-02 12:47:10.128 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1c4025f8-834f-474c-87ee-59600e6ffb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:11.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:47:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:11.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:47:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e309 e309: 3 total, 3 up, 3 in
Oct  2 08:47:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:13 np0005466030 nova_compute[230518]: 2025-10-02 12:47:13.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:13 np0005466030 nova_compute[230518]: 2025-10-02 12:47:13.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:13.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:13.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:14 np0005466030 nova_compute[230518]: 2025-10-02 12:47:14.159 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updating instance_info_cache with network_info: [{"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:47:14 np0005466030 podman[279444]: 2025-10-02 12:47:14.832333069 +0000 UTC m=+0.080200123 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:47:14 np0005466030 podman[279443]: 2025-10-02 12:47:14.869420535 +0000 UTC m=+0.116936838 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:47:15 np0005466030 nova_compute[230518]: 2025-10-02 12:47:15.201 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:47:15 np0005466030 nova_compute[230518]: 2025-10-02 12:47:15.201 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:47:15 np0005466030 nova_compute[230518]: 2025-10-02 12:47:15.201 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:47:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:15.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:15.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:17 np0005466030 nova_compute[230518]: 2025-10-02 12:47:17.074 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:47:17 np0005466030 nova_compute[230518]: 2025-10-02 12:47:17.077 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquired lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:47:17 np0005466030 nova_compute[230518]: 2025-10-02 12:47:17.078 2 DEBUG nova.network.neutron [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #100. Immutable memtables: 0.
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.144168) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 100
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237144209, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2543, "num_deletes": 262, "total_data_size": 5866672, "memory_usage": 5938296, "flush_reason": "Manual Compaction"}
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #101: started
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237192499, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 101, "file_size": 3845464, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49804, "largest_seqno": 52342, "table_properties": {"data_size": 3834771, "index_size": 6931, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22643, "raw_average_key_size": 21, "raw_value_size": 3813334, "raw_average_value_size": 3573, "num_data_blocks": 297, "num_entries": 1067, "num_filter_entries": 1067, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409048, "oldest_key_time": 1759409048, "file_creation_time": 1759409237, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 48381 microseconds, and 8812 cpu microseconds.
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.192546) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #101: 3845464 bytes OK
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.192567) [db/memtable_list.cc:519] [default] Level-0 commit table #101 started
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.196648) [db/memtable_list.cc:722] [default] Level-0 commit table #101: memtable #1 done
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.196673) EVENT_LOG_v1 {"time_micros": 1759409237196666, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.196697) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 5855197, prev total WAL file size 5876256, number of live WAL files 2.
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000097.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.198831) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [101(3755KB)], [99(9043KB)]
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237198966, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [101], "files_L6": [99], "score": -1, "input_data_size": 13106122, "oldest_snapshot_seqno": -1}
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #102: 7671 keys, 11159525 bytes, temperature: kUnknown
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237336306, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 102, "file_size": 11159525, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11108234, "index_size": 31019, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19205, "raw_key_size": 198377, "raw_average_key_size": 25, "raw_value_size": 10971393, "raw_average_value_size": 1430, "num_data_blocks": 1219, "num_entries": 7671, "num_filter_entries": 7671, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409237, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 102, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:47:17 np0005466030 podman[279752]: 2025-10-02 12:47:17.269716001 +0000 UTC m=+0.022540629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.336722) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11159525 bytes
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.387689) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.3 rd, 81.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.8 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 8207, records dropped: 536 output_compression: NoCompression
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.387735) EVENT_LOG_v1 {"time_micros": 1759409237387719, "job": 62, "event": "compaction_finished", "compaction_time_micros": 137479, "compaction_time_cpu_micros": 29609, "output_level": 6, "num_output_files": 1, "total_output_size": 11159525, "num_input_records": 8207, "num_output_records": 7671, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237388856, "job": 62, "event": "table_file_deletion", "file_number": 101}
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000099.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237390459, "job": 62, "event": "table_file_deletion", "file_number": 99}
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.198701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.390571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.390575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.390577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.390578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.390580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:47:17 np0005466030 podman[279752]: 2025-10-02 12:47:17.402317071 +0000 UTC m=+0.155141669 container create 4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  2 08:47:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:17 np0005466030 systemd[1]: Started libpod-conmon-4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e.scope.
Oct  2 08:47:17 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:47:17 np0005466030 podman[279752]: 2025-10-02 12:47:17.619687716 +0000 UTC m=+0.372512314 container init 4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bose, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 08:47:17 np0005466030 podman[279752]: 2025-10-02 12:47:17.628786882 +0000 UTC m=+0.381611520 container start 4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bose, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 08:47:17 np0005466030 podman[279752]: 2025-10-02 12:47:17.666794557 +0000 UTC m=+0.419619155 container attach 4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bose, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 08:47:17 np0005466030 laughing_bose[279769]: 167 167
Oct  2 08:47:17 np0005466030 systemd[1]: libpod-4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e.scope: Deactivated successfully.
Oct  2 08:47:17 np0005466030 conmon[279769]: conmon 4d56b8966ccf13f819d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e.scope/container/memory.events
Oct  2 08:47:17 np0005466030 podman[279752]: 2025-10-02 12:47:17.702316654 +0000 UTC m=+0.455141252 container died 4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bose, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 08:47:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:47:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:17.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:47:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:17.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:18 np0005466030 systemd[1]: var-lib-containers-storage-overlay-f6dbd000f8faa9b53022ec548d908b8619b70c7155b173a2b63c93e12d833389-merged.mount: Deactivated successfully.
Oct  2 08:47:18 np0005466030 podman[279752]: 2025-10-02 12:47:18.422624114 +0000 UTC m=+1.175448712 container remove 4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bose, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 08:47:18 np0005466030 systemd[1]: libpod-conmon-4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e.scope: Deactivated successfully.
Oct  2 08:47:18 np0005466030 podman[279793]: 2025-10-02 12:47:18.614848449 +0000 UTC m=+0.029390855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 08:47:18 np0005466030 podman[279793]: 2025-10-02 12:47:18.779382332 +0000 UTC m=+0.193924738 container create b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 08:47:18 np0005466030 nova_compute[230518]: 2025-10-02 12:47:18.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:18 np0005466030 nova_compute[230518]: 2025-10-02 12:47:18.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:18 np0005466030 systemd[1]: Started libpod-conmon-b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925.scope.
Oct  2 08:47:18 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:47:18 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16622ef1bfc18d262fc4630a10b16b70cd4bd326de38ff512f224919c15fd8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 08:47:18 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16622ef1bfc18d262fc4630a10b16b70cd4bd326de38ff512f224919c15fd8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 08:47:18 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16622ef1bfc18d262fc4630a10b16b70cd4bd326de38ff512f224919c15fd8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 08:47:18 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16622ef1bfc18d262fc4630a10b16b70cd4bd326de38ff512f224919c15fd8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 08:47:19 np0005466030 podman[279793]: 2025-10-02 12:47:19.024606024 +0000 UTC m=+0.439148450 container init b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 08:47:19 np0005466030 podman[279793]: 2025-10-02 12:47:19.033503234 +0000 UTC m=+0.448045640 container start b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  2 08:47:19 np0005466030 podman[279793]: 2025-10-02 12:47:19.128640715 +0000 UTC m=+0.543183281 container attach b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sammet, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  2 08:47:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:47:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:47:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.002999973s ======
Oct  2 08:47:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:19.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002999973s
Oct  2 08:47:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:47:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:19.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]: [
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:    {
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:        "available": false,
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:        "ceph_device": false,
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:        "lsm_data": {},
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:        "lvs": [],
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:        "path": "/dev/sr0",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:        "rejected_reasons": [
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "Has a FileSystem",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "Insufficient space (<5GB)"
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:        ],
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:        "sys_api": {
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "actuators": null,
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "device_nodes": "sr0",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "devname": "sr0",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "human_readable_size": "482.00 KB",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "id_bus": "ata",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "model": "QEMU DVD-ROM",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "nr_requests": "2",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "parent": "/dev/sr0",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "partitions": {},
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "path": "/dev/sr0",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "removable": "1",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "rev": "2.5+",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "ro": "0",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "rotational": "0",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "sas_address": "",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "sas_device_handle": "",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "scheduler_mode": "mq-deadline",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "sectors": 0,
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "sectorsize": "2048",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "size": 493568.0,
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "support_discard": "2048",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "type": "disk",
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:            "vendor": "QEMU"
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:        }
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]:    }
Oct  2 08:47:20 np0005466030 friendly_sammet[279809]: ]
Oct  2 08:47:20 np0005466030 systemd[1]: libpod-b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925.scope: Deactivated successfully.
Oct  2 08:47:20 np0005466030 systemd[1]: libpod-b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925.scope: Consumed 1.160s CPU time.
Oct  2 08:47:20 np0005466030 podman[280960]: 2025-10-02 12:47:20.357881838 +0000 UTC m=+0.024459680 container died b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  2 08:47:20 np0005466030 systemd[1]: var-lib-containers-storage-overlay-f16622ef1bfc18d262fc4630a10b16b70cd4bd326de38ff512f224919c15fd8f-merged.mount: Deactivated successfully.
Oct  2 08:47:20 np0005466030 podman[280960]: 2025-10-02 12:47:20.418949248 +0000 UTC m=+0.085527080 container remove b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sammet, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 08:47:20 np0005466030 systemd[1]: libpod-conmon-b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925.scope: Deactivated successfully.
Oct  2 08:47:20 np0005466030 nova_compute[230518]: 2025-10-02 12:47:20.956 2 DEBUG nova.network.neutron [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updating instance_info_cache with network_info: [{"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:47:21 np0005466030 nova_compute[230518]: 2025-10-02 12:47:21.025 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Releasing lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:47:21 np0005466030 nova_compute[230518]: 2025-10-02 12:47:21.297 2 DEBUG nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Oct  2 08:47:21 np0005466030 nova_compute[230518]: 2025-10-02 12:47:21.298 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Creating file /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/da8e137c0c114648871cbc0f3f940840.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Oct  2 08:47:21 np0005466030 nova_compute[230518]: 2025-10-02 12:47:21.298 2 DEBUG oslo_concurrency.processutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/da8e137c0c114648871cbc0f3f940840.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:47:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:47:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:47:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:47:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:47:21 np0005466030 nova_compute[230518]: 2025-10-02 12:47:21.758 2 DEBUG oslo_concurrency.processutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/da8e137c0c114648871cbc0f3f940840.tmp" returned: 1 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:21 np0005466030 nova_compute[230518]: 2025-10-02 12:47:21.760 2 DEBUG oslo_concurrency.processutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/da8e137c0c114648871cbc0f3f940840.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Oct  2 08:47:21 np0005466030 nova_compute[230518]: 2025-10-02 12:47:21.760 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Creating directory /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Oct  2 08:47:21 np0005466030 nova_compute[230518]: 2025-10-02 12:47:21.760 2 DEBUG oslo_concurrency.processutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:21 np0005466030 ovn_controller[129257]: 2025-10-02T12:47:21Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:3d:b5 10.100.0.5
Oct  2 08:47:21 np0005466030 ovn_controller[129257]: 2025-10-02T12:47:21Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:3d:b5 10.100.0.5
Oct  2 08:47:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:21.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:47:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:21.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:47:21 np0005466030 nova_compute[230518]: 2025-10-02 12:47:21.982 2 DEBUG oslo_concurrency.processutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:21 np0005466030 nova_compute[230518]: 2025-10-02 12:47:21.988 2 DEBUG nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:47:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:23 np0005466030 nova_compute[230518]: 2025-10-02 12:47:23.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:23 np0005466030 nova_compute[230518]: 2025-10-02 12:47:23.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:23.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:23.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:24 np0005466030 kernel: tap8001e1a0-d1 (unregistering): left promiscuous mode
Oct  2 08:47:24 np0005466030 NetworkManager[44960]: <info>  [1759409244.6846] device (tap8001e1a0-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:47:24 np0005466030 ovn_controller[129257]: 2025-10-02T12:47:24Z|00531|binding|INFO|Releasing lport 8001e1a0-d1c2-49ac-8630-690ed8ac9801 from this chassis (sb_readonly=0)
Oct  2 08:47:24 np0005466030 ovn_controller[129257]: 2025-10-02T12:47:24Z|00532|binding|INFO|Setting lport 8001e1a0-d1c2-49ac-8630-690ed8ac9801 down in Southbound
Oct  2 08:47:24 np0005466030 ovn_controller[129257]: 2025-10-02T12:47:24Z|00533|binding|INFO|Removing iface tap8001e1a0-d1 ovn-installed in OVS
Oct  2 08:47:24 np0005466030 nova_compute[230518]: 2025-10-02 12:47:24.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:24 np0005466030 nova_compute[230518]: 2025-10-02 12:47:24.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:24 np0005466030 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Oct  2 08:47:24 np0005466030 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000007e.scope: Consumed 14.626s CPU time.
Oct  2 08:47:24 np0005466030 systemd-machined[188247]: Machine qemu-62-instance-0000007e terminated.
Oct  2 08:47:25 np0005466030 nova_compute[230518]: 2025-10-02 12:47:25.004 2 INFO nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 08:47:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.008 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:3d:b5 10.100.0.5'], port_security=['fa:16:3e:48:3d:b5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1c4025f8-834f-474c-87ee-59600e6ffb96', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8001e1a0-d1c2-49ac-8630-690ed8ac9801) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:47:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.009 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8001e1a0-d1c2-49ac-8630-690ed8ac9801 in datapath fd4432c5-b907-49af-a666-2128c4085e24 unbound from our chassis#033[00m
Oct  2 08:47:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.011 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd4432c5-b907-49af-a666-2128c4085e24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:47:25 np0005466030 nova_compute[230518]: 2025-10-02 12:47:25.013 2 INFO nova.virt.libvirt.driver [-] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Instance destroyed successfully.#033[00m
Oct  2 08:47:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.012 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[03604aa0-cf85-42e1-9297-e12d10b5b6c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.013 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace which is not needed anymore#033[00m
Oct  2 08:47:25 np0005466030 nova_compute[230518]: 2025-10-02 12:47:25.014 2 DEBUG nova.virt.libvirt.vif [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:46:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1232591881',display_name='tempest-DeleteServersTestJSON-server-1232591881',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1232591881',id=126,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:47:07Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-fwmxnmnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:47:13Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=1c4025f8-834f-474c-87ee-59600e6ffb96,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-541864340-network", "vif_mac": "fa:16:3e:48:3d:b5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:47:25 np0005466030 nova_compute[230518]: 2025-10-02 12:47:25.015 2 DEBUG nova.network.os_vif_util [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-541864340-network", "vif_mac": "fa:16:3e:48:3d:b5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:47:25 np0005466030 nova_compute[230518]: 2025-10-02 12:47:25.015 2 DEBUG nova.network.os_vif_util [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:47:25 np0005466030 nova_compute[230518]: 2025-10-02 12:47:25.016 2 DEBUG os_vif [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:47:25 np0005466030 nova_compute[230518]: 2025-10-02 12:47:25.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:25 np0005466030 nova_compute[230518]: 2025-10-02 12:47:25.018 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8001e1a0-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:25 np0005466030 nova_compute[230518]: 2025-10-02 12:47:25.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:25 np0005466030 nova_compute[230518]: 2025-10-02 12:47:25.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:25 np0005466030 nova_compute[230518]: 2025-10-02 12:47:25.024 2 INFO os_vif [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1')#033[00m
Oct  2 08:47:25 np0005466030 nova_compute[230518]: 2025-10-02 12:47:25.028 2 DEBUG nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:47:25 np0005466030 nova_compute[230518]: 2025-10-02 12:47:25.029 2 DEBUG nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:47:25 np0005466030 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[279385]: [NOTICE]   (279389) : haproxy version is 2.8.14-c23fe91
Oct  2 08:47:25 np0005466030 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[279385]: [NOTICE]   (279389) : path to executable is /usr/sbin/haproxy
Oct  2 08:47:25 np0005466030 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[279385]: [WARNING]  (279389) : Exiting Master process...
Oct  2 08:47:25 np0005466030 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[279385]: [WARNING]  (279389) : Exiting Master process...
Oct  2 08:47:25 np0005466030 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[279385]: [ALERT]    (279389) : Current worker (279391) exited with code 143 (Terminated)
Oct  2 08:47:25 np0005466030 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[279385]: [WARNING]  (279389) : All workers exited. Exiting... (0)
Oct  2 08:47:25 np0005466030 systemd[1]: libpod-8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5.scope: Deactivated successfully.
Oct  2 08:47:25 np0005466030 podman[281012]: 2025-10-02 12:47:25.162664706 +0000 UTC m=+0.050186560 container died 8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:47:25 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5-userdata-shm.mount: Deactivated successfully.
Oct  2 08:47:25 np0005466030 systemd[1]: var-lib-containers-storage-overlay-2fed3ce3b5bf83c329c5a1cdc2d4894017038ce5e7f2b6cb2909063ada580d1f-merged.mount: Deactivated successfully.
Oct  2 08:47:25 np0005466030 podman[281012]: 2025-10-02 12:47:25.252275783 +0000 UTC m=+0.139797637 container cleanup 8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:47:25 np0005466030 systemd[1]: libpod-conmon-8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5.scope: Deactivated successfully.
Oct  2 08:47:25 np0005466030 podman[281043]: 2025-10-02 12:47:25.30908496 +0000 UTC m=+0.036921372 container remove 8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:47:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.315 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e4696785-5d60-45f9-b43f-f3ab39a0a18e]: (4, ('Thu Oct  2 12:47:25 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5)\n8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5\nThu Oct  2 12:47:25 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5)\n8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.317 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[452f866f-819c-4c46-b451-66584dc4e04b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.318 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:25 np0005466030 kernel: tapfd4432c5-b0: left promiscuous mode
Oct  2 08:47:25 np0005466030 nova_compute[230518]: 2025-10-02 12:47:25.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:25 np0005466030 nova_compute[230518]: 2025-10-02 12:47:25.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.338 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[24a4c221-69c0-41da-b788-7f4c9854cfac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.369 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6ffbd8-e6f6-465c-8496-22f05aa86664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.370 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[64483c0c-9001-42ce-8c39-4c40a074183f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.388 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6e4ce390-ea81-4314-b6a9-c2ccc782289e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708895, 'reachable_time': 18278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281059, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.392 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:47:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.392 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[fecdbb6a-0a22-40a3-aee2-f90bfb0f7967]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:25 np0005466030 systemd[1]: run-netns-ovnmeta\x2dfd4432c5\x2db907\x2d49af\x2da666\x2d2128c4085e24.mount: Deactivated successfully.
Oct  2 08:47:25 np0005466030 nova_compute[230518]: 2025-10-02 12:47:25.671 2 DEBUG neutronclient.v2_0.client [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 8001e1a0-d1c2-49ac-8630-690ed8ac9801 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 08:47:25 np0005466030 nova_compute[230518]: 2025-10-02 12:47:25.882 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:25 np0005466030 nova_compute[230518]: 2025-10-02 12:47:25.883 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:25 np0005466030 nova_compute[230518]: 2025-10-02 12:47:25.883 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:25.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.942 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.943 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.943 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:47:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:25.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:47:26 np0005466030 nova_compute[230518]: 2025-10-02 12:47:26.566 2 DEBUG nova.compute.manager [req-fddb0fa3-cb03-45f1-8f88-8ec5dddd74cb req-29713bd3-5274-4f4b-baf4-2a1810d18169 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-vif-unplugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:26 np0005466030 nova_compute[230518]: 2025-10-02 12:47:26.567 2 DEBUG oslo_concurrency.lockutils [req-fddb0fa3-cb03-45f1-8f88-8ec5dddd74cb req-29713bd3-5274-4f4b-baf4-2a1810d18169 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:26 np0005466030 nova_compute[230518]: 2025-10-02 12:47:26.568 2 DEBUG oslo_concurrency.lockutils [req-fddb0fa3-cb03-45f1-8f88-8ec5dddd74cb req-29713bd3-5274-4f4b-baf4-2a1810d18169 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:26 np0005466030 nova_compute[230518]: 2025-10-02 12:47:26.568 2 DEBUG oslo_concurrency.lockutils [req-fddb0fa3-cb03-45f1-8f88-8ec5dddd74cb req-29713bd3-5274-4f4b-baf4-2a1810d18169 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:26 np0005466030 nova_compute[230518]: 2025-10-02 12:47:26.568 2 DEBUG nova.compute.manager [req-fddb0fa3-cb03-45f1-8f88-8ec5dddd74cb req-29713bd3-5274-4f4b-baf4-2a1810d18169 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] No waiting events found dispatching network-vif-unplugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:47:26 np0005466030 nova_compute[230518]: 2025-10-02 12:47:26.569 2 WARNING nova.compute.manager [req-fddb0fa3-cb03-45f1-8f88-8ec5dddd74cb req-29713bd3-5274-4f4b-baf4-2a1810d18169 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received unexpected event network-vif-unplugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:47:26 np0005466030 podman[281060]: 2025-10-02 12:47:26.807428686 +0000 UTC m=+0.056757796 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid)
Oct  2 08:47:26 np0005466030 podman[281061]: 2025-10-02 12:47:26.811394091 +0000 UTC m=+0.059420860 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 08:47:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:27.867 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:47:27 np0005466030 nova_compute[230518]: 2025-10-02 12:47:27.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:27.868 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:47:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:27.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:27.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:47:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:47:28 np0005466030 nova_compute[230518]: 2025-10-02 12:47:28.812 2 DEBUG nova.compute.manager [req-698cae7b-99e1-421d-80fe-13fb7001682b req-141990b3-5501-4b73-839d-24de879197b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:28 np0005466030 nova_compute[230518]: 2025-10-02 12:47:28.812 2 DEBUG oslo_concurrency.lockutils [req-698cae7b-99e1-421d-80fe-13fb7001682b req-141990b3-5501-4b73-839d-24de879197b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:28 np0005466030 nova_compute[230518]: 2025-10-02 12:47:28.813 2 DEBUG oslo_concurrency.lockutils [req-698cae7b-99e1-421d-80fe-13fb7001682b req-141990b3-5501-4b73-839d-24de879197b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:28 np0005466030 nova_compute[230518]: 2025-10-02 12:47:28.813 2 DEBUG oslo_concurrency.lockutils [req-698cae7b-99e1-421d-80fe-13fb7001682b req-141990b3-5501-4b73-839d-24de879197b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:28 np0005466030 nova_compute[230518]: 2025-10-02 12:47:28.813 2 DEBUG nova.compute.manager [req-698cae7b-99e1-421d-80fe-13fb7001682b req-141990b3-5501-4b73-839d-24de879197b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] No waiting events found dispatching network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:47:28 np0005466030 nova_compute[230518]: 2025-10-02 12:47:28.813 2 WARNING nova.compute.manager [req-698cae7b-99e1-421d-80fe-13fb7001682b req-141990b3-5501-4b73-839d-24de879197b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received unexpected event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:47:28 np0005466030 nova_compute[230518]: 2025-10-02 12:47:28.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:28 np0005466030 nova_compute[230518]: 2025-10-02 12:47:28.842 2 DEBUG nova.compute.manager [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-changed-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:28 np0005466030 nova_compute[230518]: 2025-10-02 12:47:28.842 2 DEBUG nova.compute.manager [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Refreshing instance network info cache due to event network-changed-8001e1a0-d1c2-49ac-8630-690ed8ac9801. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:47:28 np0005466030 nova_compute[230518]: 2025-10-02 12:47:28.843 2 DEBUG oslo_concurrency.lockutils [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:47:28 np0005466030 nova_compute[230518]: 2025-10-02 12:47:28.843 2 DEBUG oslo_concurrency.lockutils [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:47:28 np0005466030 nova_compute[230518]: 2025-10-02 12:47:28.843 2 DEBUG nova.network.neutron [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Refreshing network info cache for port 8001e1a0-d1c2-49ac-8630-690ed8ac9801 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:47:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:29.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:29.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:30 np0005466030 nova_compute[230518]: 2025-10-02 12:47:30.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:30 np0005466030 nova_compute[230518]: 2025-10-02 12:47:30.934 2 DEBUG nova.network.neutron [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updated VIF entry in instance network info cache for port 8001e1a0-d1c2-49ac-8630-690ed8ac9801. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:47:30 np0005466030 nova_compute[230518]: 2025-10-02 12:47:30.935 2 DEBUG nova.network.neutron [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updating instance_info_cache with network_info: [{"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:47:31 np0005466030 nova_compute[230518]: 2025-10-02 12:47:31.415 2 DEBUG oslo_concurrency.lockutils [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:47:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:31.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:31.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e310 e310: 3 total, 3 up, 3 in
Oct  2 08:47:33 np0005466030 nova_compute[230518]: 2025-10-02 12:47:33.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:33.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:33.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:35 np0005466030 nova_compute[230518]: 2025-10-02 12:47:35.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:35 np0005466030 nova_compute[230518]: 2025-10-02 12:47:35.112 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:35 np0005466030 nova_compute[230518]: 2025-10-02 12:47:35.113 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:35.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:35.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:37 np0005466030 nova_compute[230518]: 2025-10-02 12:47:37.227 2 DEBUG nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:47:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:37.870 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:37.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:37.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:38 np0005466030 nova_compute[230518]: 2025-10-02 12:47:38.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:39 np0005466030 ovn_controller[129257]: 2025-10-02T12:47:39Z|00534|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct  2 08:47:39 np0005466030 nova_compute[230518]: 2025-10-02 12:47:39.929 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409244.927992, 1c4025f8-834f-474c-87ee-59600e6ffb96 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:39 np0005466030 nova_compute[230518]: 2025-10-02 12:47:39.929 2 INFO nova.compute.manager [-] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:47:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:47:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:39.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:47:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:39.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:40 np0005466030 nova_compute[230518]: 2025-10-02 12:47:40.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:40 np0005466030 nova_compute[230518]: 2025-10-02 12:47:40.122 2 DEBUG nova.compute.manager [None req-43c5a35c-a3dd-4a94-887e-b5e2a49cdf3f - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:40 np0005466030 nova_compute[230518]: 2025-10-02 12:47:40.126 2 DEBUG nova.compute.manager [None req-43c5a35c-a3dd-4a94-887e-b5e2a49cdf3f - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:47:40 np0005466030 nova_compute[230518]: 2025-10-02 12:47:40.229 2 INFO nova.compute.manager [None req-43c5a35c-a3dd-4a94-887e-b5e2a49cdf3f - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct  2 08:47:41 np0005466030 nova_compute[230518]: 2025-10-02 12:47:41.098 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:41 np0005466030 nova_compute[230518]: 2025-10-02 12:47:41.099 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:41 np0005466030 nova_compute[230518]: 2025-10-02 12:47:41.109 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:47:41 np0005466030 nova_compute[230518]: 2025-10-02 12:47:41.109 2 INFO nova.compute.claims [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:47:41 np0005466030 nova_compute[230518]: 2025-10-02 12:47:41.660 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:41.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:47:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:41.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:47:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:47:42 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2583393535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:47:42 np0005466030 nova_compute[230518]: 2025-10-02 12:47:42.080 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:42 np0005466030 nova_compute[230518]: 2025-10-02 12:47:42.086 2 DEBUG nova.compute.provider_tree [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:47:42 np0005466030 nova_compute[230518]: 2025-10-02 12:47:42.185 2 DEBUG nova.scheduler.client.report [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:47:42 np0005466030 nova_compute[230518]: 2025-10-02 12:47:42.324 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:42 np0005466030 nova_compute[230518]: 2025-10-02 12:47:42.325 2 DEBUG nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:47:42 np0005466030 nova_compute[230518]: 2025-10-02 12:47:42.448 2 DEBUG nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:47:42 np0005466030 nova_compute[230518]: 2025-10-02 12:47:42.449 2 DEBUG nova.network.neutron [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:47:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:42 np0005466030 nova_compute[230518]: 2025-10-02 12:47:42.504 2 INFO nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:47:42 np0005466030 nova_compute[230518]: 2025-10-02 12:47:42.635 2 DEBUG nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:47:42 np0005466030 nova_compute[230518]: 2025-10-02 12:47:42.812 2 DEBUG nova.policy [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e3cd62a3208649c183d3fc2edc1c0f18', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd3e0300f3cf5493d8a9e62e2c4a95767', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.654 2 INFO nova.virt.block_device [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Booting with volume aee976fe-a491-4491-adf3-8d226e48711d at /dev/vda#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.816 2 DEBUG os_brick.utils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.817 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.827 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.827 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[7594c18e-f7d0-4f14-b48a-e834dbf3fa20]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.829 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.836 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.837 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[61e9ff9f-bd1f-4168-920f-8abe84966b1a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.838 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.845 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.846 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[ae3b7462-30e2-4c2e-86f4-b16d8c617d09]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.847 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[ecf8bfd4-23a7-46e2-b328-a361eff1a93f]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.847 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.880 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.883 2 DEBUG os_brick.initiator.connectors.lightos [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.883 2 DEBUG os_brick.initiator.connectors.lightos [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.883 2 DEBUG os_brick.initiator.connectors.lightos [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.884 2 DEBUG os_brick.utils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:47:43 np0005466030 nova_compute[230518]: 2025-10-02 12:47:43.884 2 DEBUG nova.virt.block_device [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating existing volume attachment record: 9d9b2116-d013-4f8a-99a9-6ebdcb06d0df _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:47:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:43.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:43.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:45 np0005466030 nova_compute[230518]: 2025-10-02 12:47:45.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:45 np0005466030 nova_compute[230518]: 2025-10-02 12:47:45.175 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:45 np0005466030 nova_compute[230518]: 2025-10-02 12:47:45.175 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:45 np0005466030 nova_compute[230518]: 2025-10-02 12:47:45.176 2 DEBUG nova.compute.manager [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Going to confirm migration 16 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Oct  2 08:47:45 np0005466030 podman[281185]: 2025-10-02 12:47:45.816770878 +0000 UTC m=+0.060604997 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 08:47:45 np0005466030 podman[281184]: 2025-10-02 12:47:45.84675632 +0000 UTC m=+0.091762446 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 08:47:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:45.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:45.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:46 np0005466030 nova_compute[230518]: 2025-10-02 12:47:46.535 2 DEBUG neutronclient.v2_0.client [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 8001e1a0-d1c2-49ac-8630-690ed8ac9801 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 08:47:46 np0005466030 nova_compute[230518]: 2025-10-02 12:47:46.536 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:47:46 np0005466030 nova_compute[230518]: 2025-10-02 12:47:46.536 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquired lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:47:46 np0005466030 nova_compute[230518]: 2025-10-02 12:47:46.536 2 DEBUG nova.network.neutron [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:47:46 np0005466030 nova_compute[230518]: 2025-10-02 12:47:46.536 2 DEBUG nova.objects.instance [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'info_cache' on Instance uuid 1c4025f8-834f-474c-87ee-59600e6ffb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:46 np0005466030 nova_compute[230518]: 2025-10-02 12:47:46.755 2 DEBUG nova.network.neutron [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Successfully created port: bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:47:47 np0005466030 nova_compute[230518]: 2025-10-02 12:47:47.001 2 DEBUG nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:47:47 np0005466030 nova_compute[230518]: 2025-10-02 12:47:47.003 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:47:47 np0005466030 nova_compute[230518]: 2025-10-02 12:47:47.003 2 INFO nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Creating image(s)#033[00m
Oct  2 08:47:47 np0005466030 nova_compute[230518]: 2025-10-02 12:47:47.003 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:47:47 np0005466030 nova_compute[230518]: 2025-10-02 12:47:47.004 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Ensure instance console log exists: /var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:47:47 np0005466030 nova_compute[230518]: 2025-10-02 12:47:47.004 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:47 np0005466030 nova_compute[230518]: 2025-10-02 12:47:47.004 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:47 np0005466030 nova_compute[230518]: 2025-10-02 12:47:47.004 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:47:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:47.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:47:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:47.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:48 np0005466030 nova_compute[230518]: 2025-10-02 12:47:48.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:49.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:49 np0005466030 nova_compute[230518]: 2025-10-02 12:47:49.990 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:49 np0005466030 nova_compute[230518]: 2025-10-02 12:47:49.991 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:49.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.025 2 DEBUG nova.network.neutron [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updating instance_info_cache with network_info: [{"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.091 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Releasing lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.091 2 DEBUG nova.objects.instance [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'migration_context' on Instance uuid 1c4025f8-834f-474c-87ee-59600e6ffb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.093 2 DEBUG nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.204 2 DEBUG nova.storage.rbd_utils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] removing snapshot(nova-resize) on rbd image(1c4025f8-834f-474c-87ee-59600e6ffb96_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.228 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.229 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.238 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.239 2 INFO nova.compute.claims [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.347 2 DEBUG nova.network.neutron [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Successfully updated port: bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.393 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.394 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.394 2 DEBUG nova.network.neutron [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.466 2 DEBUG nova.compute.manager [req-c47b3e18-4ef3-4415-9d92-9d3f47f264d2 req-3877ae26-0321-4878-a447-17ae271b14c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.466 2 DEBUG nova.compute.manager [req-c47b3e18-4ef3-4415-9d92-9d3f47f264d2 req-3877ae26-0321-4878-a447-17ae271b14c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing instance network info cache due to event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.467 2 DEBUG oslo_concurrency.lockutils [req-c47b3e18-4ef3-4415-9d92-9d3f47f264d2 req-3877ae26-0321-4878-a447-17ae271b14c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.478 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e311 e311: 3 total, 3 up, 3 in
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.567 2 DEBUG nova.virt.libvirt.vif [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:46:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1232591881',display_name='tempest-DeleteServersTestJSON-server-1232591881',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1232591881',id=126,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:47:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-fwmxnmnm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:47:42Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=1c4025f8-834f-474c-87ee-59600e6ffb96,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.568 2 DEBUG nova.network.os_vif_util [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.569 2 DEBUG nova.network.os_vif_util [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.569 2 DEBUG os_vif [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.572 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8001e1a0-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.572 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.575 2 INFO os_vif [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1')#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.575 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.644 2 DEBUG nova.network.neutron [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:47:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:47:50 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2675420944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.988 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:50 np0005466030 nova_compute[230518]: 2025-10-02 12:47:50.993 2 DEBUG nova.compute.provider_tree [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.025 2 DEBUG nova.scheduler.client.report [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.070 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.071 2 DEBUG nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.074 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.165 2 DEBUG nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.165 2 DEBUG nova.network.neutron [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.216 2 INFO nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.251 2 DEBUG nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.297 2 DEBUG oslo_concurrency.processutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.345 2 INFO nova.virt.block_device [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Booting with volume 83afd020-c3e2-4cb0-a15d-83739807079d at /dev/vda#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.472 2 DEBUG nova.policy [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e3cd62a3208649c183d3fc2edc1c0f18', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd3e0300f3cf5493d8a9e62e2c4a95767', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.510 2 DEBUG os_brick.utils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.512 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.521 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.522 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e19801-9c94-427b-8f83-0edba981702a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.523 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.530 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.530 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[76aee0c4-1e79-46d4-9802-0e5cc810fcc8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.532 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.539 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.540 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[f032d607-7d5d-4b02-ad26-b90c4a5c1168]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.542 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed1bd55-59ba-4f14-ab36-4791b97a9aef]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.543 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.577 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.580 2 DEBUG os_brick.initiator.connectors.lightos [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.581 2 DEBUG os_brick.initiator.connectors.lightos [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.581 2 DEBUG os_brick.initiator.connectors.lightos [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.582 2 DEBUG os_brick.utils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.582 2 DEBUG nova.virt.block_device [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating existing volume attachment record: cd97b6b4-f57d-4d9e-b3b1-f6b4e1f9df0e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:47:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:47:51 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/560336633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.748 2 DEBUG oslo_concurrency.processutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:51 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.754 2 DEBUG nova.compute.provider_tree [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:47:51 np0005466030 nova_compute[230518]: 2025-10-02 12:47:51.780 2 DEBUG nova.scheduler.client.report [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:47:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:51.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:51.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.054 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.068 2 DEBUG nova.compute.manager [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.068 2 DEBUG oslo_concurrency.lockutils [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.068 2 DEBUG oslo_concurrency.lockutils [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.069 2 DEBUG oslo_concurrency.lockutils [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.069 2 DEBUG nova.compute.manager [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] No waiting events found dispatching network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.069 2 WARNING nova.compute.manager [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received unexpected event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 for instance with vm_state resized and task_state deleting.#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.070 2 DEBUG nova.compute.manager [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.070 2 DEBUG oslo_concurrency.lockutils [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.070 2 DEBUG oslo_concurrency.lockutils [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.070 2 DEBUG oslo_concurrency.lockutils [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.071 2 DEBUG nova.compute.manager [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] No waiting events found dispatching network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.071 2 WARNING nova.compute.manager [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received unexpected event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 for instance with vm_state resized and task_state deleting.#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.221 2 INFO nova.scheduler.client.report [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Deleted allocation for migration 31253cf4-d4b6-4114-ba91-912bf75a32d5#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.345 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 7.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:47:52 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2696604845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.721 2 DEBUG nova.network.neutron [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.769 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.770 2 DEBUG nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Instance network_info: |[{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.770 2 DEBUG oslo_concurrency.lockutils [req-c47b3e18-4ef3-4415-9d92-9d3f47f264d2 req-3877ae26-0321-4878-a447-17ae271b14c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.771 2 DEBUG nova.network.neutron [req-c47b3e18-4ef3-4415-9d92-9d3f47f264d2 req-3877ae26-0321-4878-a447-17ae271b14c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.776 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Start _get_guest_xml network_info=[{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'delete_on_termination': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-aee976fe-a491-4491-adf3-8d226e48711d', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'aee976fe-a491-4491-adf3-8d226e48711d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '4b2aefbb-92cb-4a24-9ad2-884a12fa514c', 'attached_at': '', 'detached_at': '', 'volume_id': 'aee976fe-a491-4491-adf3-8d226e48711d', 'serial': 'aee976fe-a491-4491-adf3-8d226e48711d'}, 'boot_index': 0, 'attachment_id': '9d9b2116-d013-4f8a-99a9-6ebdcb06d0df', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.781 2 WARNING nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.787 2 DEBUG nova.virt.libvirt.host [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.788 2 DEBUG nova.virt.libvirt.host [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.795 2 DEBUG nova.virt.libvirt.host [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.796 2 DEBUG nova.virt.libvirt.host [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.797 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.797 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.798 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.798 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.799 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.799 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.799 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.800 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.800 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.800 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.800 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.801 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.834 2 DEBUG nova.storage.rbd_utils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] rbd image 4b2aefbb-92cb-4a24-9ad2-884a12fa514c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:52 np0005466030 nova_compute[230518]: 2025-10-02 12:47:52.839 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:47:53 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1678523138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:47:53 np0005466030 nova_compute[230518]: 2025-10-02 12:47:53.298 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:53 np0005466030 nova_compute[230518]: 2025-10-02 12:47:53.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:47:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:53.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:47:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:53.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.270 2 DEBUG nova.virt.libvirt.vif [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:47:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1530906949',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1530906949',id=130,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8AO0n4F9qQHfktAb1KqUpFZGDIBw8Q+DMA6Gtgwbe4fJSHZtT9yxONp57Pu+/JlMfK0hzt7rHvQAXjHsqixRJ8kNgVzAz0UxxllE90LKBM9NxuJLShf+JD7SBBSy6srw==',key_name='tempest-TestInstancesWithCinderVolumes-1494763419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d3e0300f3cf5493d8a9e62e2c4a95767',ramdisk_id='',reservation_id='r-9rd4q9z5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_projec
t_name='tempest-TestInstancesWithCinderVolumes-621751307',owner_user_name='tempest-TestInstancesWithCinderVolumes-621751307-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:47:42Z,user_data=None,user_id='e3cd62a3208649c183d3fc2edc1c0f18',uuid=4b2aefbb-92cb-4a24-9ad2-884a12fa514c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.270 2 DEBUG nova.network.os_vif_util [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converting VIF {"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.271 2 DEBUG nova.network.os_vif_util [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:04:35,bridge_name='br-int',has_traffic_filtering=True,id=bf58273a-e5f6-4e36-bb1e-7ca0c2462d54,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf58273a-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.272 2 DEBUG nova.objects.instance [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.531 2 DEBUG nova.network.neutron [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Successfully created port: a568d61d-6863-474f-83f4-ba38b88de19a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.827 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:47:54 np0005466030 nova_compute[230518]:  <uuid>4b2aefbb-92cb-4a24-9ad2-884a12fa514c</uuid>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:  <name>instance-00000082</name>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <nova:name>tempest-TestInstancesWithCinderVolumes-server-1530906949</nova:name>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:47:52</nova:creationTime>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:47:54 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:        <nova:user uuid="e3cd62a3208649c183d3fc2edc1c0f18">tempest-TestInstancesWithCinderVolumes-621751307-project-member</nova:user>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:        <nova:project uuid="d3e0300f3cf5493d8a9e62e2c4a95767">tempest-TestInstancesWithCinderVolumes-621751307</nova:project>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:        <nova:port uuid="bf58273a-e5f6-4e36-bb1e-7ca0c2462d54">
Oct  2 08:47:54 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <entry name="serial">4b2aefbb-92cb-4a24-9ad2-884a12fa514c</entry>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <entry name="uuid">4b2aefbb-92cb-4a24-9ad2-884a12fa514c</entry>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/4b2aefbb-92cb-4a24-9ad2-884a12fa514c_disk.config">
Oct  2 08:47:54 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:47:54 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="volumes/volume-aee976fe-a491-4491-adf3-8d226e48711d">
Oct  2 08:47:54 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:47:54 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <serial>aee976fe-a491-4491-adf3-8d226e48711d</serial>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:41:04:35"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <target dev="tapbf58273a-e5"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c/console.log" append="off"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:47:54 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:47:54 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:47:54 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:47:54 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.827 2 DEBUG nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Preparing to wait for external event network-vif-plugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.827 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.828 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.828 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.829 2 DEBUG nova.virt.libvirt.vif [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:47:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1530906949',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1530906949',id=130,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8AO0n4F9qQHfktAb1KqUpFZGDIBw8Q+DMA6Gtgwbe4fJSHZtT9yxONp57Pu+/JlMfK0hzt7rHvQAXjHsqixRJ8kNgVzAz0UxxllE90LKBM9NxuJLShf+JD7SBBSy6srw==',key_name='tempest-TestInstancesWithCinderVolumes-1494763419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d3e0300f3cf5493d8a9e62e2c4a95767',ramdisk_id='',reservation_id='r-9rd4q9z5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',ow
ner_project_name='tempest-TestInstancesWithCinderVolumes-621751307',owner_user_name='tempest-TestInstancesWithCinderVolumes-621751307-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:47:42Z,user_data=None,user_id='e3cd62a3208649c183d3fc2edc1c0f18',uuid=4b2aefbb-92cb-4a24-9ad2-884a12fa514c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.829 2 DEBUG nova.network.os_vif_util [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converting VIF {"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.829 2 DEBUG nova.network.os_vif_util [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:04:35,bridge_name='br-int',has_traffic_filtering=True,id=bf58273a-e5f6-4e36-bb1e-7ca0c2462d54,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf58273a-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.830 2 DEBUG os_vif [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:04:35,bridge_name='br-int',has_traffic_filtering=True,id=bf58273a-e5f6-4e36-bb1e-7ca0c2462d54,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf58273a-e5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.831 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.831 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.833 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf58273a-e5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.834 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf58273a-e5, col_values=(('external_ids', {'iface-id': 'bf58273a-e5f6-4e36-bb1e-7ca0c2462d54', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:04:35', 'vm-uuid': '4b2aefbb-92cb-4a24-9ad2-884a12fa514c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:54 np0005466030 NetworkManager[44960]: <info>  [1759409274.8380] manager: (tapbf58273a-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.843 2 INFO os_vif [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:04:35,bridge_name='br-int',has_traffic_filtering=True,id=bf58273a-e5f6-4e36-bb1e-7ca0c2462d54,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf58273a-e5')#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.985 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.986 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.986 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No VIF found with MAC fa:16:3e:41:04:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:47:54 np0005466030 nova_compute[230518]: 2025-10-02 12:47:54.986 2 INFO nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Using config drive#033[00m
Oct  2 08:47:55 np0005466030 nova_compute[230518]: 2025-10-02 12:47:55.017 2 DEBUG nova.storage.rbd_utils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] rbd image 4b2aefbb-92cb-4a24-9ad2-884a12fa514c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:55 np0005466030 nova_compute[230518]: 2025-10-02 12:47:55.024 2 DEBUG nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:47:55 np0005466030 nova_compute[230518]: 2025-10-02 12:47:55.025 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:47:55 np0005466030 nova_compute[230518]: 2025-10-02 12:47:55.026 2 INFO nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Creating image(s)#033[00m
Oct  2 08:47:55 np0005466030 nova_compute[230518]: 2025-10-02 12:47:55.026 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:47:55 np0005466030 nova_compute[230518]: 2025-10-02 12:47:55.026 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Ensure instance console log exists: /var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:47:55 np0005466030 nova_compute[230518]: 2025-10-02 12:47:55.027 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:55 np0005466030 nova_compute[230518]: 2025-10-02 12:47:55.027 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:55 np0005466030 nova_compute[230518]: 2025-10-02 12:47:55.027 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:55.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:56.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:56 np0005466030 nova_compute[230518]: 2025-10-02 12:47:56.084 2 INFO nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Creating config drive at /var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c/disk.config#033[00m
Oct  2 08:47:56 np0005466030 nova_compute[230518]: 2025-10-02 12:47:56.091 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppf99kfo4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:56 np0005466030 nova_compute[230518]: 2025-10-02 12:47:56.222 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppf99kfo4" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:56 np0005466030 nova_compute[230518]: 2025-10-02 12:47:56.354 2 DEBUG nova.storage.rbd_utils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] rbd image 4b2aefbb-92cb-4a24-9ad2-884a12fa514c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:47:56 np0005466030 nova_compute[230518]: 2025-10-02 12:47:56.358 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c/disk.config 4b2aefbb-92cb-4a24-9ad2-884a12fa514c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:47:56 np0005466030 nova_compute[230518]: 2025-10-02 12:47:56.680 2 DEBUG nova.network.neutron [req-c47b3e18-4ef3-4415-9d92-9d3f47f264d2 req-3877ae26-0321-4878-a447-17ae271b14c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updated VIF entry in instance network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:47:56 np0005466030 nova_compute[230518]: 2025-10-02 12:47:56.681 2 DEBUG nova.network.neutron [req-c47b3e18-4ef3-4415-9d92-9d3f47f264d2 req-3877ae26-0321-4878-a447-17ae271b14c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:47:56 np0005466030 nova_compute[230518]: 2025-10-02 12:47:56.731 2 DEBUG oslo_concurrency.lockutils [req-c47b3e18-4ef3-4415-9d92-9d3f47f264d2 req-3877ae26-0321-4878-a447-17ae271b14c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:47:56 np0005466030 nova_compute[230518]: 2025-10-02 12:47:56.974 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c/disk.config 4b2aefbb-92cb-4a24-9ad2-884a12fa514c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:47:56 np0005466030 nova_compute[230518]: 2025-10-02 12:47:56.975 2 INFO nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Deleting local config drive /var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c/disk.config because it was imported into RBD.#033[00m
Oct  2 08:47:57 np0005466030 kernel: tapbf58273a-e5: entered promiscuous mode
Oct  2 08:47:57 np0005466030 NetworkManager[44960]: <info>  [1759409277.0498] manager: (tapbf58273a-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/249)
Oct  2 08:47:57 np0005466030 nova_compute[230518]: 2025-10-02 12:47:57.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:57 np0005466030 ovn_controller[129257]: 2025-10-02T12:47:57Z|00535|binding|INFO|Claiming lport bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 for this chassis.
Oct  2 08:47:57 np0005466030 ovn_controller[129257]: 2025-10-02T12:47:57Z|00536|binding|INFO|bf58273a-e5f6-4e36-bb1e-7ca0c2462d54: Claiming fa:16:3e:41:04:35 10.100.0.9
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.065 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:04:35 10.100.0.9'], port_security=['fa:16:3e:41:04:35 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4b2aefbb-92cb-4a24-9ad2-884a12fa514c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd3e0300f3cf5493d8a9e62e2c4a95767', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b476c8f5-f8e9-416f-ac80-6e4f069ebf34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19951b97-567d-403e-9a99-3dd9660c4a7b, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bf58273a-e5f6-4e36-bb1e-7ca0c2462d54) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.066 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 in datapath aa3b4df3-6044-4a53-8039-c9a5c05725aa bound to our chassis#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.068 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa3b4df3-6044-4a53-8039-c9a5c05725aa#033[00m
Oct  2 08:47:57 np0005466030 systemd-machined[188247]: New machine qemu-63-instance-00000082.
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.090 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[41701a26-afa5-46bb-983f-f3f9370a3c0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.091 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa3b4df3-61 in ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.093 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa3b4df3-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.093 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[523b2c98-233f-4b3d-ab41-4817a75e7ac7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.095 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce5ade1-87cd-4dca-b76d-5d1ba425462d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.107 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[51011997-f717-48b9-bbce-1d58b7d0cffb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:57 np0005466030 systemd[1]: Started Virtual Machine qemu-63-instance-00000082.
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.121 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a2de02f7-fd6e-4e75-ab9a-7631b5cd5c8b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:57 np0005466030 systemd-udevd[281467]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:47:57 np0005466030 podman[281415]: 2025-10-02 12:47:57.144409188 +0000 UTC m=+0.116230326 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  2 08:47:57 np0005466030 NetworkManager[44960]: <info>  [1759409277.1550] device (tapbf58273a-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:47:57 np0005466030 NetworkManager[44960]: <info>  [1759409277.1569] device (tapbf58273a-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:47:57 np0005466030 ovn_controller[129257]: 2025-10-02T12:47:57Z|00537|binding|INFO|Setting lport bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 ovn-installed in OVS
Oct  2 08:47:57 np0005466030 ovn_controller[129257]: 2025-10-02T12:47:57Z|00538|binding|INFO|Setting lport bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 up in Southbound
Oct  2 08:47:57 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:47:57 np0005466030 nova_compute[230518]: 2025-10-02 12:47:57.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:57 np0005466030 podman[281420]: 2025-10-02 12:47:57.196588429 +0000 UTC m=+0.169627355 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.200 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ae5f377e-e81b-48a0-ba3c-9902e3f63dce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.205 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e77ded6f-5498-42d5-bc18-6ed727cbd199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:57 np0005466030 NetworkManager[44960]: <info>  [1759409277.2067] manager: (tapaa3b4df3-60): new Veth device (/org/freedesktop/NetworkManager/Devices/250)
Oct  2 08:47:57 np0005466030 systemd-udevd[281472]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.240 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f00bfc00-2984-40e0-ae26-fa6491173ec1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.242 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0fde2cf3-8568-420c-93f3-9836daf83f53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:57 np0005466030 NetworkManager[44960]: <info>  [1759409277.2636] device (tapaa3b4df3-60): carrier: link connected
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.269 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a88ca0dd-08e9-44c1-9a95-706aad4933f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e312 e312: 3 total, 3 up, 3 in
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.284 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[45adc37e-d753-4869-b6dc-44e97ac7eaab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa3b4df3-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:81:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714082, 'reachable_time': 20272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281500, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.298 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[db3cb08d-80a7-43a7-9ab7-1ca354c5197b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:817f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714082, 'tstamp': 714082}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281501, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.313 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[052d8370-de65-42c1-bdbf-a4b6183ebb22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa3b4df3-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:81:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714082, 'reachable_time': 20272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281502, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.350 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[66a0cd25-330e-43b6-b7d3-05f727d60260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.423 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[451c080d-b367-4c3e-9a4c-54fe786e916c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.424 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa3b4df3-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.424 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.425 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa3b4df3-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:57 np0005466030 nova_compute[230518]: 2025-10-02 12:47:57.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:57 np0005466030 NetworkManager[44960]: <info>  [1759409277.4276] manager: (tapaa3b4df3-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Oct  2 08:47:57 np0005466030 kernel: tapaa3b4df3-60: entered promiscuous mode
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.433 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa3b4df3-60, col_values=(('external_ids', {'iface-id': 'fb7cdb79-68cf-4ad8-80ea-cb25da88eb6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:47:57 np0005466030 ovn_controller[129257]: 2025-10-02T12:47:57Z|00539|binding|INFO|Releasing lport fb7cdb79-68cf-4ad8-80ea-cb25da88eb6c from this chassis (sb_readonly=0)
Oct  2 08:47:57 np0005466030 nova_compute[230518]: 2025-10-02 12:47:57.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.437 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa3b4df3-6044-4a53-8039-c9a5c05725aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa3b4df3-6044-4a53-8039-c9a5c05725aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.438 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[79c48e7a-9db9-43f9-a852-39b43695ff0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.439 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-aa3b4df3-6044-4a53-8039-c9a5c05725aa
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/aa3b4df3-6044-4a53-8039-c9a5c05725aa.pid.haproxy
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID aa3b4df3-6044-4a53-8039-c9a5c05725aa
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:47:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.440 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'env', 'PROCESS_TAG=haproxy-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa3b4df3-6044-4a53-8039-c9a5c05725aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:47:57 np0005466030 nova_compute[230518]: 2025-10-02 12:47:57.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:47:57 np0005466030 nova_compute[230518]: 2025-10-02 12:47:57.476 2 DEBUG nova.network.neutron [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Successfully updated port: a568d61d-6863-474f-83f4-ba38b88de19a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:47:57 np0005466030 nova_compute[230518]: 2025-10-02 12:47:57.559 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:47:57 np0005466030 nova_compute[230518]: 2025-10-02 12:47:57.559 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquired lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:47:57 np0005466030 nova_compute[230518]: 2025-10-02 12:47:57.559 2 DEBUG nova.network.neutron [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:47:57 np0005466030 podman[281576]: 2025-10-02 12:47:57.803199994 +0000 UTC m=+0.052926576 container create b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:47:57 np0005466030 systemd[1]: Started libpod-conmon-b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9.scope.
Oct  2 08:47:57 np0005466030 podman[281576]: 2025-10-02 12:47:57.772660614 +0000 UTC m=+0.022387176 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:47:57 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:47:57 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c561c2b937b63684f4fa80d6a77b8b706c94b8f7ce5c4ada629e45417aa2f7f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:47:57 np0005466030 podman[281576]: 2025-10-02 12:47:57.891123318 +0000 UTC m=+0.140849880 container init b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:47:57 np0005466030 podman[281576]: 2025-10-02 12:47:57.896155416 +0000 UTC m=+0.145881958 container start b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:47:57 np0005466030 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[281591]: [NOTICE]   (281595) : New worker (281597) forked
Oct  2 08:47:57 np0005466030 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[281591]: [NOTICE]   (281595) : Loading success.
Oct  2 08:47:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:57.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:47:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:47:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:58.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.166 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409278.1663046, 4b2aefbb-92cb-4a24-9ad2-884a12fa514c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.167 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] VM Started (Lifecycle Event)#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.235 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.240 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409278.166569, 4b2aefbb-92cb-4a24-9ad2-884a12fa514c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.241 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.277 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.281 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.302 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.467 2 DEBUG nova.network.neutron [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.558 2 DEBUG nova.compute.manager [req-13e88cb5-bfda-4f81-b8c0-c622e8c2e4c2 req-5db2dcfc-9815-49ea-a01f-05694f76b253 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-vif-plugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.558 2 DEBUG oslo_concurrency.lockutils [req-13e88cb5-bfda-4f81-b8c0-c622e8c2e4c2 req-5db2dcfc-9815-49ea-a01f-05694f76b253 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.559 2 DEBUG oslo_concurrency.lockutils [req-13e88cb5-bfda-4f81-b8c0-c622e8c2e4c2 req-5db2dcfc-9815-49ea-a01f-05694f76b253 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.559 2 DEBUG oslo_concurrency.lockutils [req-13e88cb5-bfda-4f81-b8c0-c622e8c2e4c2 req-5db2dcfc-9815-49ea-a01f-05694f76b253 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.559 2 DEBUG nova.compute.manager [req-13e88cb5-bfda-4f81-b8c0-c622e8c2e4c2 req-5db2dcfc-9815-49ea-a01f-05694f76b253 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Processing event network-vif-plugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.560 2 DEBUG nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.569 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409278.5633876, 4b2aefbb-92cb-4a24-9ad2-884a12fa514c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.570 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.573 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.576 2 INFO nova.virt.libvirt.driver [-] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Instance spawned successfully.#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.576 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.594 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.597 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.622 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.622 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.623 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.623 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.623 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.624 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.628 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.702 2 INFO nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Took 11.70 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.702 2 DEBUG nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:47:58 np0005466030 nova_compute[230518]: 2025-10-02 12:47:58.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:59 np0005466030 nova_compute[230518]: 2025-10-02 12:47:59.147 2 INFO nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Took 18.97 seconds to build instance.#033[00m
Oct  2 08:47:59 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct  2 08:47:59 np0005466030 nova_compute[230518]: 2025-10-02 12:47:59.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:47:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:47:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:47:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:59.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:00.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:00 np0005466030 nova_compute[230518]: 2025-10-02 12:48:00.849 2 DEBUG nova.network.neutron [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.510 2 DEBUG nova.compute.manager [req-4f91877e-30f7-41c5-983f-a5beeb79c7d2 req-6867a648-8cfa-4be9-8a31-fea607973ca0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.510 2 DEBUG nova.compute.manager [req-4f91877e-30f7-41c5-983f-a5beeb79c7d2 req-6867a648-8cfa-4be9-8a31-fea607973ca0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing instance network info cache due to event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.510 2 DEBUG oslo_concurrency.lockutils [req-4f91877e-30f7-41c5-983f-a5beeb79c7d2 req-6867a648-8cfa-4be9-8a31-fea607973ca0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.636 2 DEBUG nova.compute.manager [req-88beff1d-2969-4c03-b10a-2f410935ab79 req-c0e9d660-3d04-4922-90a0-7b8de521b7d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-vif-plugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.636 2 DEBUG oslo_concurrency.lockutils [req-88beff1d-2969-4c03-b10a-2f410935ab79 req-c0e9d660-3d04-4922-90a0-7b8de521b7d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.636 2 DEBUG oslo_concurrency.lockutils [req-88beff1d-2969-4c03-b10a-2f410935ab79 req-c0e9d660-3d04-4922-90a0-7b8de521b7d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.637 2 DEBUG oslo_concurrency.lockutils [req-88beff1d-2969-4c03-b10a-2f410935ab79 req-c0e9d660-3d04-4922-90a0-7b8de521b7d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.637 2 DEBUG nova.compute.manager [req-88beff1d-2969-4c03-b10a-2f410935ab79 req-c0e9d660-3d04-4922-90a0-7b8de521b7d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] No waiting events found dispatching network-vif-plugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.637 2 WARNING nova.compute.manager [req-88beff1d-2969-4c03-b10a-2f410935ab79 req-c0e9d660-3d04-4922-90a0-7b8de521b7d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received unexpected event network-vif-plugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.700 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 26.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.754 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Releasing lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.755 2 DEBUG nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Instance network_info: |[{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.755 2 DEBUG oslo_concurrency.lockutils [req-4f91877e-30f7-41c5-983f-a5beeb79c7d2 req-6867a648-8cfa-4be9-8a31-fea607973ca0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.756 2 DEBUG nova.network.neutron [req-4f91877e-30f7-41c5-983f-a5beeb79c7d2 req-6867a648-8cfa-4be9-8a31-fea607973ca0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.760 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Start _get_guest_xml network_info=[{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'delete_on_termination': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-83afd020-c3e2-4cb0-a15d-83739807079d', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '83afd020-c3e2-4cb0-a15d-83739807079d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '3b348c58-f179-41db-bd79-1fdea0ade389', 'attached_at': '', 'detached_at': '', 'volume_id': '83afd020-c3e2-4cb0-a15d-83739807079d', 'serial': '83afd020-c3e2-4cb0-a15d-83739807079d'}, 'boot_index': 0, 'attachment_id': 'cd97b6b4-f57d-4d9e-b3b1-f6b4e1f9df0e', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.765 2 WARNING nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.773 2 DEBUG nova.virt.libvirt.host [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.774 2 DEBUG nova.virt.libvirt.host [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.779 2 DEBUG nova.virt.libvirt.host [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.780 2 DEBUG nova.virt.libvirt.host [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.782 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.783 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.784 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.785 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.785 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.786 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.786 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.787 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.788 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.789 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.789 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.790 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.826 2 DEBUG nova.storage.rbd_utils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] rbd image 3b348c58-f179-41db-bd79-1fdea0ade389_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:01 np0005466030 nova_compute[230518]: 2025-10-02 12:48:01.831 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:01.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:02.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:48:02 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3094106259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.278 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.493 2 DEBUG nova.virt.libvirt.vif [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:47:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1396980789',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1396980789',id=132,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8AO0n4F9qQHfktAb1KqUpFZGDIBw8Q+DMA6Gtgwbe4fJSHZtT9yxONp57Pu+/JlMfK0hzt7rHvQAXjHsqixRJ8kNgVzAz0UxxllE90LKBM9NxuJLShf+JD7SBBSy6srw==',key_name='tempest-TestInstancesWithCinderVolumes-1494763419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d3e0300f3cf5493d8a9e62e2c4a95767',ramdisk_id='',reservation_id='r-g7qut09a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-621751307',owner_user_name='tempest-TestInstancesWithCinderVolumes-621751307-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:47:51Z,user_data=None,user_id='e3cd62a3208649c183d3fc2edc1c0f18',uuid=3b348c58-f179-41db-bd79-1fdea0ade389,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.494 2 DEBUG nova.network.os_vif_util [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converting VIF {"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.495 2 DEBUG nova.network.os_vif_util [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:2f:46,bridge_name='br-int',has_traffic_filtering=True,id=a568d61d-6863-474f-83f4-ba38b88de19a,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa568d61d-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.496 2 DEBUG nova.objects.instance [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3b348c58-f179-41db-bd79-1fdea0ade389 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.638 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:48:02 np0005466030 nova_compute[230518]:  <uuid>3b348c58-f179-41db-bd79-1fdea0ade389</uuid>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:  <name>instance-00000084</name>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <nova:name>tempest-TestInstancesWithCinderVolumes-server-1396980789</nova:name>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:48:01</nova:creationTime>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:48:02 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:        <nova:user uuid="e3cd62a3208649c183d3fc2edc1c0f18">tempest-TestInstancesWithCinderVolumes-621751307-project-member</nova:user>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:        <nova:project uuid="d3e0300f3cf5493d8a9e62e2c4a95767">tempest-TestInstancesWithCinderVolumes-621751307</nova:project>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:        <nova:port uuid="a568d61d-6863-474f-83f4-ba38b88de19a">
Oct  2 08:48:02 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <entry name="serial">3b348c58-f179-41db-bd79-1fdea0ade389</entry>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <entry name="uuid">3b348c58-f179-41db-bd79-1fdea0ade389</entry>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/3b348c58-f179-41db-bd79-1fdea0ade389_disk.config">
Oct  2 08:48:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:48:02 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="volumes/volume-83afd020-c3e2-4cb0-a15d-83739807079d">
Oct  2 08:48:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:48:02 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <serial>83afd020-c3e2-4cb0-a15d-83739807079d</serial>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:fa:2f:46"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <target dev="tapa568d61d-68"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389/console.log" append="off"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:48:02 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:48:02 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:48:02 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:48:02 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.639 2 DEBUG nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Preparing to wait for external event network-vif-plugged-a568d61d-6863-474f-83f4-ba38b88de19a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.639 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.639 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.640 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.640 2 DEBUG nova.virt.libvirt.vif [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:47:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1396980789',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1396980789',id=132,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8AO0n4F9qQHfktAb1KqUpFZGDIBw8Q+DMA6Gtgwbe4fJSHZtT9yxONp57Pu+/JlMfK0hzt7rHvQAXjHsqixRJ8kNgVzAz0UxxllE90LKBM9NxuJLShf+JD7SBBSy6srw==',key_name='tempest-TestInstancesWithCinderVolumes-1494763419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d3e0300f3cf5493d8a9e62e2c4a95767',ramdisk_id='',reservation_id='r-g7qut09a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-621751307',owner_user_name='tempest-TestInstancesWithCinderVolumes-621751307-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:47:51Z,user_data=None,user_id='e3cd62a3208649c183d3fc2edc1c0f18',uuid=3b348c58-f179-41db-bd79-1fdea0ade389,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.641 2 DEBUG nova.network.os_vif_util [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converting VIF {"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.641 2 DEBUG nova.network.os_vif_util [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:2f:46,bridge_name='br-int',has_traffic_filtering=True,id=a568d61d-6863-474f-83f4-ba38b88de19a,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa568d61d-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.642 2 DEBUG os_vif [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:2f:46,bridge_name='br-int',has_traffic_filtering=True,id=a568d61d-6863-474f-83f4-ba38b88de19a,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa568d61d-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.645 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa568d61d-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa568d61d-68, col_values=(('external_ids', {'iface-id': 'a568d61d-6863-474f-83f4-ba38b88de19a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fa:2f:46', 'vm-uuid': '3b348c58-f179-41db-bd79-1fdea0ade389'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:02 np0005466030 NetworkManager[44960]: <info>  [1759409282.6486] manager: (tapa568d61d-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:02 np0005466030 nova_compute[230518]: 2025-10-02 12:48:02.655 2 INFO os_vif [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:2f:46,bridge_name='br-int',has_traffic_filtering=True,id=a568d61d-6863-474f-83f4-ba38b88de19a,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa568d61d-68')#033[00m
Oct  2 08:48:03 np0005466030 nova_compute[230518]: 2025-10-02 12:48:03.010 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:48:03 np0005466030 nova_compute[230518]: 2025-10-02 12:48:03.010 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:48:03 np0005466030 nova_compute[230518]: 2025-10-02 12:48:03.011 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No VIF found with MAC fa:16:3e:fa:2f:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:48:03 np0005466030 nova_compute[230518]: 2025-10-02 12:48:03.011 2 INFO nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Using config drive#033[00m
Oct  2 08:48:03 np0005466030 nova_compute[230518]: 2025-10-02 12:48:03.031 2 DEBUG nova.storage.rbd_utils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] rbd image 3b348c58-f179-41db-bd79-1fdea0ade389_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:03 np0005466030 nova_compute[230518]: 2025-10-02 12:48:03.418 2 DEBUG oslo_concurrency.lockutils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:03 np0005466030 nova_compute[230518]: 2025-10-02 12:48:03.419 2 DEBUG oslo_concurrency.lockutils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:03 np0005466030 nova_compute[230518]: 2025-10-02 12:48:03.578 2 DEBUG nova.objects.instance [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:03 np0005466030 nova_compute[230518]: 2025-10-02 12:48:03.609 2 INFO nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Creating config drive at /var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389/disk.config#033[00m
Oct  2 08:48:03 np0005466030 nova_compute[230518]: 2025-10-02 12:48:03.614 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmperbori22 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:03 np0005466030 nova_compute[230518]: 2025-10-02 12:48:03.644 2 DEBUG nova.network.neutron [req-4f91877e-30f7-41c5-983f-a5beeb79c7d2 req-6867a648-8cfa-4be9-8a31-fea607973ca0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updated VIF entry in instance network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:48:03 np0005466030 nova_compute[230518]: 2025-10-02 12:48:03.645 2 DEBUG nova.network.neutron [req-4f91877e-30f7-41c5-983f-a5beeb79c7d2 req-6867a648-8cfa-4be9-8a31-fea607973ca0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:48:03 np0005466030 nova_compute[230518]: 2025-10-02 12:48:03.748 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmperbori22" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:03 np0005466030 nova_compute[230518]: 2025-10-02 12:48:03.773 2 DEBUG nova.storage.rbd_utils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] rbd image 3b348c58-f179-41db-bd79-1fdea0ade389_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:48:03 np0005466030 nova_compute[230518]: 2025-10-02 12:48:03.777 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389/disk.config 3b348c58-f179-41db-bd79-1fdea0ade389_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:03 np0005466030 nova_compute[230518]: 2025-10-02 12:48:03.809 2 DEBUG oslo_concurrency.lockutils [req-4f91877e-30f7-41c5-983f-a5beeb79c7d2 req-6867a648-8cfa-4be9-8a31-fea607973ca0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:48:03 np0005466030 nova_compute[230518]: 2025-10-02 12:48:03.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:03 np0005466030 nova_compute[230518]: 2025-10-02 12:48:03.922 2 DEBUG oslo_concurrency.lockutils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:03.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:04.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:04 np0005466030 nova_compute[230518]: 2025-10-02 12:48:04.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:04 np0005466030 nova_compute[230518]: 2025-10-02 12:48:04.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:04 np0005466030 nova_compute[230518]: 2025-10-02 12:48:04.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:04 np0005466030 nova_compute[230518]: 2025-10-02 12:48:04.138 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:04 np0005466030 nova_compute[230518]: 2025-10-02 12:48:04.139 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:04 np0005466030 nova_compute[230518]: 2025-10-02 12:48:04.139 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:04 np0005466030 nova_compute[230518]: 2025-10-02 12:48:04.140 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:48:04 np0005466030 nova_compute[230518]: 2025-10-02 12:48:04.140 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:04 np0005466030 nova_compute[230518]: 2025-10-02 12:48:04.406 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389/disk.config 3b348c58-f179-41db-bd79-1fdea0ade389_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:04 np0005466030 nova_compute[230518]: 2025-10-02 12:48:04.407 2 INFO nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Deleting local config drive /var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389/disk.config because it was imported into RBD.#033[00m
Oct  2 08:48:04 np0005466030 kernel: tapa568d61d-68: entered promiscuous mode
Oct  2 08:48:04 np0005466030 NetworkManager[44960]: <info>  [1759409284.4747] manager: (tapa568d61d-68): new Tun device (/org/freedesktop/NetworkManager/Devices/253)
Oct  2 08:48:04 np0005466030 ovn_controller[129257]: 2025-10-02T12:48:04Z|00540|binding|INFO|Claiming lport a568d61d-6863-474f-83f4-ba38b88de19a for this chassis.
Oct  2 08:48:04 np0005466030 ovn_controller[129257]: 2025-10-02T12:48:04Z|00541|binding|INFO|a568d61d-6863-474f-83f4-ba38b88de19a: Claiming fa:16:3e:fa:2f:46 10.100.0.7
Oct  2 08:48:04 np0005466030 nova_compute[230518]: 2025-10-02 12:48:04.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:04 np0005466030 ovn_controller[129257]: 2025-10-02T12:48:04Z|00542|binding|INFO|Setting lport a568d61d-6863-474f-83f4-ba38b88de19a ovn-installed in OVS
Oct  2 08:48:04 np0005466030 nova_compute[230518]: 2025-10-02 12:48:04.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:04 np0005466030 nova_compute[230518]: 2025-10-02 12:48:04.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:04 np0005466030 systemd-machined[188247]: New machine qemu-64-instance-00000084.
Oct  2 08:48:04 np0005466030 systemd-udevd[281740]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:48:04 np0005466030 systemd[1]: Started Virtual Machine qemu-64-instance-00000084.
Oct  2 08:48:04 np0005466030 NetworkManager[44960]: <info>  [1759409284.5416] device (tapa568d61d-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:48:04 np0005466030 NetworkManager[44960]: <info>  [1759409284.5429] device (tapa568d61d-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:48:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:48:04 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1078617232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:48:04 np0005466030 nova_compute[230518]: 2025-10-02 12:48:04.644 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:48:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1159840049' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:48:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:48:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1159840049' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:48:05 np0005466030 nova_compute[230518]: 2025-10-02 12:48:05.712 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409285.7104182, 3b348c58-f179-41db-bd79-1fdea0ade389 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:48:05 np0005466030 nova_compute[230518]: 2025-10-02 12:48:05.714 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] VM Started (Lifecycle Event)#033[00m
Oct  2 08:48:05 np0005466030 ovn_controller[129257]: 2025-10-02T12:48:05Z|00543|binding|INFO|Setting lport a568d61d-6863-474f-83f4-ba38b88de19a up in Southbound
Oct  2 08:48:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:05.957 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:2f:46 10.100.0.7'], port_security=['fa:16:3e:fa:2f:46 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3b348c58-f179-41db-bd79-1fdea0ade389', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd3e0300f3cf5493d8a9e62e2c4a95767', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b476c8f5-f8e9-416f-ac80-6e4f069ebf34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19951b97-567d-403e-9a99-3dd9660c4a7b, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a568d61d-6863-474f-83f4-ba38b88de19a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:48:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:05.960 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a568d61d-6863-474f-83f4-ba38b88de19a in datapath aa3b4df3-6044-4a53-8039-c9a5c05725aa bound to our chassis#033[00m
Oct  2 08:48:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:05.962 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa3b4df3-6044-4a53-8039-c9a5c05725aa#033[00m
Oct  2 08:48:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:05.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:05.981 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[65c28e84-4d5a-4d1f-b921-ebfc0dab71a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:06.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.027 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[cca725d3-6c44-4e22-ba4c-8d76acec1bac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.032 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[099b7d1b-f131-43bc-97d6-d7ec89076845]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.078 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b89a3c0a-8e1d-4dd2-9f4d-e15b9208afd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.116 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[91d0320b-3cf0-4489-935d-cbcd14feecd5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa3b4df3-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:81:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714082, 'reachable_time': 20272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281798, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.142 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8297ca-4cb7-43b3-a351-a19128110113]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaa3b4df3-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714093, 'tstamp': 714093}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281799, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaa3b4df3-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714097, 'tstamp': 714097}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281799, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.145 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa3b4df3-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.149 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa3b4df3-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.150 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.150 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.150 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa3b4df3-60, col_values=(('external_ids', {'iface-id': 'fb7cdb79-68cf-4ad8-80ea-cb25da88eb6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.151 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.156 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409285.7107584, 3b348c58-f179-41db-bd79-1fdea0ade389 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.157 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.238 2 DEBUG oslo_concurrency.lockutils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.238 2 DEBUG oslo_concurrency.lockutils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.239 2 INFO nova.compute.manager [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Attaching volume 59930c46-79e6-4eb5-b8a0-3382452117c0 to /dev/vdb#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.277 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.283 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.283 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.285 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.290 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.290 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.472 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.474 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4260MB free_disk=20.83069610595703GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.474 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.474 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.481 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.662 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.662 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3b348c58-f179-41db-bd79-1fdea0ade389 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.663 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.663 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.825 2 DEBUG os_brick.utils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.826 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.838 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.838 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.839 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[4faf7f24-d900-4007-8924-cef2f77c08de]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.873 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.884 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.884 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[7a737271-911e-4d74-8825-47876b533e32]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.886 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.900 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.901 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[d6433885-85f9-4c43-a902-c3575ef9bd02]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.903 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[86e6de3c-eb3a-41bd-8f66-8dec9855102b]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.903 2 DEBUG oslo_concurrency.processutils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.936 2 DEBUG oslo_concurrency.processutils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.939 2 DEBUG os_brick.initiator.connectors.lightos [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.940 2 DEBUG os_brick.initiator.connectors.lightos [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.940 2 DEBUG os_brick.initiator.connectors.lightos [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.941 2 DEBUG os_brick.utils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] <== get_connector_properties: return (114ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:48:06 np0005466030 nova_compute[230518]: 2025-10-02 12:48:06.941 2 DEBUG nova.virt.block_device [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating existing volume attachment record: 04cb9bff-78f6-41d7-bf08-73f086d3a288 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.105 2 DEBUG nova.compute.manager [req-b331eeac-df62-4b64-93b3-f696d5e131a8 req-5880c27d-c755-4ad8-b3b0-b0220e8d2235 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-vif-plugged-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.105 2 DEBUG oslo_concurrency.lockutils [req-b331eeac-df62-4b64-93b3-f696d5e131a8 req-5880c27d-c755-4ad8-b3b0-b0220e8d2235 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.105 2 DEBUG oslo_concurrency.lockutils [req-b331eeac-df62-4b64-93b3-f696d5e131a8 req-5880c27d-c755-4ad8-b3b0-b0220e8d2235 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.106 2 DEBUG oslo_concurrency.lockutils [req-b331eeac-df62-4b64-93b3-f696d5e131a8 req-5880c27d-c755-4ad8-b3b0-b0220e8d2235 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.106 2 DEBUG nova.compute.manager [req-b331eeac-df62-4b64-93b3-f696d5e131a8 req-5880c27d-c755-4ad8-b3b0-b0220e8d2235 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Processing event network-vif-plugged-a568d61d-6863-474f-83f4-ba38b88de19a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.107 2 DEBUG nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.110 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409287.1100595, 3b348c58-f179-41db-bd79-1fdea0ade389 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.110 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.112 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.114 2 INFO nova.virt.libvirt.driver [-] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Instance spawned successfully.#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.115 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:48:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:48:07 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1625439027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.259 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.265 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.267 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.268 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.268 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.268 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.269 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.270 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.283 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.289 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.358 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.360 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.408 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.408 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.420 2 INFO nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Took 12.40 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.421 2 DEBUG nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:48:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.527 2 INFO nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Took 17.33 seconds to build instance.#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.721 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.963 2 DEBUG nova.objects.instance [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:07.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:07 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.997 2 DEBUG nova.virt.libvirt.driver [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Attempting to attach volume 59930c46-79e6-4eb5-b8a0-3382452117c0 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 08:48:08 np0005466030 nova_compute[230518]: 2025-10-02 12:48:07.999 2 DEBUG nova.virt.libvirt.guest [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 08:48:08 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:48:08 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-59930c46-79e6-4eb5-b8a0-3382452117c0">
Oct  2 08:48:08 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:48:08 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:48:08 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:48:08 np0005466030 nova_compute[230518]:  </source>
Oct  2 08:48:08 np0005466030 nova_compute[230518]:  <auth username="openstack">
Oct  2 08:48:08 np0005466030 nova_compute[230518]:    <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:48:08 np0005466030 nova_compute[230518]:  </auth>
Oct  2 08:48:08 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:48:08 np0005466030 nova_compute[230518]:  <serial>59930c46-79e6-4eb5-b8a0-3382452117c0</serial>
Oct  2 08:48:08 np0005466030 nova_compute[230518]: </disk>
Oct  2 08:48:08 np0005466030 nova_compute[230518]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:48:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:08.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:08 np0005466030 nova_compute[230518]: 2025-10-02 12:48:08.180 2 DEBUG nova.virt.libvirt.driver [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:48:08 np0005466030 nova_compute[230518]: 2025-10-02 12:48:08.180 2 DEBUG nova.virt.libvirt.driver [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:48:08 np0005466030 nova_compute[230518]: 2025-10-02 12:48:08.180 2 DEBUG nova.virt.libvirt.driver [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:48:08 np0005466030 nova_compute[230518]: 2025-10-02 12:48:08.181 2 DEBUG nova.virt.libvirt.driver [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No VIF found with MAC fa:16:3e:41:04:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:48:08 np0005466030 nova_compute[230518]: 2025-10-02 12:48:08.408 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:08 np0005466030 nova_compute[230518]: 2025-10-02 12:48:08.409 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:08 np0005466030 nova_compute[230518]: 2025-10-02 12:48:08.409 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:48:08 np0005466030 nova_compute[230518]: 2025-10-02 12:48:08.539 2 DEBUG oslo_concurrency.lockutils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:08 np0005466030 nova_compute[230518]: 2025-10-02 12:48:08.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:09 np0005466030 nova_compute[230518]: 2025-10-02 12:48:09.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:09 np0005466030 nova_compute[230518]: 2025-10-02 12:48:09.350 2 DEBUG nova.compute.manager [req-bc0b836c-9001-4051-8d49-b4c0571f081c req-e42e5ac9-c4f8-4378-aa16-1106dddeb35b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-vif-plugged-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:09 np0005466030 nova_compute[230518]: 2025-10-02 12:48:09.351 2 DEBUG oslo_concurrency.lockutils [req-bc0b836c-9001-4051-8d49-b4c0571f081c req-e42e5ac9-c4f8-4378-aa16-1106dddeb35b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:09 np0005466030 nova_compute[230518]: 2025-10-02 12:48:09.351 2 DEBUG oslo_concurrency.lockutils [req-bc0b836c-9001-4051-8d49-b4c0571f081c req-e42e5ac9-c4f8-4378-aa16-1106dddeb35b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:09 np0005466030 nova_compute[230518]: 2025-10-02 12:48:09.352 2 DEBUG oslo_concurrency.lockutils [req-bc0b836c-9001-4051-8d49-b4c0571f081c req-e42e5ac9-c4f8-4378-aa16-1106dddeb35b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:09 np0005466030 nova_compute[230518]: 2025-10-02 12:48:09.352 2 DEBUG nova.compute.manager [req-bc0b836c-9001-4051-8d49-b4c0571f081c req-e42e5ac9-c4f8-4378-aa16-1106dddeb35b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] No waiting events found dispatching network-vif-plugged-a568d61d-6863-474f-83f4-ba38b88de19a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:48:09 np0005466030 nova_compute[230518]: 2025-10-02 12:48:09.352 2 WARNING nova.compute.manager [req-bc0b836c-9001-4051-8d49-b4c0571f081c req-e42e5ac9-c4f8-4378-aa16-1106dddeb35b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received unexpected event network-vif-plugged-a568d61d-6863-474f-83f4-ba38b88de19a for instance with vm_state active and task_state None.#033[00m
Oct  2 08:48:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:48:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:09.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:48:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:10.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.035 2 DEBUG oslo_concurrency.lockutils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.035 2 DEBUG oslo_concurrency.lockutils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.056 2 DEBUG nova.objects.instance [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.113 2 DEBUG oslo_concurrency.lockutils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:10 np0005466030 ovn_controller[129257]: 2025-10-02T12:48:10Z|00544|binding|INFO|Releasing lport fb7cdb79-68cf-4ad8-80ea-cb25da88eb6c from this chassis (sb_readonly=0)
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.422 2 DEBUG oslo_concurrency.lockutils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.423 2 DEBUG oslo_concurrency.lockutils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.424 2 INFO nova.compute.manager [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Attaching volume 58e4ef18-7a31-4027-a9e7-0cc5f7920707 to /dev/vdc#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.600 2 DEBUG os_brick.utils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.602 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.616 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.617 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[53f778fb-3eb1-4175-afcf-d81afd54276b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.618 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.627 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.627 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[d295a253-5d8e-4b8e-91c8-fc78d4834f63]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.629 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.637 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.638 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[1879f6ae-f5ec-4ef6-8801-84464e17c6df]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.639 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[fb938594-cdc5-4389-a16e-529296498126]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.640 2 DEBUG oslo_concurrency.processutils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.668 2 DEBUG oslo_concurrency.processutils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.671 2 DEBUG os_brick.initiator.connectors.lightos [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.671 2 DEBUG os_brick.initiator.connectors.lightos [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.672 2 DEBUG os_brick.initiator.connectors.lightos [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.672 2 DEBUG os_brick.utils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:48:10 np0005466030 nova_compute[230518]: 2025-10-02 12:48:10.672 2 DEBUG nova.virt.block_device [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating existing volume attachment record: dd282639-493e-4fcf-8294-2dd5ce05fa0b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:48:11 np0005466030 nova_compute[230518]: 2025-10-02 12:48:11.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:11 np0005466030 nova_compute[230518]: 2025-10-02 12:48:11.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:48:11 np0005466030 nova_compute[230518]: 2025-10-02 12:48:11.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:48:11 np0005466030 nova_compute[230518]: 2025-10-02 12:48:11.376 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:48:11 np0005466030 nova_compute[230518]: 2025-10-02 12:48:11.377 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:48:11 np0005466030 nova_compute[230518]: 2025-10-02 12:48:11.378 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:48:11 np0005466030 nova_compute[230518]: 2025-10-02 12:48:11.379 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:11 np0005466030 nova_compute[230518]: 2025-10-02 12:48:11.760 2 DEBUG nova.objects.instance [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:11 np0005466030 nova_compute[230518]: 2025-10-02 12:48:11.808 2 DEBUG nova.virt.libvirt.driver [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Attempting to attach volume 58e4ef18-7a31-4027-a9e7-0cc5f7920707 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 08:48:11 np0005466030 nova_compute[230518]: 2025-10-02 12:48:11.810 2 DEBUG nova.virt.libvirt.guest [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 08:48:11 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:48:11 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-58e4ef18-7a31-4027-a9e7-0cc5f7920707">
Oct  2 08:48:11 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:48:11 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:48:11 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:48:11 np0005466030 nova_compute[230518]:  </source>
Oct  2 08:48:11 np0005466030 nova_compute[230518]:  <auth username="openstack">
Oct  2 08:48:11 np0005466030 nova_compute[230518]:    <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:48:11 np0005466030 nova_compute[230518]:  </auth>
Oct  2 08:48:11 np0005466030 nova_compute[230518]:  <target dev="vdc" bus="virtio"/>
Oct  2 08:48:11 np0005466030 nova_compute[230518]:  <serial>58e4ef18-7a31-4027-a9e7-0cc5f7920707</serial>
Oct  2 08:48:11 np0005466030 nova_compute[230518]: </disk>
Oct  2 08:48:11 np0005466030 nova_compute[230518]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:48:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:11.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:12.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:12 np0005466030 nova_compute[230518]: 2025-10-02 12:48:12.514 2 DEBUG nova.virt.libvirt.driver [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:48:12 np0005466030 nova_compute[230518]: 2025-10-02 12:48:12.515 2 DEBUG nova.virt.libvirt.driver [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:48:12 np0005466030 nova_compute[230518]: 2025-10-02 12:48:12.515 2 DEBUG nova.virt.libvirt.driver [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:48:12 np0005466030 nova_compute[230518]: 2025-10-02 12:48:12.516 2 DEBUG nova.virt.libvirt.driver [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:48:12 np0005466030 nova_compute[230518]: 2025-10-02 12:48:12.516 2 DEBUG nova.virt.libvirt.driver [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No VIF found with MAC fa:16:3e:41:04:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:48:12 np0005466030 nova_compute[230518]: 2025-10-02 12:48:12.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:13 np0005466030 nova_compute[230518]: 2025-10-02 12:48:13.129 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:48:13 np0005466030 nova_compute[230518]: 2025-10-02 12:48:13.186 2 DEBUG oslo_concurrency.lockutils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:13 np0005466030 nova_compute[230518]: 2025-10-02 12:48:13.283 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:48:13 np0005466030 nova_compute[230518]: 2025-10-02 12:48:13.283 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:48:13 np0005466030 nova_compute[230518]: 2025-10-02 12:48:13.284 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:13 np0005466030 nova_compute[230518]: 2025-10-02 12:48:13.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:13.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:14.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:15 np0005466030 ovn_controller[129257]: 2025-10-02T12:48:15Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:41:04:35 10.100.0.9
Oct  2 08:48:15 np0005466030 nova_compute[230518]: 2025-10-02 12:48:15.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:15 np0005466030 NetworkManager[44960]: <info>  [1759409295.9318] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Oct  2 08:48:15 np0005466030 NetworkManager[44960]: <info>  [1759409295.9328] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Oct  2 08:48:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:48:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:15.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:48:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:16.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:16 np0005466030 ovn_controller[129257]: 2025-10-02T12:48:16Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:41:04:35 10.100.0.9
Oct  2 08:48:16 np0005466030 nova_compute[230518]: 2025-10-02 12:48:16.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:16 np0005466030 ovn_controller[129257]: 2025-10-02T12:48:16Z|00545|binding|INFO|Releasing lport fb7cdb79-68cf-4ad8-80ea-cb25da88eb6c from this chassis (sb_readonly=0)
Oct  2 08:48:16 np0005466030 nova_compute[230518]: 2025-10-02 12:48:16.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:16 np0005466030 nova_compute[230518]: 2025-10-02 12:48:16.299 2 DEBUG nova.compute.manager [req-7ca6bcf4-073d-461a-9595-68a0040a2614 req-545197fa-78b2-4702-94cb-b87cde737d64 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:16 np0005466030 nova_compute[230518]: 2025-10-02 12:48:16.300 2 DEBUG nova.compute.manager [req-7ca6bcf4-073d-461a-9595-68a0040a2614 req-545197fa-78b2-4702-94cb-b87cde737d64 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing instance network info cache due to event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:48:16 np0005466030 nova_compute[230518]: 2025-10-02 12:48:16.300 2 DEBUG oslo_concurrency.lockutils [req-7ca6bcf4-073d-461a-9595-68a0040a2614 req-545197fa-78b2-4702-94cb-b87cde737d64 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:48:16 np0005466030 nova_compute[230518]: 2025-10-02 12:48:16.301 2 DEBUG oslo_concurrency.lockutils [req-7ca6bcf4-073d-461a-9595-68a0040a2614 req-545197fa-78b2-4702-94cb-b87cde737d64 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:48:16 np0005466030 nova_compute[230518]: 2025-10-02 12:48:16.301 2 DEBUG nova.network.neutron [req-7ca6bcf4-073d-461a-9595-68a0040a2614 req-545197fa-78b2-4702-94cb-b87cde737d64 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:48:16 np0005466030 podman[281879]: 2025-10-02 12:48:16.830572761 +0000 UTC m=+0.052749000 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:48:16 np0005466030 podman[281878]: 2025-10-02 12:48:16.86616267 +0000 UTC m=+0.094780781 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:48:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:17 np0005466030 nova_compute[230518]: 2025-10-02 12:48:17.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:17.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:48:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:18.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:48:18 np0005466030 nova_compute[230518]: 2025-10-02 12:48:18.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:19 np0005466030 nova_compute[230518]: 2025-10-02 12:48:19.487 2 DEBUG nova.compute.manager [req-cc656376-2ff3-4dc9-8625-12299a0d508a req-7bb9c7c6-4ee7-4cfc-86a8-83fe3f534ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:19 np0005466030 nova_compute[230518]: 2025-10-02 12:48:19.488 2 DEBUG nova.compute.manager [req-cc656376-2ff3-4dc9-8625-12299a0d508a req-7bb9c7c6-4ee7-4cfc-86a8-83fe3f534ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing instance network info cache due to event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:48:19 np0005466030 nova_compute[230518]: 2025-10-02 12:48:19.488 2 DEBUG oslo_concurrency.lockutils [req-cc656376-2ff3-4dc9-8625-12299a0d508a req-7bb9c7c6-4ee7-4cfc-86a8-83fe3f534ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:48:19 np0005466030 nova_compute[230518]: 2025-10-02 12:48:19.805 2 DEBUG nova.network.neutron [req-7ca6bcf4-073d-461a-9595-68a0040a2614 req-545197fa-78b2-4702-94cb-b87cde737d64 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updated VIF entry in instance network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:48:19 np0005466030 nova_compute[230518]: 2025-10-02 12:48:19.806 2 DEBUG nova.network.neutron [req-7ca6bcf4-073d-461a-9595-68a0040a2614 req-545197fa-78b2-4702-94cb-b87cde737d64 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:48:19 np0005466030 nova_compute[230518]: 2025-10-02 12:48:19.865 2 DEBUG oslo_concurrency.lockutils [req-7ca6bcf4-073d-461a-9595-68a0040a2614 req-545197fa-78b2-4702-94cb-b87cde737d64 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:48:19 np0005466030 nova_compute[230518]: 2025-10-02 12:48:19.866 2 DEBUG oslo_concurrency.lockutils [req-cc656376-2ff3-4dc9-8625-12299a0d508a req-7bb9c7c6-4ee7-4cfc-86a8-83fe3f534ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:48:19 np0005466030 nova_compute[230518]: 2025-10-02 12:48:19.866 2 DEBUG nova.network.neutron [req-cc656376-2ff3-4dc9-8625-12299a0d508a req-7bb9c7c6-4ee7-4cfc-86a8-83fe3f534ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:48:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:48:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:19.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:48:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:20.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:20 np0005466030 ovn_controller[129257]: 2025-10-02T12:48:20Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fa:2f:46 10.100.0.7
Oct  2 08:48:20 np0005466030 ovn_controller[129257]: 2025-10-02T12:48:20Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fa:2f:46 10.100.0.7
Oct  2 08:48:21 np0005466030 nova_compute[230518]: 2025-10-02 12:48:21.617 2 DEBUG nova.network.neutron [req-cc656376-2ff3-4dc9-8625-12299a0d508a req-7bb9c7c6-4ee7-4cfc-86a8-83fe3f534ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updated VIF entry in instance network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:48:21 np0005466030 nova_compute[230518]: 2025-10-02 12:48:21.618 2 DEBUG nova.network.neutron [req-cc656376-2ff3-4dc9-8625-12299a0d508a req-7bb9c7c6-4ee7-4cfc-86a8-83fe3f534ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:48:21 np0005466030 nova_compute[230518]: 2025-10-02 12:48:21.700 2 DEBUG oslo_concurrency.lockutils [req-cc656376-2ff3-4dc9-8625-12299a0d508a req-7bb9c7c6-4ee7-4cfc-86a8-83fe3f534ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:48:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:21.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:22.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:22 np0005466030 nova_compute[230518]: 2025-10-02 12:48:22.658 2 DEBUG nova.compute.manager [req-6c6328dc-fef5-459e-91f5-9393224b51c1 req-d98c5924-a1fd-4a77-8408-f010c724b569 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:22 np0005466030 nova_compute[230518]: 2025-10-02 12:48:22.659 2 DEBUG nova.compute.manager [req-6c6328dc-fef5-459e-91f5-9393224b51c1 req-d98c5924-a1fd-4a77-8408-f010c724b569 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing instance network info cache due to event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:48:22 np0005466030 nova_compute[230518]: 2025-10-02 12:48:22.659 2 DEBUG oslo_concurrency.lockutils [req-6c6328dc-fef5-459e-91f5-9393224b51c1 req-d98c5924-a1fd-4a77-8408-f010c724b569 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:48:22 np0005466030 nova_compute[230518]: 2025-10-02 12:48:22.659 2 DEBUG oslo_concurrency.lockutils [req-6c6328dc-fef5-459e-91f5-9393224b51c1 req-d98c5924-a1fd-4a77-8408-f010c724b569 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:48:22 np0005466030 nova_compute[230518]: 2025-10-02 12:48:22.659 2 DEBUG nova.network.neutron [req-6c6328dc-fef5-459e-91f5-9393224b51c1 req-d98c5924-a1fd-4a77-8408-f010c724b569 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:48:22 np0005466030 nova_compute[230518]: 2025-10-02 12:48:22.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:23 np0005466030 nova_compute[230518]: 2025-10-02 12:48:23.279 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:48:23 np0005466030 nova_compute[230518]: 2025-10-02 12:48:23.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:23.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:24.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:24 np0005466030 nova_compute[230518]: 2025-10-02 12:48:24.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:25.944 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:25.945 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:25.945 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:25.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:48:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:26.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:48:26 np0005466030 nova_compute[230518]: 2025-10-02 12:48:26.711 2 DEBUG nova.network.neutron [req-6c6328dc-fef5-459e-91f5-9393224b51c1 req-d98c5924-a1fd-4a77-8408-f010c724b569 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updated VIF entry in instance network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:48:26 np0005466030 nova_compute[230518]: 2025-10-02 12:48:26.711 2 DEBUG nova.network.neutron [req-6c6328dc-fef5-459e-91f5-9393224b51c1 req-d98c5924-a1fd-4a77-8408-f010c724b569 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:48:26 np0005466030 nova_compute[230518]: 2025-10-02 12:48:26.776 2 DEBUG oslo_concurrency.lockutils [req-6c6328dc-fef5-459e-91f5-9393224b51c1 req-d98c5924-a1fd-4a77-8408-f010c724b569 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:48:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e313 e313: 3 total, 3 up, 3 in
Oct  2 08:48:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:27 np0005466030 nova_compute[230518]: 2025-10-02 12:48:27.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:27 np0005466030 podman[281924]: 2025-10-02 12:48:27.805052608 +0000 UTC m=+0.058851222 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:48:27 np0005466030 podman[281925]: 2025-10-02 12:48:27.813289677 +0000 UTC m=+0.060190423 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:48:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:28.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:48:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:28.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:48:28 np0005466030 nova_compute[230518]: 2025-10-02 12:48:28.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:48:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:48:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:48:28 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:48:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:30.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:30.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:32.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:32.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:32 np0005466030 nova_compute[230518]: 2025-10-02 12:48:32.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:32 np0005466030 nova_compute[230518]: 2025-10-02 12:48:32.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:32.683 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:48:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:32.684 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:48:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:48:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:48:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:48:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:48:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:48:33 np0005466030 nova_compute[230518]: 2025-10-02 12:48:33.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:48:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:34.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:48:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:34.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:34 np0005466030 nova_compute[230518]: 2025-10-02 12:48:34.368 2 DEBUG nova.compute.manager [req-f24761e6-1bfb-4df9-8fd5-a0710c2cb4e6 req-b39e74a3-87f9-445a-8be4-9ed8aa51dadd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:34 np0005466030 nova_compute[230518]: 2025-10-02 12:48:34.369 2 DEBUG nova.compute.manager [req-f24761e6-1bfb-4df9-8fd5-a0710c2cb4e6 req-b39e74a3-87f9-445a-8be4-9ed8aa51dadd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing instance network info cache due to event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:48:34 np0005466030 nova_compute[230518]: 2025-10-02 12:48:34.369 2 DEBUG oslo_concurrency.lockutils [req-f24761e6-1bfb-4df9-8fd5-a0710c2cb4e6 req-b39e74a3-87f9-445a-8be4-9ed8aa51dadd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:48:34 np0005466030 nova_compute[230518]: 2025-10-02 12:48:34.369 2 DEBUG oslo_concurrency.lockutils [req-f24761e6-1bfb-4df9-8fd5-a0710c2cb4e6 req-b39e74a3-87f9-445a-8be4-9ed8aa51dadd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:48:34 np0005466030 nova_compute[230518]: 2025-10-02 12:48:34.370 2 DEBUG nova.network.neutron [req-f24761e6-1bfb-4df9-8fd5-a0710c2cb4e6 req-b39e74a3-87f9-445a-8be4-9ed8aa51dadd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:48:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:36.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:48:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:36.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:48:36 np0005466030 nova_compute[230518]: 2025-10-02 12:48:36.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:37 np0005466030 nova_compute[230518]: 2025-10-02 12:48:37.056 2 DEBUG nova.network.neutron [req-f24761e6-1bfb-4df9-8fd5-a0710c2cb4e6 req-b39e74a3-87f9-445a-8be4-9ed8aa51dadd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updated VIF entry in instance network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:48:37 np0005466030 nova_compute[230518]: 2025-10-02 12:48:37.056 2 DEBUG nova.network.neutron [req-f24761e6-1bfb-4df9-8fd5-a0710c2cb4e6 req-b39e74a3-87f9-445a-8be4-9ed8aa51dadd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:48:37 np0005466030 nova_compute[230518]: 2025-10-02 12:48:37.094 2 DEBUG oslo_concurrency.lockutils [req-f24761e6-1bfb-4df9-8fd5-a0710c2cb4e6 req-b39e74a3-87f9-445a-8be4-9ed8aa51dadd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:48:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:37 np0005466030 nova_compute[230518]: 2025-10-02 12:48:37.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:38.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:38.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:38 np0005466030 nova_compute[230518]: 2025-10-02 12:48:38.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:40.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:40.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:41 np0005466030 nova_compute[230518]: 2025-10-02 12:48:41.228 2 DEBUG oslo_concurrency.lockutils [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:41 np0005466030 nova_compute[230518]: 2025-10-02 12:48:41.229 2 DEBUG oslo_concurrency.lockutils [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:41 np0005466030 nova_compute[230518]: 2025-10-02 12:48:41.272 2 INFO nova.compute.manager [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Detaching volume 59930c46-79e6-4eb5-b8a0-3382452117c0#033[00m
Oct  2 08:48:41 np0005466030 nova_compute[230518]: 2025-10-02 12:48:41.575 2 INFO nova.virt.block_device [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Attempting to driver detach volume 59930c46-79e6-4eb5-b8a0-3382452117c0 from mountpoint /dev/vdb#033[00m
Oct  2 08:48:41 np0005466030 nova_compute[230518]: 2025-10-02 12:48:41.590 2 DEBUG nova.virt.libvirt.driver [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Attempting to detach device vdb from instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:48:41 np0005466030 nova_compute[230518]: 2025-10-02 12:48:41.590 2 DEBUG nova.virt.libvirt.guest [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:48:41 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:48:41 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-59930c46-79e6-4eb5-b8a0-3382452117c0">
Oct  2 08:48:41 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:48:41 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:48:41 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:48:41 np0005466030 nova_compute[230518]:  </source>
Oct  2 08:48:41 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:48:41 np0005466030 nova_compute[230518]:  <serial>59930c46-79e6-4eb5-b8a0-3382452117c0</serial>
Oct  2 08:48:41 np0005466030 nova_compute[230518]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 08:48:41 np0005466030 nova_compute[230518]: </disk>
Oct  2 08:48:41 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:48:41 np0005466030 nova_compute[230518]: 2025-10-02 12:48:41.700 2 INFO nova.virt.libvirt.driver [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdb from instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c from the persistent domain config.#033[00m
Oct  2 08:48:41 np0005466030 nova_compute[230518]: 2025-10-02 12:48:41.700 2 DEBUG nova.virt.libvirt.driver [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 08:48:41 np0005466030 nova_compute[230518]: 2025-10-02 12:48:41.701 2 DEBUG nova.virt.libvirt.guest [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:48:41 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:48:41 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-59930c46-79e6-4eb5-b8a0-3382452117c0">
Oct  2 08:48:41 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:48:41 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:48:41 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:48:41 np0005466030 nova_compute[230518]:  </source>
Oct  2 08:48:41 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:48:41 np0005466030 nova_compute[230518]:  <serial>59930c46-79e6-4eb5-b8a0-3382452117c0</serial>
Oct  2 08:48:41 np0005466030 nova_compute[230518]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 08:48:41 np0005466030 nova_compute[230518]: </disk>
Oct  2 08:48:41 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:48:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:42.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:42 np0005466030 nova_compute[230518]: 2025-10-02 12:48:42.050 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759409322.0497136, 4b2aefbb-92cb-4a24-9ad2-884a12fa514c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 08:48:42 np0005466030 nova_compute[230518]: 2025-10-02 12:48:42.052 2 DEBUG nova.virt.libvirt.driver [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 08:48:42 np0005466030 nova_compute[230518]: 2025-10-02 12:48:42.055 2 INFO nova.virt.libvirt.driver [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdb from instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c from the live domain config.#033[00m
Oct  2 08:48:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:42.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:42 np0005466030 nova_compute[230518]: 2025-10-02 12:48:42.349 2 DEBUG nova.objects.instance [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:48:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:48:42 np0005466030 nova_compute[230518]: 2025-10-02 12:48:42.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:48:42.686 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:48:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e314 e314: 3 total, 3 up, 3 in
Oct  2 08:48:42 np0005466030 nova_compute[230518]: 2025-10-02 12:48:42.804 2 DEBUG oslo_concurrency.lockutils [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:43 np0005466030 nova_compute[230518]: 2025-10-02 12:48:43.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:48:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:44.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:48:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:44.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:46.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:48:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:46.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:48:47 np0005466030 nova_compute[230518]: 2025-10-02 12:48:47.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:47 np0005466030 podman[282265]: 2025-10-02 12:48:47.80911583 +0000 UTC m=+0.055082383 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 08:48:47 np0005466030 podman[282264]: 2025-10-02 12:48:47.836572594 +0000 UTC m=+0.088313508 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:48:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:48.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:48.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:48 np0005466030 nova_compute[230518]: 2025-10-02 12:48:48.195 2 DEBUG oslo_concurrency.lockutils [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:48:48 np0005466030 nova_compute[230518]: 2025-10-02 12:48:48.195 2 DEBUG oslo_concurrency.lockutils [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:48:48 np0005466030 nova_compute[230518]: 2025-10-02 12:48:48.218 2 INFO nova.compute.manager [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Detaching volume 58e4ef18-7a31-4027-a9e7-0cc5f7920707#033[00m
Oct  2 08:48:48 np0005466030 nova_compute[230518]: 2025-10-02 12:48:48.447 2 INFO nova.virt.block_device [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Attempting to driver detach volume 58e4ef18-7a31-4027-a9e7-0cc5f7920707 from mountpoint /dev/vdc#033[00m
Oct  2 08:48:48 np0005466030 nova_compute[230518]: 2025-10-02 12:48:48.455 2 DEBUG nova.virt.libvirt.driver [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Attempting to detach device vdc from instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:48:48 np0005466030 nova_compute[230518]: 2025-10-02 12:48:48.456 2 DEBUG nova.virt.libvirt.guest [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:48:48 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:48:48 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-58e4ef18-7a31-4027-a9e7-0cc5f7920707">
Oct  2 08:48:48 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:48:48 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:48:48 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:48:48 np0005466030 nova_compute[230518]:  </source>
Oct  2 08:48:48 np0005466030 nova_compute[230518]:  <target dev="vdc" bus="virtio"/>
Oct  2 08:48:48 np0005466030 nova_compute[230518]:  <serial>58e4ef18-7a31-4027-a9e7-0cc5f7920707</serial>
Oct  2 08:48:48 np0005466030 nova_compute[230518]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct  2 08:48:48 np0005466030 nova_compute[230518]: </disk>
Oct  2 08:48:48 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:48:48 np0005466030 nova_compute[230518]: 2025-10-02 12:48:48.620 2 INFO nova.virt.libvirt.driver [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdc from instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c from the persistent domain config.#033[00m
Oct  2 08:48:48 np0005466030 nova_compute[230518]: 2025-10-02 12:48:48.621 2 DEBUG nova.virt.libvirt.driver [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 08:48:48 np0005466030 nova_compute[230518]: 2025-10-02 12:48:48.621 2 DEBUG nova.virt.libvirt.guest [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:48:48 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:48:48 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-58e4ef18-7a31-4027-a9e7-0cc5f7920707">
Oct  2 08:48:48 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:48:48 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:48:48 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:48:48 np0005466030 nova_compute[230518]:  </source>
Oct  2 08:48:48 np0005466030 nova_compute[230518]:  <target dev="vdc" bus="virtio"/>
Oct  2 08:48:48 np0005466030 nova_compute[230518]:  <serial>58e4ef18-7a31-4027-a9e7-0cc5f7920707</serial>
Oct  2 08:48:48 np0005466030 nova_compute[230518]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct  2 08:48:48 np0005466030 nova_compute[230518]: </disk>
Oct  2 08:48:48 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:48:48 np0005466030 nova_compute[230518]: 2025-10-02 12:48:48.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:49 np0005466030 nova_compute[230518]: 2025-10-02 12:48:49.506 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759409329.5057685, 4b2aefbb-92cb-4a24-9ad2-884a12fa514c => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 08:48:49 np0005466030 nova_compute[230518]: 2025-10-02 12:48:49.507 2 DEBUG nova.virt.libvirt.driver [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 08:48:49 np0005466030 nova_compute[230518]: 2025-10-02 12:48:49.510 2 INFO nova.virt.libvirt.driver [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdc from instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c from the live domain config.#033[00m
Oct  2 08:48:49 np0005466030 nova_compute[230518]: 2025-10-02 12:48:49.751 2 DEBUG nova.objects.instance [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:48:49 np0005466030 nova_compute[230518]: 2025-10-02 12:48:49.832 2 DEBUG oslo_concurrency.lockutils [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:48:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:48:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:50.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:48:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:48:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:50.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:48:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:48:50 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2031016226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:48:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:48:50 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2031016226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:48:51 np0005466030 nova_compute[230518]: 2025-10-02 12:48:51.627 2 DEBUG nova.compute.manager [req-4388aa3a-5271-45f2-b011-b03ae579e960 req-766126e1-0faa-4bb1-861d-a8b3fdcb89a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:48:51 np0005466030 nova_compute[230518]: 2025-10-02 12:48:51.627 2 DEBUG nova.compute.manager [req-4388aa3a-5271-45f2-b011-b03ae579e960 req-766126e1-0faa-4bb1-861d-a8b3fdcb89a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing instance network info cache due to event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:48:51 np0005466030 nova_compute[230518]: 2025-10-02 12:48:51.627 2 DEBUG oslo_concurrency.lockutils [req-4388aa3a-5271-45f2-b011-b03ae579e960 req-766126e1-0faa-4bb1-861d-a8b3fdcb89a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:48:51 np0005466030 nova_compute[230518]: 2025-10-02 12:48:51.628 2 DEBUG oslo_concurrency.lockutils [req-4388aa3a-5271-45f2-b011-b03ae579e960 req-766126e1-0faa-4bb1-861d-a8b3fdcb89a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:48:51 np0005466030 nova_compute[230518]: 2025-10-02 12:48:51.628 2 DEBUG nova.network.neutron [req-4388aa3a-5271-45f2-b011-b03ae579e960 req-766126e1-0faa-4bb1-861d-a8b3fdcb89a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:48:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:48:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:52.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:48:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:52.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:52 np0005466030 nova_compute[230518]: 2025-10-02 12:48:52.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 e315: 3 total, 3 up, 3 in
Oct  2 08:48:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:53 np0005466030 nova_compute[230518]: 2025-10-02 12:48:53.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:54.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:54.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:48:54 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4197403709' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:48:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:48:54 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4197403709' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:48:54 np0005466030 nova_compute[230518]: 2025-10-02 12:48:54.724 2 DEBUG nova.network.neutron [req-4388aa3a-5271-45f2-b011-b03ae579e960 req-766126e1-0faa-4bb1-861d-a8b3fdcb89a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updated VIF entry in instance network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:48:54 np0005466030 nova_compute[230518]: 2025-10-02 12:48:54.725 2 DEBUG nova.network.neutron [req-4388aa3a-5271-45f2-b011-b03ae579e960 req-766126e1-0faa-4bb1-861d-a8b3fdcb89a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:48:54 np0005466030 nova_compute[230518]: 2025-10-02 12:48:54.757 2 DEBUG oslo_concurrency.lockutils [req-4388aa3a-5271-45f2-b011-b03ae579e960 req-766126e1-0faa-4bb1-861d-a8b3fdcb89a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:48:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:48:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:56.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:48:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:56.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:57 np0005466030 nova_compute[230518]: 2025-10-02 12:48:57.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:48:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:48:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:58.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:48:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:48:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:58.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:48:58 np0005466030 podman[282310]: 2025-10-02 12:48:58.81384522 +0000 UTC m=+0.070818748 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:48:58 np0005466030 podman[282311]: 2025-10-02 12:48:58.826154537 +0000 UTC m=+0.069157696 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 08:48:58 np0005466030 nova_compute[230518]: 2025-10-02 12:48:58.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:00.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:00.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:02.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:02.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:02 np0005466030 nova_compute[230518]: 2025-10-02 12:49:02.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:03 np0005466030 nova_compute[230518]: 2025-10-02 12:49:03.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:49:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:04.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:49:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:04.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.095 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.095 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.096 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.096 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.096 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:49:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3544233615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.577 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.667 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.667 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.670 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.670 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.830 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.831 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4005MB free_disk=20.900550842285156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.831 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.831 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.950 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.950 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3b348c58-f179-41db-bd79-1fdea0ade389 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.950 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:49:05 np0005466030 nova_compute[230518]: 2025-10-02 12:49:05.951 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:49:06 np0005466030 nova_compute[230518]: 2025-10-02 12:49:06.051 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:49:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:06.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:49:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:06.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:49:06 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2052356705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:49:06 np0005466030 nova_compute[230518]: 2025-10-02 12:49:06.480 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:06 np0005466030 nova_compute[230518]: 2025-10-02 12:49:06.488 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:49:06 np0005466030 nova_compute[230518]: 2025-10-02 12:49:06.525 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:49:06 np0005466030 nova_compute[230518]: 2025-10-02 12:49:06.564 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:49:06 np0005466030 nova_compute[230518]: 2025-10-02 12:49:06.564 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:07 np0005466030 nova_compute[230518]: 2025-10-02 12:49:07.563 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:07 np0005466030 nova_compute[230518]: 2025-10-02 12:49:07.564 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:07 np0005466030 nova_compute[230518]: 2025-10-02 12:49:07.565 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:07 np0005466030 nova_compute[230518]: 2025-10-02 12:49:07.565 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:49:07 np0005466030 nova_compute[230518]: 2025-10-02 12:49:07.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:49:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:08.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:49:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:08.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:08 np0005466030 nova_compute[230518]: 2025-10-02 12:49:08.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:10 np0005466030 nova_compute[230518]: 2025-10-02 12:49:10.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:10.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:10.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:49:11Z|00546|binding|INFO|Releasing lport fb7cdb79-68cf-4ad8-80ea-cb25da88eb6c from this chassis (sb_readonly=0)
Oct  2 08:49:11 np0005466030 nova_compute[230518]: 2025-10-02 12:49:11.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:12 np0005466030 nova_compute[230518]: 2025-10-02 12:49:12.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:12 np0005466030 nova_compute[230518]: 2025-10-02 12:49:12.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:49:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:12.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:12.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:12 np0005466030 nova_compute[230518]: 2025-10-02 12:49:12.346 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:49:12 np0005466030 nova_compute[230518]: 2025-10-02 12:49:12.346 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:49:12 np0005466030 nova_compute[230518]: 2025-10-02 12:49:12.347 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:49:12 np0005466030 nova_compute[230518]: 2025-10-02 12:49:12.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:13 np0005466030 nova_compute[230518]: 2025-10-02 12:49:13.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:14.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:14.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:49:14.724 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:49:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:49:14.725 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:49:14 np0005466030 nova_compute[230518]: 2025-10-02 12:49:14.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:16.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:16.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:17 np0005466030 nova_compute[230518]: 2025-10-02 12:49:17.664 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:49:17 np0005466030 nova_compute[230518]: 2025-10-02 12:49:17.685 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:49:17 np0005466030 nova_compute[230518]: 2025-10-02 12:49:17.685 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:49:17 np0005466030 nova_compute[230518]: 2025-10-02 12:49:17.686 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:17 np0005466030 nova_compute[230518]: 2025-10-02 12:49:17.686 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:49:17 np0005466030 nova_compute[230518]: 2025-10-02 12:49:17.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:49:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:18.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:49:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:18.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:18 np0005466030 podman[282391]: 2025-10-02 12:49:18.799227734 +0000 UTC m=+0.049213655 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  2 08:49:18 np0005466030 podman[282390]: 2025-10-02 12:49:18.82715196 +0000 UTC m=+0.080710254 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 08:49:18 np0005466030 nova_compute[230518]: 2025-10-02 12:49:18.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:49:19.727 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:49:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:20.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:49:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:20.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:49:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:22.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:22.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:22 np0005466030 nova_compute[230518]: 2025-10-02 12:49:22.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:23 np0005466030 nova_compute[230518]: 2025-10-02 12:49:23.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:24.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:24.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:49:25.946 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:49:25.946 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:49:25.946 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:26.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:26.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:27 np0005466030 nova_compute[230518]: 2025-10-02 12:49:27.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:28.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:28.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:28 np0005466030 nova_compute[230518]: 2025-10-02 12:49:28.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:29 np0005466030 podman[282436]: 2025-10-02 12:49:29.806191334 +0000 UTC m=+0.057027360 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=multipathd, managed_by=edpm_ansible)
Oct  2 08:49:29 np0005466030 podman[282435]: 2025-10-02 12:49:29.826148571 +0000 UTC m=+0.081501399 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct  2 08:49:29 np0005466030 nova_compute[230518]: 2025-10-02 12:49:29.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:30.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:30.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:32.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:32.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:32 np0005466030 nova_compute[230518]: 2025-10-02 12:49:32.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:33 np0005466030 nova_compute[230518]: 2025-10-02 12:49:33.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:34.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:34.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:34 np0005466030 nova_compute[230518]: 2025-10-02 12:49:34.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:36.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:36.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:37 np0005466030 nova_compute[230518]: 2025-10-02 12:49:37.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:49:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:38.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:49:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:49:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:38.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:49:38 np0005466030 nova_compute[230518]: 2025-10-02 12:49:38.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:49:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:40.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:49:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:49:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:40.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:49:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:49:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:42.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:49:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:42.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:42 np0005466030 nova_compute[230518]: 2025-10-02 12:49:42.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:49:42 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2258362784' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:49:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:49:42 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2258362784' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:49:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:43 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:49:43 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 08:49:43 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 08:49:43 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:49:43 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 08:49:43 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:49:43 np0005466030 nova_compute[230518]: 2025-10-02 12:49:43.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:44.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:49:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:44.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:49:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:49:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:49:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:49:45 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/574049216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:49:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:46.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:49:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:46.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:49:47 np0005466030 nova_compute[230518]: 2025-10-02 12:49:47.264 2 DEBUG oslo_concurrency.lockutils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:47 np0005466030 nova_compute[230518]: 2025-10-02 12:49:47.264 2 DEBUG oslo_concurrency.lockutils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:47 np0005466030 nova_compute[230518]: 2025-10-02 12:49:47.293 2 DEBUG nova.objects.instance [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 3b348c58-f179-41db-bd79-1fdea0ade389 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:49:47 np0005466030 nova_compute[230518]: 2025-10-02 12:49:47.377 2 DEBUG oslo_concurrency.lockutils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:47 np0005466030 nova_compute[230518]: 2025-10-02 12:49:47.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.076 2 DEBUG oslo_concurrency.lockutils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.076 2 DEBUG oslo_concurrency.lockutils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.076 2 INFO nova.compute.manager [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Attaching volume ada5d5be-9d4c-4653-ac57-931c6322dea6 to /dev/vdb#033[00m
Oct  2 08:49:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:48.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:48.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.597 2 DEBUG os_brick.utils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.598 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.615 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.615 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[ec968f45-d8f3-492e-b013-bd45c10a426d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.616 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.625 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.625 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[71e84f13-595d-4e40-bfeb-1d35f66d76c5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.627 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.637 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.638 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8bcf75-8d91-4b33-8141-549d18f6ec30]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.639 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[59be481c-d169-4a5b-8ac1-296b56430c58]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.639 2 DEBUG oslo_concurrency.processutils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.671 2 DEBUG oslo_concurrency.processutils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.675 2 DEBUG os_brick.initiator.connectors.lightos [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.676 2 DEBUG os_brick.initiator.connectors.lightos [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.676 2 DEBUG os_brick.initiator.connectors.lightos [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.677 2 DEBUG os_brick.utils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] <== get_connector_properties: return (78ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.677 2 DEBUG nova.virt.block_device [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating existing volume attachment record: ccf2a014-7375-4c5a-86ef-fd81dce50e93 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:49:48 np0005466030 nova_compute[230518]: 2025-10-02 12:49:48.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:49 np0005466030 podman[282613]: 2025-10-02 12:49:49.806074077 +0000 UTC m=+0.052975233 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:49:49 np0005466030 podman[282612]: 2025-10-02 12:49:49.831808365 +0000 UTC m=+0.081703505 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:49:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:50.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:50.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:50 np0005466030 nova_compute[230518]: 2025-10-02 12:49:50.322 2 DEBUG nova.objects.instance [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 3b348c58-f179-41db-bd79-1fdea0ade389 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:49:50 np0005466030 nova_compute[230518]: 2025-10-02 12:49:50.400 2 DEBUG nova.virt.libvirt.driver [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Attempting to attach volume ada5d5be-9d4c-4653-ac57-931c6322dea6 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 08:49:50 np0005466030 nova_compute[230518]: 2025-10-02 12:49:50.405 2 DEBUG nova.virt.libvirt.guest [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 08:49:50 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:49:50 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-ada5d5be-9d4c-4653-ac57-931c6322dea6">
Oct  2 08:49:50 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:49:50 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:49:50 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:49:50 np0005466030 nova_compute[230518]:  </source>
Oct  2 08:49:50 np0005466030 nova_compute[230518]:  <auth username="openstack">
Oct  2 08:49:50 np0005466030 nova_compute[230518]:    <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:49:50 np0005466030 nova_compute[230518]:  </auth>
Oct  2 08:49:50 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:49:50 np0005466030 nova_compute[230518]:  <serial>ada5d5be-9d4c-4653-ac57-931c6322dea6</serial>
Oct  2 08:49:50 np0005466030 nova_compute[230518]: </disk>
Oct  2 08:49:50 np0005466030 nova_compute[230518]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:49:50 np0005466030 nova_compute[230518]: 2025-10-02 12:49:50.899 2 DEBUG nova.virt.libvirt.driver [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:49:50 np0005466030 nova_compute[230518]: 2025-10-02 12:49:50.900 2 DEBUG nova.virt.libvirt.driver [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:49:50 np0005466030 nova_compute[230518]: 2025-10-02 12:49:50.900 2 DEBUG nova.virt.libvirt.driver [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:49:50 np0005466030 nova_compute[230518]: 2025-10-02 12:49:50.900 2 DEBUG nova.virt.libvirt.driver [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No VIF found with MAC fa:16:3e:fa:2f:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:49:51 np0005466030 nova_compute[230518]: 2025-10-02 12:49:51.415 2 DEBUG oslo_concurrency.lockutils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:52.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:52.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:52 np0005466030 nova_compute[230518]: 2025-10-02 12:49:52.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:52 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:49:52 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:49:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:53 np0005466030 nova_compute[230518]: 2025-10-02 12:49:53.327 2 DEBUG oslo_concurrency.lockutils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:53 np0005466030 nova_compute[230518]: 2025-10-02 12:49:53.327 2 DEBUG oslo_concurrency.lockutils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:53 np0005466030 nova_compute[230518]: 2025-10-02 12:49:53.345 2 DEBUG nova.objects.instance [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 3b348c58-f179-41db-bd79-1fdea0ade389 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:49:53 np0005466030 nova_compute[230518]: 2025-10-02 12:49:53.394 2 DEBUG oslo_concurrency.lockutils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:53 np0005466030 nova_compute[230518]: 2025-10-02 12:49:53.833 2 DEBUG oslo_concurrency.lockutils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:49:53 np0005466030 nova_compute[230518]: 2025-10-02 12:49:53.834 2 DEBUG oslo_concurrency.lockutils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:49:53 np0005466030 nova_compute[230518]: 2025-10-02 12:49:53.834 2 INFO nova.compute.manager [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Attaching volume 77e42bdf-1989-460b-aa47-82eb53d89208 to /dev/vdc#033[00m
Oct  2 08:49:53 np0005466030 nova_compute[230518]: 2025-10-02 12:49:53.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:54.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:54 np0005466030 nova_compute[230518]: 2025-10-02 12:49:54.142 2 DEBUG os_brick.utils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:49:54 np0005466030 nova_compute[230518]: 2025-10-02 12:49:54.143 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:54 np0005466030 nova_compute[230518]: 2025-10-02 12:49:54.155 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:54 np0005466030 nova_compute[230518]: 2025-10-02 12:49:54.155 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[52cde937-70ed-46a6-9360-d3ad2dbc8d4b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:54 np0005466030 nova_compute[230518]: 2025-10-02 12:49:54.157 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:54 np0005466030 nova_compute[230518]: 2025-10-02 12:49:54.164 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:54 np0005466030 nova_compute[230518]: 2025-10-02 12:49:54.164 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[39e039a9-f8c4-41d1-8ccb-da1907114ec8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:54 np0005466030 nova_compute[230518]: 2025-10-02 12:49:54.166 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:54 np0005466030 nova_compute[230518]: 2025-10-02 12:49:54.179 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:54 np0005466030 nova_compute[230518]: 2025-10-02 12:49:54.179 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[4484f38d-3e61-4f77-849c-ac2c2c930c8f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:54.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:54 np0005466030 nova_compute[230518]: 2025-10-02 12:49:54.181 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[c3c21d13-e3ea-4886-a40f-186679155cf7]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:49:54 np0005466030 nova_compute[230518]: 2025-10-02 12:49:54.181 2 DEBUG oslo_concurrency.processutils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:49:54 np0005466030 nova_compute[230518]: 2025-10-02 12:49:54.208 2 DEBUG oslo_concurrency.processutils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:49:54 np0005466030 nova_compute[230518]: 2025-10-02 12:49:54.211 2 DEBUG os_brick.initiator.connectors.lightos [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:49:54 np0005466030 nova_compute[230518]: 2025-10-02 12:49:54.211 2 DEBUG os_brick.initiator.connectors.lightos [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:49:54 np0005466030 nova_compute[230518]: 2025-10-02 12:49:54.212 2 DEBUG os_brick.initiator.connectors.lightos [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:49:54 np0005466030 nova_compute[230518]: 2025-10-02 12:49:54.212 2 DEBUG os_brick.utils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:49:54 np0005466030 nova_compute[230518]: 2025-10-02 12:49:54.213 2 DEBUG nova.virt.block_device [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating existing volume attachment record: 5fd89014-a8a9-427e-ba08-f0b8efc311f1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:49:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:49:54 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1772711169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:49:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:49:55.094 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:49:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:49:55.095 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:49:55 np0005466030 nova_compute[230518]: 2025-10-02 12:49:55.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:55 np0005466030 nova_compute[230518]: 2025-10-02 12:49:55.139 2 DEBUG nova.objects.instance [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 3b348c58-f179-41db-bd79-1fdea0ade389 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:49:55 np0005466030 nova_compute[230518]: 2025-10-02 12:49:55.187 2 DEBUG nova.virt.libvirt.driver [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Attempting to attach volume 77e42bdf-1989-460b-aa47-82eb53d89208 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 08:49:55 np0005466030 nova_compute[230518]: 2025-10-02 12:49:55.189 2 DEBUG nova.virt.libvirt.guest [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 08:49:55 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:49:55 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-77e42bdf-1989-460b-aa47-82eb53d89208">
Oct  2 08:49:55 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:49:55 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:49:55 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:49:55 np0005466030 nova_compute[230518]:  </source>
Oct  2 08:49:55 np0005466030 nova_compute[230518]:  <auth username="openstack">
Oct  2 08:49:55 np0005466030 nova_compute[230518]:    <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:49:55 np0005466030 nova_compute[230518]:  </auth>
Oct  2 08:49:55 np0005466030 nova_compute[230518]:  <target dev="vdc" bus="virtio"/>
Oct  2 08:49:55 np0005466030 nova_compute[230518]:  <serial>77e42bdf-1989-460b-aa47-82eb53d89208</serial>
Oct  2 08:49:55 np0005466030 nova_compute[230518]: </disk>
Oct  2 08:49:55 np0005466030 nova_compute[230518]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:49:55 np0005466030 nova_compute[230518]: 2025-10-02 12:49:55.371 2 DEBUG nova.virt.libvirt.driver [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:49:55 np0005466030 nova_compute[230518]: 2025-10-02 12:49:55.372 2 DEBUG nova.virt.libvirt.driver [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:49:55 np0005466030 nova_compute[230518]: 2025-10-02 12:49:55.372 2 DEBUG nova.virt.libvirt.driver [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:49:55 np0005466030 nova_compute[230518]: 2025-10-02 12:49:55.373 2 DEBUG nova.virt.libvirt.driver [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:49:55 np0005466030 nova_compute[230518]: 2025-10-02 12:49:55.373 2 DEBUG nova.virt.libvirt.driver [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No VIF found with MAC fa:16:3e:fa:2f:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:49:55 np0005466030 nova_compute[230518]: 2025-10-02 12:49:55.735 2 DEBUG oslo_concurrency.lockutils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:49:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:56.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:56.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:56 np0005466030 nova_compute[230518]: 2025-10-02 12:49:56.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:57 np0005466030 nova_compute[230518]: 2025-10-02 12:49:57.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:49:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:49:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:58.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:49:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:49:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:58.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:49:58 np0005466030 nova_compute[230518]: 2025-10-02 12:49:58.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:50:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:00.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:50:00 np0005466030 nova_compute[230518]: 2025-10-02 12:50:00.177 2 DEBUG nova.compute.manager [req-c0611fc1-4cd0-48f2-aced-1c2426a9e21b req-f73c9ede-f442-46ae-87cc-ad52fdcc1296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:00 np0005466030 nova_compute[230518]: 2025-10-02 12:50:00.178 2 DEBUG nova.compute.manager [req-c0611fc1-4cd0-48f2-aced-1c2426a9e21b req-f73c9ede-f442-46ae-87cc-ad52fdcc1296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing instance network info cache due to event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:50:00 np0005466030 nova_compute[230518]: 2025-10-02 12:50:00.178 2 DEBUG oslo_concurrency.lockutils [req-c0611fc1-4cd0-48f2-aced-1c2426a9e21b req-f73c9ede-f442-46ae-87cc-ad52fdcc1296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:50:00 np0005466030 nova_compute[230518]: 2025-10-02 12:50:00.178 2 DEBUG oslo_concurrency.lockutils [req-c0611fc1-4cd0-48f2-aced-1c2426a9e21b req-f73c9ede-f442-46ae-87cc-ad52fdcc1296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:50:00 np0005466030 nova_compute[230518]: 2025-10-02 12:50:00.178 2 DEBUG nova.network.neutron [req-c0611fc1-4cd0-48f2-aced-1c2426a9e21b req-f73c9ede-f442-46ae-87cc-ad52fdcc1296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:50:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:00.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:00 np0005466030 ceph-mon[80926]: overall HEALTH_OK
Oct  2 08:50:00 np0005466030 podman[282754]: 2025-10-02 12:50:00.83766939 +0000 UTC m=+0.074630522 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:50:00 np0005466030 podman[282755]: 2025-10-02 12:50:00.843218355 +0000 UTC m=+0.076368237 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:50:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:01.096 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:01 np0005466030 nova_compute[230518]: 2025-10-02 12:50:01.968 2 DEBUG nova.network.neutron [req-c0611fc1-4cd0-48f2-aced-1c2426a9e21b req-f73c9ede-f442-46ae-87cc-ad52fdcc1296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updated VIF entry in instance network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:50:01 np0005466030 nova_compute[230518]: 2025-10-02 12:50:01.969 2 DEBUG nova.network.neutron [req-c0611fc1-4cd0-48f2-aced-1c2426a9e21b req-f73c9ede-f442-46ae-87cc-ad52fdcc1296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:02.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:02.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:02 np0005466030 nova_compute[230518]: 2025-10-02 12:50:02.239 2 DEBUG oslo_concurrency.lockutils [req-c0611fc1-4cd0-48f2-aced-1c2426a9e21b req-f73c9ede-f442-46ae-87cc-ad52fdcc1296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:50:02 np0005466030 nova_compute[230518]: 2025-10-02 12:50:02.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:03 np0005466030 nova_compute[230518]: 2025-10-02 12:50:03.871 2 DEBUG nova.compute.manager [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:03 np0005466030 nova_compute[230518]: 2025-10-02 12:50:03.872 2 DEBUG nova.compute.manager [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing instance network info cache due to event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:50:03 np0005466030 nova_compute[230518]: 2025-10-02 12:50:03.872 2 DEBUG oslo_concurrency.lockutils [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:50:03 np0005466030 nova_compute[230518]: 2025-10-02 12:50:03.872 2 DEBUG oslo_concurrency.lockutils [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:50:03 np0005466030 nova_compute[230518]: 2025-10-02 12:50:03.872 2 DEBUG nova.network.neutron [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:50:03 np0005466030 nova_compute[230518]: 2025-10-02 12:50:03.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:04.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:04.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:04 np0005466030 nova_compute[230518]: 2025-10-02 12:50:04.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:50:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3004531338' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:50:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:50:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3004531338' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:50:05 np0005466030 nova_compute[230518]: 2025-10-02 12:50:05.367 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:05 np0005466030 nova_compute[230518]: 2025-10-02 12:50:05.367 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:05 np0005466030 nova_compute[230518]: 2025-10-02 12:50:05.573 2 DEBUG nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:50:05 np0005466030 nova_compute[230518]: 2025-10-02 12:50:05.666 2 DEBUG nova.network.neutron [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updated VIF entry in instance network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:50:05 np0005466030 nova_compute[230518]: 2025-10-02 12:50:05.667 2 DEBUG nova.network.neutron [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:05 np0005466030 nova_compute[230518]: 2025-10-02 12:50:05.752 2 DEBUG oslo_concurrency.lockutils [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:50:05 np0005466030 nova_compute[230518]: 2025-10-02 12:50:05.752 2 DEBUG nova.compute.manager [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:05 np0005466030 nova_compute[230518]: 2025-10-02 12:50:05.753 2 DEBUG nova.compute.manager [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing instance network info cache due to event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:50:05 np0005466030 nova_compute[230518]: 2025-10-02 12:50:05.753 2 DEBUG oslo_concurrency.lockutils [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:50:05 np0005466030 nova_compute[230518]: 2025-10-02 12:50:05.753 2 DEBUG oslo_concurrency.lockutils [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:50:05 np0005466030 nova_compute[230518]: 2025-10-02 12:50:05.753 2 DEBUG nova.network.neutron [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:50:05 np0005466030 nova_compute[230518]: 2025-10-02 12:50:05.907 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:05 np0005466030 nova_compute[230518]: 2025-10-02 12:50:05.908 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:05 np0005466030 nova_compute[230518]: 2025-10-02 12:50:05.913 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:50:05 np0005466030 nova_compute[230518]: 2025-10-02 12:50:05.914 2 INFO nova.compute.claims [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:50:06 np0005466030 nova_compute[230518]: 2025-10-02 12:50:06.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:06 np0005466030 nova_compute[230518]: 2025-10-02 12:50:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:06 np0005466030 nova_compute[230518]: 2025-10-02 12:50:06.123 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:50:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:06.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:50:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:06.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:06 np0005466030 nova_compute[230518]: 2025-10-02 12:50:06.368 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:50:06 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4131426592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:50:06 np0005466030 nova_compute[230518]: 2025-10-02 12:50:06.809 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:06 np0005466030 nova_compute[230518]: 2025-10-02 12:50:06.815 2 DEBUG nova.compute.provider_tree [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:50:06 np0005466030 nova_compute[230518]: 2025-10-02 12:50:06.844 2 DEBUG nova.scheduler.client.report [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:50:06 np0005466030 nova_compute[230518]: 2025-10-02 12:50:06.877 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:06 np0005466030 nova_compute[230518]: 2025-10-02 12:50:06.878 2 DEBUG nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:50:06 np0005466030 nova_compute[230518]: 2025-10-02 12:50:06.880 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:06 np0005466030 nova_compute[230518]: 2025-10-02 12:50:06.880 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:06 np0005466030 nova_compute[230518]: 2025-10-02 12:50:06.881 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:50:06 np0005466030 nova_compute[230518]: 2025-10-02 12:50:06.881 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:06 np0005466030 nova_compute[230518]: 2025-10-02 12:50:06.975 2 DEBUG nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:50:06 np0005466030 nova_compute[230518]: 2025-10-02 12:50:06.976 2 DEBUG nova.network.neutron [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.014 2 INFO nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.040 2 DEBUG nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.115 2 INFO nova.virt.block_device [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Booting with volume a7bdd212-0d34-40b8-9af9-06388d215028 at /dev/vda#033[00m
Oct  2 08:50:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:50:07 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1040149648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.309 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.365 2 DEBUG nova.policy [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'af2648eefb594bc49309cccf408f7ae1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1308a7eb298f49baaeaf3dc3a6acf592', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.402 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.402 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.405 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.405 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.405 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.405 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.478 2 DEBUG os_brick.utils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.479 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.489 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.489 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[f8600f8e-62f1-46b6-8578-cff6f6132896]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.490 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.499 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.499 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[da7599f8-b826-45c7-af89-fe1f4844fdb2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.500 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.515 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.515 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf6d3d8-47d0-4b07-8a6b-386afdab48fd]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.516 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[4696f10a-e0b9-49cc-8a89-295d857af0e2]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.516 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.542 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.544 2 DEBUG os_brick.initiator.connectors.lightos [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.544 2 DEBUG os_brick.initiator.connectors.lightos [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.544 2 DEBUG os_brick.initiator.connectors.lightos [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.545 2 DEBUG os_brick.utils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.545 2 DEBUG nova.virt.block_device [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Updating existing volume attachment record: 215e435c-8279-4bbc-bbb1-8c2a849c9afe _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.591 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.592 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4011MB free_disk=20.896709442138672GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.592 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.592 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.724 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.724 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3b348c58-f179-41db-bd79-1fdea0ade389 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.725 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 33780b49-b5a1-4f3f-a6c5-a00011d53718 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.725 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.725 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:07 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #103. Immutable memtables: 0.
Oct  2 08:50:07 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:07.822863) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:50:07 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 103
Oct  2 08:50:07 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409407822893, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 2214, "num_deletes": 260, "total_data_size": 5186037, "memory_usage": 5262624, "flush_reason": "Manual Compaction"}
Oct  2 08:50:07 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #104: started
Oct  2 08:50:07 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409407964766, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 104, "file_size": 3348140, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52347, "largest_seqno": 54556, "table_properties": {"data_size": 3339020, "index_size": 5614, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19755, "raw_average_key_size": 20, "raw_value_size": 3320426, "raw_average_value_size": 3451, "num_data_blocks": 244, "num_entries": 962, "num_filter_entries": 962, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409237, "oldest_key_time": 1759409237, "file_creation_time": 1759409407, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:50:07 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 141980 microseconds, and 6876 cpu microseconds.
Oct  2 08:50:07 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:50:07 np0005466030 nova_compute[230518]: 2025-10-02 12:50:07.993 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:08.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:07.964834) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #104: 3348140 bytes OK
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:07.964862) [db/memtable_list.cc:519] [default] Level-0 commit table #104 started
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.146848) [db/memtable_list.cc:722] [default] Level-0 commit table #104: memtable #1 done
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.146875) EVENT_LOG_v1 {"time_micros": 1759409408146868, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.146895) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 5175955, prev total WAL file size 5175955, number of live WAL files 2.
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000100.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.148071) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373630' seq:72057594037927935, type:22 .. '6C6F676D0032303134' seq:0, type:0; will stop at (end)
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [104(3269KB)], [102(10MB)]
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409408148136, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [104], "files_L6": [102], "score": -1, "input_data_size": 14507665, "oldest_snapshot_seqno": -1}
Oct  2 08:50:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:08.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #105: 8093 keys, 14348471 bytes, temperature: kUnknown
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409408384749, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 105, "file_size": 14348471, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14291160, "index_size": 35951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 208420, "raw_average_key_size": 25, "raw_value_size": 14143924, "raw_average_value_size": 1747, "num_data_blocks": 1427, "num_entries": 8093, "num_filter_entries": 8093, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409408, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 105, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.384971) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 14348471 bytes
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.468915) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 61.3 rd, 60.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.6 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(8.6) write-amplify(4.3) OK, records in: 8633, records dropped: 540 output_compression: NoCompression
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.468957) EVENT_LOG_v1 {"time_micros": 1759409408468942, "job": 64, "event": "compaction_finished", "compaction_time_micros": 236667, "compaction_time_cpu_micros": 55431, "output_level": 6, "num_output_files": 1, "total_output_size": 14348471, "num_input_records": 8633, "num_output_records": 8093, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409408469656, "job": 64, "event": "table_file_deletion", "file_number": 104}
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000102.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409408471263, "job": 64, "event": "table_file_deletion", "file_number": 102}
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.147967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.471338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.471357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.471360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.471363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.471366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:50:08 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/793802527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:50:08 np0005466030 nova_compute[230518]: 2025-10-02 12:50:08.500 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:08 np0005466030 nova_compute[230518]: 2025-10-02 12:50:08.507 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:50:08 np0005466030 nova_compute[230518]: 2025-10-02 12:50:08.553 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:50:08 np0005466030 nova_compute[230518]: 2025-10-02 12:50:08.619 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:50:08 np0005466030 nova_compute[230518]: 2025-10-02 12:50:08.620 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:08 np0005466030 nova_compute[230518]: 2025-10-02 12:50:08.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:50:09 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2755918285' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:50:09 np0005466030 nova_compute[230518]: 2025-10-02 12:50:09.349 2 DEBUG nova.network.neutron [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updated VIF entry in instance network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:50:09 np0005466030 nova_compute[230518]: 2025-10-02 12:50:09.350 2 DEBUG nova.network.neutron [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:09 np0005466030 nova_compute[230518]: 2025-10-02 12:50:09.380 2 DEBUG oslo_concurrency.lockutils [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:50:09 np0005466030 nova_compute[230518]: 2025-10-02 12:50:09.620 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:09 np0005466030 nova_compute[230518]: 2025-10-02 12:50:09.621 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:09 np0005466030 nova_compute[230518]: 2025-10-02 12:50:09.621 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:09 np0005466030 nova_compute[230518]: 2025-10-02 12:50:09.621 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:50:09 np0005466030 nova_compute[230518]: 2025-10-02 12:50:09.873 2 DEBUG nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:50:09 np0005466030 nova_compute[230518]: 2025-10-02 12:50:09.875 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:50:09 np0005466030 nova_compute[230518]: 2025-10-02 12:50:09.876 2 INFO nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Creating image(s)#033[00m
Oct  2 08:50:09 np0005466030 nova_compute[230518]: 2025-10-02 12:50:09.876 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:50:09 np0005466030 nova_compute[230518]: 2025-10-02 12:50:09.877 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Ensure instance console log exists: /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:50:09 np0005466030 nova_compute[230518]: 2025-10-02 12:50:09.877 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:09 np0005466030 nova_compute[230518]: 2025-10-02 12:50:09.877 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:09 np0005466030 nova_compute[230518]: 2025-10-02 12:50:09.878 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:10.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:10.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:10 np0005466030 nova_compute[230518]: 2025-10-02 12:50:10.582 2 DEBUG nova.network.neutron [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Successfully created port: 6f16f975-1155-4931-9798-72b46e8ca37f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:50:11 np0005466030 nova_compute[230518]: 2025-10-02 12:50:11.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:11 np0005466030 nova_compute[230518]: 2025-10-02 12:50:11.474 2 DEBUG nova.network.neutron [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Successfully updated port: 6f16f975-1155-4931-9798-72b46e8ca37f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:50:11 np0005466030 nova_compute[230518]: 2025-10-02 12:50:11.506 2 DEBUG nova.compute.manager [req-b98f7e07-5c1f-47d7-b858-0d50bb8dc7fa req-4d6a359e-4ce6-4d3a-ac36-9cade10f97cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:11 np0005466030 nova_compute[230518]: 2025-10-02 12:50:11.507 2 DEBUG nova.compute.manager [req-b98f7e07-5c1f-47d7-b858-0d50bb8dc7fa req-4d6a359e-4ce6-4d3a-ac36-9cade10f97cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing instance network info cache due to event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:50:11 np0005466030 nova_compute[230518]: 2025-10-02 12:50:11.507 2 DEBUG oslo_concurrency.lockutils [req-b98f7e07-5c1f-47d7-b858-0d50bb8dc7fa req-4d6a359e-4ce6-4d3a-ac36-9cade10f97cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:50:11 np0005466030 nova_compute[230518]: 2025-10-02 12:50:11.508 2 DEBUG oslo_concurrency.lockutils [req-b98f7e07-5c1f-47d7-b858-0d50bb8dc7fa req-4d6a359e-4ce6-4d3a-ac36-9cade10f97cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:50:11 np0005466030 nova_compute[230518]: 2025-10-02 12:50:11.508 2 DEBUG nova.network.neutron [req-b98f7e07-5c1f-47d7-b858-0d50bb8dc7fa req-4d6a359e-4ce6-4d3a-ac36-9cade10f97cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:50:11 np0005466030 nova_compute[230518]: 2025-10-02 12:50:11.511 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "refresh_cache-33780b49-b5a1-4f3f-a6c5-a00011d53718" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:50:11 np0005466030 nova_compute[230518]: 2025-10-02 12:50:11.512 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquired lock "refresh_cache-33780b49-b5a1-4f3f-a6c5-a00011d53718" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:50:11 np0005466030 nova_compute[230518]: 2025-10-02 12:50:11.512 2 DEBUG nova.network.neutron [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:50:11 np0005466030 nova_compute[230518]: 2025-10-02 12:50:11.623 2 DEBUG nova.compute.manager [req-4853c0b7-ace8-4884-8ead-4fac1b496eea req-244c5ab5-140a-4c52-9611-772120b57b49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-changed-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:11 np0005466030 nova_compute[230518]: 2025-10-02 12:50:11.623 2 DEBUG nova.compute.manager [req-4853c0b7-ace8-4884-8ead-4fac1b496eea req-244c5ab5-140a-4c52-9611-772120b57b49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Refreshing instance network info cache due to event network-changed-6f16f975-1155-4931-9798-72b46e8ca37f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:50:11 np0005466030 nova_compute[230518]: 2025-10-02 12:50:11.623 2 DEBUG oslo_concurrency.lockutils [req-4853c0b7-ace8-4884-8ead-4fac1b496eea req-244c5ab5-140a-4c52-9611-772120b57b49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-33780b49-b5a1-4f3f-a6c5-a00011d53718" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:50:11 np0005466030 nova_compute[230518]: 2025-10-02 12:50:11.780 2 DEBUG nova.network.neutron [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:50:12 np0005466030 nova_compute[230518]: 2025-10-02 12:50:12.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:12 np0005466030 nova_compute[230518]: 2025-10-02 12:50:12.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:12.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:12.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e316 e316: 3 total, 3 up, 3 in
Oct  2 08:50:12 np0005466030 nova_compute[230518]: 2025-10-02 12:50:12.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:13 np0005466030 nova_compute[230518]: 2025-10-02 12:50:13.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:50:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:14.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:50:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:14.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.326 2 DEBUG nova.network.neutron [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Updating instance_info_cache with network_info: [{"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.397 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.514 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.515 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.515 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.515 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.541 2 DEBUG nova.network.neutron [req-b98f7e07-5c1f-47d7-b858-0d50bb8dc7fa req-4d6a359e-4ce6-4d3a-ac36-9cade10f97cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updated VIF entry in instance network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.542 2 DEBUG nova.network.neutron [req-b98f7e07-5c1f-47d7-b858-0d50bb8dc7fa req-4d6a359e-4ce6-4d3a-ac36-9cade10f97cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.915 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Releasing lock "refresh_cache-33780b49-b5a1-4f3f-a6c5-a00011d53718" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.915 2 DEBUG nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance network_info: |[{"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.916 2 DEBUG oslo_concurrency.lockutils [req-4853c0b7-ace8-4884-8ead-4fac1b496eea req-244c5ab5-140a-4c52-9611-772120b57b49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-33780b49-b5a1-4f3f-a6c5-a00011d53718" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.916 2 DEBUG nova.network.neutron [req-4853c0b7-ace8-4884-8ead-4fac1b496eea req-244c5ab5-140a-4c52-9611-772120b57b49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Refreshing network info cache for port 6f16f975-1155-4931-9798-72b46e8ca37f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.919 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Start _get_guest_xml network_info=[{"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'delete_on_termination': True, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a7bdd212-0d34-40b8-9af9-06388d215028', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a7bdd212-0d34-40b8-9af9-06388d215028', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '33780b49-b5a1-4f3f-a6c5-a00011d53718', 'attached_at': '', 'detached_at': '', 'volume_id': 'a7bdd212-0d34-40b8-9af9-06388d215028', 'serial': 'a7bdd212-0d34-40b8-9af9-06388d215028'}, 'boot_index': 0, 'attachment_id': '215e435c-8279-4bbc-bbb1-8c2a849c9afe', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.922 2 WARNING nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.926 2 DEBUG nova.virt.libvirt.host [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.927 2 DEBUG nova.virt.libvirt.host [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.931 2 DEBUG nova.virt.libvirt.host [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.932 2 DEBUG nova.virt.libvirt.host [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.933 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.933 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.934 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.934 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.934 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.934 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.935 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.935 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.935 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.935 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.936 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.936 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.962 2 DEBUG nova.storage.rbd_utils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] rbd image 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.965 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:14 np0005466030 nova_compute[230518]: 2025-10-02 12:50:14.995 2 DEBUG oslo_concurrency.lockutils [req-b98f7e07-5c1f-47d7-b858-0d50bb8dc7fa req-4d6a359e-4ce6-4d3a-ac36-9cade10f97cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:50:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:50:15 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2961129839' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.571 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.633 2 DEBUG nova.virt.libvirt.vif [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:50:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1058292720',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1058292720',id=136,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEq6SX0G1P4eXV7KyXSZ/35WTMB3jbVB1SupKDvjpwDO6estYqWrZvLKSnDQx+vS99wAQs9lzaTS/c5UGzVwCp+w6SXjcPi0171w0SmxtpKZLlMM30YnMg14Y62hnRsW1w==',key_name='tempest-keypair-290079418',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1308a7eb298f49baaeaf3dc3a6acf592',ramdisk_id='',reservation_id='r-vezvikos',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name
='tempest-ServerActionsV293TestJSON-365577023',owner_user_name='tempest-ServerActionsV293TestJSON-365577023-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:50:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='af2648eefb594bc49309cccf408f7ae1',uuid=33780b49-b5a1-4f3f-a6c5-a00011d53718,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.634 2 DEBUG nova.network.os_vif_util [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converting VIF {"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.634 2 DEBUG nova.network.os_vif_util [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.635 2 DEBUG nova.objects.instance [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'pci_devices' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.802 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:50:15 np0005466030 nova_compute[230518]:  <uuid>33780b49-b5a1-4f3f-a6c5-a00011d53718</uuid>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:  <name>instance-00000088</name>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerActionsV293TestJSON-server-1058292720</nova:name>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:50:14</nova:creationTime>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:50:15 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:        <nova:user uuid="af2648eefb594bc49309cccf408f7ae1">tempest-ServerActionsV293TestJSON-365577023-project-member</nova:user>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:        <nova:project uuid="1308a7eb298f49baaeaf3dc3a6acf592">tempest-ServerActionsV293TestJSON-365577023</nova:project>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:        <nova:port uuid="6f16f975-1155-4931-9798-72b46e8ca37f">
Oct  2 08:50:15 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <entry name="serial">33780b49-b5a1-4f3f-a6c5-a00011d53718</entry>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <entry name="uuid">33780b49-b5a1-4f3f-a6c5-a00011d53718</entry>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config">
Oct  2 08:50:15 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:50:15 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="volumes/volume-a7bdd212-0d34-40b8-9af9-06388d215028">
Oct  2 08:50:15 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:50:15 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <serial>a7bdd212-0d34-40b8-9af9-06388d215028</serial>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:9e:0c:7f"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <target dev="tap6f16f975-11"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/console.log" append="off"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:50:15 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:50:15 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:50:15 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:50:15 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.803 2 DEBUG nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Preparing to wait for external event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.804 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.804 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.805 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.806 2 DEBUG nova.virt.libvirt.vif [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:50:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1058292720',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1058292720',id=136,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEq6SX0G1P4eXV7KyXSZ/35WTMB3jbVB1SupKDvjpwDO6estYqWrZvLKSnDQx+vS99wAQs9lzaTS/c5UGzVwCp+w6SXjcPi0171w0SmxtpKZLlMM30YnMg14Y62hnRsW1w==',key_name='tempest-keypair-290079418',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1308a7eb298f49baaeaf3dc3a6acf592',ramdisk_id='',reservation_id='r-vezvikos',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_pr
oject_name='tempest-ServerActionsV293TestJSON-365577023',owner_user_name='tempest-ServerActionsV293TestJSON-365577023-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:50:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='af2648eefb594bc49309cccf408f7ae1',uuid=33780b49-b5a1-4f3f-a6c5-a00011d53718,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.806 2 DEBUG nova.network.os_vif_util [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converting VIF {"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.807 2 DEBUG nova.network.os_vif_util [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.807 2 DEBUG os_vif [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.809 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.809 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.813 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f16f975-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.814 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6f16f975-11, col_values=(('external_ids', {'iface-id': '6f16f975-1155-4931-9798-72b46e8ca37f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:0c:7f', 'vm-uuid': '33780b49-b5a1-4f3f-a6c5-a00011d53718'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:15 np0005466030 NetworkManager[44960]: <info>  [1759409415.8162] manager: (tap6f16f975-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.826 2 INFO os_vif [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11')#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.944 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.944 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.944 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] No VIF found with MAC fa:16:3e:9e:0c:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.945 2 INFO nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Using config drive#033[00m
Oct  2 08:50:15 np0005466030 nova_compute[230518]: 2025-10-02 12:50:15.971 2 DEBUG nova.storage.rbd_utils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] rbd image 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:50:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:16.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:50:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:16.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:50:17 np0005466030 nova_compute[230518]: 2025-10-02 12:50:17.136 2 INFO nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Creating config drive at /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config#033[00m
Oct  2 08:50:17 np0005466030 nova_compute[230518]: 2025-10-02 12:50:17.142 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe2tnfey8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:17 np0005466030 nova_compute[230518]: 2025-10-02 12:50:17.279 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe2tnfey8" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:17 np0005466030 nova_compute[230518]: 2025-10-02 12:50:17.307 2 DEBUG nova.storage.rbd_utils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] rbd image 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:50:17 np0005466030 nova_compute[230518]: 2025-10-02 12:50:17.310 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:50:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:18.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:50:18 np0005466030 nova_compute[230518]: 2025-10-02 12:50:18.160 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.850s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:18 np0005466030 nova_compute[230518]: 2025-10-02 12:50:18.162 2 INFO nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Deleting local config drive /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config because it was imported into RBD.#033[00m
Oct  2 08:50:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:18.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:18 np0005466030 kernel: tap6f16f975-11: entered promiscuous mode
Oct  2 08:50:18 np0005466030 NetworkManager[44960]: <info>  [1759409418.2259] manager: (tap6f16f975-11): new Tun device (/org/freedesktop/NetworkManager/Devices/257)
Oct  2 08:50:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:50:18Z|00547|binding|INFO|Claiming lport 6f16f975-1155-4931-9798-72b46e8ca37f for this chassis.
Oct  2 08:50:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:50:18Z|00548|binding|INFO|6f16f975-1155-4931-9798-72b46e8ca37f: Claiming fa:16:3e:9e:0c:7f 10.100.0.13
Oct  2 08:50:18 np0005466030 nova_compute[230518]: 2025-10-02 12:50:18.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:50:18Z|00549|binding|INFO|Setting lport 6f16f975-1155-4931-9798-72b46e8ca37f ovn-installed in OVS
Oct  2 08:50:18 np0005466030 nova_compute[230518]: 2025-10-02 12:50:18.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:18 np0005466030 systemd-machined[188247]: New machine qemu-65-instance-00000088.
Oct  2 08:50:18 np0005466030 systemd[1]: Started Virtual Machine qemu-65-instance-00000088.
Oct  2 08:50:18 np0005466030 systemd-udevd[282980]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:50:18 np0005466030 NetworkManager[44960]: <info>  [1759409418.3069] device (tap6f16f975-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:50:18 np0005466030 NetworkManager[44960]: <info>  [1759409418.3080] device (tap6f16f975-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:50:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:50:18Z|00550|binding|INFO|Setting lport 6f16f975-1155-4931-9798-72b46e8ca37f up in Southbound
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.308 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:0c:7f 10.100.0.13'], port_security=['fa:16:3e:9e:0c:7f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '33780b49-b5a1-4f3f-a6c5-a00011d53718', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ea35968-5cdb-414e-9226-6ba534628944', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1308a7eb298f49baaeaf3dc3a6acf592', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fda5d6a4-b319-45fc-a863-02113f198b5e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=526db40a-6e83-473d-bcf2-8fd6e9668069, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6f16f975-1155-4931-9798-72b46e8ca37f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.308 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6f16f975-1155-4931-9798-72b46e8ca37f in datapath 1ea35968-5cdb-414e-9226-6ba534628944 bound to our chassis#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.310 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ea35968-5cdb-414e-9226-6ba534628944#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.324 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f8e7c7-193b-4cd3-af3f-722a4fc12903]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.325 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1ea35968-51 in ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.327 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1ea35968-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.327 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7664bc5c-5cdc-42c6-8dae-4f5d434086e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.327 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[43d440a1-8985-4fae-94df-53864151264a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.342 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[92850906-5c94-4cee-b28f-91627263d12b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.360 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0bbcca-f28a-473e-a923-71d9e804834a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.400 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0bb5a2-3a5e-4317-b161-e424f77bd78e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.405 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[352edc2e-ac62-4e83-b057-734c2f41d389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:18 np0005466030 systemd-udevd[282982]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:50:18 np0005466030 NetworkManager[44960]: <info>  [1759409418.4076] manager: (tap1ea35968-50): new Veth device (/org/freedesktop/NetworkManager/Devices/258)
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.439 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[082822ec-d233-4d5b-8edf-6dedad58fabb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.442 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e758f6d9-d845-4ec6-80af-5645ae208a92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:18 np0005466030 NetworkManager[44960]: <info>  [1759409418.4667] device (tap1ea35968-50): carrier: link connected
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.473 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[51902685-bb4e-44c0-a91f-158b37888215]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.492 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8f1ce06f-eb4c-4ef5-b53c-5af230728cdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ea35968-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:b5:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 168], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728202, 'reachable_time': 29585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283013, 'error': None, 'target': 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.507 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[20d8bcbf-3efc-419f-b2cc-cc5a41b28e1e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe38:b578'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 728202, 'tstamp': 728202}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283014, 'error': None, 'target': 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.524 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[979e5d35-6d20-4c64-9f28-083781d3b288]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ea35968-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:b5:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 168], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728202, 'reachable_time': 29585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283015, 'error': None, 'target': 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:18 np0005466030 nova_compute[230518]: 2025-10-02 12:50:18.533 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.560 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0a390b15-df31-49a9-bbda-800139ee4e41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.615 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bff97ed5-702b-45b0-b6c5-0a84ef6e1121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.617 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ea35968-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.617 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.617 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ea35968-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:18 np0005466030 nova_compute[230518]: 2025-10-02 12:50:18.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:18 np0005466030 kernel: tap1ea35968-50: entered promiscuous mode
Oct  2 08:50:18 np0005466030 NetworkManager[44960]: <info>  [1759409418.6205] manager: (tap1ea35968-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Oct  2 08:50:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:50:18Z|00551|binding|INFO|Releasing lport 656124c9-fbda-4e47-b94b-fbe1ed24070e from this chassis (sb_readonly=0)
Oct  2 08:50:18 np0005466030 nova_compute[230518]: 2025-10-02 12:50:18.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:18 np0005466030 nova_compute[230518]: 2025-10-02 12:50:18.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.624 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ea35968-50, col_values=(('external_ids', {'iface-id': '656124c9-fbda-4e47-b94b-fbe1ed24070e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.628 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1ea35968-5cdb-414e-9226-6ba534628944.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1ea35968-5cdb-414e-9226-6ba534628944.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.629 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1b832282-6a93-462c-9da3-d7d2f17ae193]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.630 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-1ea35968-5cdb-414e-9226-6ba534628944
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/1ea35968-5cdb-414e-9226-6ba534628944.pid.haproxy
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 1ea35968-5cdb-414e-9226-6ba534628944
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:50:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.630 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'env', 'PROCESS_TAG=haproxy-1ea35968-5cdb-414e-9226-6ba534628944', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1ea35968-5cdb-414e-9226-6ba534628944.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:50:18 np0005466030 nova_compute[230518]: 2025-10-02 12:50:18.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:18 np0005466030 nova_compute[230518]: 2025-10-02 12:50:18.651 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:50:18 np0005466030 nova_compute[230518]: 2025-10-02 12:50:18.651 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:50:18 np0005466030 nova_compute[230518]: 2025-10-02 12:50:18.652 2 DEBUG nova.network.neutron [req-4853c0b7-ace8-4884-8ead-4fac1b496eea req-244c5ab5-140a-4c52-9611-772120b57b49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Updated VIF entry in instance network info cache for port 6f16f975-1155-4931-9798-72b46e8ca37f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:50:18 np0005466030 nova_compute[230518]: 2025-10-02 12:50:18.653 2 DEBUG nova.network.neutron [req-4853c0b7-ace8-4884-8ead-4fac1b496eea req-244c5ab5-140a-4c52-9611-772120b57b49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Updating instance_info_cache with network_info: [{"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:18 np0005466030 nova_compute[230518]: 2025-10-02 12:50:18.768 2 DEBUG oslo_concurrency.lockutils [req-4853c0b7-ace8-4884-8ead-4fac1b496eea req-244c5ab5-140a-4c52-9611-772120b57b49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-33780b49-b5a1-4f3f-a6c5-a00011d53718" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:50:18 np0005466030 nova_compute[230518]: 2025-10-02 12:50:18.769 2 DEBUG oslo_concurrency.lockutils [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:18 np0005466030 nova_compute[230518]: 2025-10-02 12:50:18.769 2 DEBUG oslo_concurrency.lockutils [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:18 np0005466030 nova_compute[230518]: 2025-10-02 12:50:18.884 2 INFO nova.compute.manager [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Detaching volume ada5d5be-9d4c-4653-ac57-931c6322dea6#033[00m
Oct  2 08:50:18 np0005466030 nova_compute[230518]: 2025-10-02 12:50:18.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:19 np0005466030 podman[283063]: 2025-10-02 12:50:18.964557281 +0000 UTC m=+0.028921608 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.114 2 DEBUG nova.compute.manager [req-c925ad30-e4f1-43dc-bd81-dfe85a097744 req-b513d25d-83c8-4dba-9061-81794db4a7eb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.115 2 DEBUG oslo_concurrency.lockutils [req-c925ad30-e4f1-43dc-bd81-dfe85a097744 req-b513d25d-83c8-4dba-9061-81794db4a7eb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.115 2 DEBUG oslo_concurrency.lockutils [req-c925ad30-e4f1-43dc-bd81-dfe85a097744 req-b513d25d-83c8-4dba-9061-81794db4a7eb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.115 2 DEBUG oslo_concurrency.lockutils [req-c925ad30-e4f1-43dc-bd81-dfe85a097744 req-b513d25d-83c8-4dba-9061-81794db4a7eb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.116 2 DEBUG nova.compute.manager [req-c925ad30-e4f1-43dc-bd81-dfe85a097744 req-b513d25d-83c8-4dba-9061-81794db4a7eb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Processing event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.151 2 INFO nova.virt.block_device [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Attempting to driver detach volume ada5d5be-9d4c-4653-ac57-931c6322dea6 from mountpoint /dev/vdb#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.166 2 DEBUG nova.virt.libvirt.driver [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Attempting to detach device vdb from instance 3b348c58-f179-41db-bd79-1fdea0ade389 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.168 2 DEBUG nova.virt.libvirt.guest [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:50:19 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:50:19 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-ada5d5be-9d4c-4653-ac57-931c6322dea6">
Oct  2 08:50:19 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:50:19 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:50:19 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:50:19 np0005466030 nova_compute[230518]:  </source>
Oct  2 08:50:19 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:50:19 np0005466030 nova_compute[230518]:  <serial>ada5d5be-9d4c-4653-ac57-931c6322dea6</serial>
Oct  2 08:50:19 np0005466030 nova_compute[230518]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 08:50:19 np0005466030 nova_compute[230518]: </disk>
Oct  2 08:50:19 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.329 2 INFO nova.virt.libvirt.driver [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdb from instance 3b348c58-f179-41db-bd79-1fdea0ade389 from the persistent domain config.#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.330 2 DEBUG nova.virt.libvirt.driver [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 3b348c58-f179-41db-bd79-1fdea0ade389 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 08:50:19 np0005466030 podman[283063]: 2025-10-02 12:50:19.330491352 +0000 UTC m=+0.394855659 container create 739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.330 2 DEBUG nova.virt.libvirt.guest [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:50:19 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:50:19 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-ada5d5be-9d4c-4653-ac57-931c6322dea6">
Oct  2 08:50:19 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:50:19 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:50:19 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:50:19 np0005466030 nova_compute[230518]:  </source>
Oct  2 08:50:19 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 08:50:19 np0005466030 nova_compute[230518]:  <serial>ada5d5be-9d4c-4653-ac57-931c6322dea6</serial>
Oct  2 08:50:19 np0005466030 nova_compute[230518]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 08:50:19 np0005466030 nova_compute[230518]: </disk>
Oct  2 08:50:19 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:50:19 np0005466030 systemd[1]: Started libpod-conmon-739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc.scope.
Oct  2 08:50:19 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:50:19 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2275299b59a6f31b3f5c4c1bc1ae63291b4b9fabe72845925a94cee91153691/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.605 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759409419.6046026, 3b348c58-f179-41db-bd79-1fdea0ade389 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.607 2 DEBUG nova.virt.libvirt.driver [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 3b348c58-f179-41db-bd79-1fdea0ade389 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.611 2 INFO nova.virt.libvirt.driver [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdb from instance 3b348c58-f179-41db-bd79-1fdea0ade389 from the live domain config.#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.689 2 DEBUG nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.691 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409419.6885846, 33780b49-b5a1-4f3f-a6c5-a00011d53718 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.692 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] VM Started (Lifecycle Event)#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.695 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.699 2 INFO nova.virt.libvirt.driver [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance spawned successfully.#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.700 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:50:19 np0005466030 podman[283063]: 2025-10-02 12:50:19.745689808 +0000 UTC m=+0.810054135 container init 739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:50:19 np0005466030 podman[283063]: 2025-10-02 12:50:19.752475751 +0000 UTC m=+0.816840058 container start 739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:50:19 np0005466030 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[283104]: [NOTICE]   (283110) : New worker (283112) forked
Oct  2 08:50:19 np0005466030 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[283104]: [NOTICE]   (283110) : Loading success.
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.906 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.907 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.907 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.908 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.908 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.908 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.914 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.919 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.960 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.960 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409419.6887245, 33780b49-b5a1-4f3f-a6c5-a00011d53718 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.961 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:50:19 np0005466030 nova_compute[230518]: 2025-10-02 12:50:19.978 2 DEBUG nova.objects.instance [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 3b348c58-f179-41db-bd79-1fdea0ade389 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:50:20 np0005466030 nova_compute[230518]: 2025-10-02 12:50:20.035 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:50:20 np0005466030 nova_compute[230518]: 2025-10-02 12:50:20.040 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409419.6925225, 33780b49-b5a1-4f3f-a6c5-a00011d53718 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:50:20 np0005466030 nova_compute[230518]: 2025-10-02 12:50:20.041 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:50:20 np0005466030 nova_compute[230518]: 2025-10-02 12:50:20.072 2 INFO nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Took 10.20 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:50:20 np0005466030 nova_compute[230518]: 2025-10-02 12:50:20.073 2 DEBUG nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:50:20 np0005466030 nova_compute[230518]: 2025-10-02 12:50:20.075 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:50:20 np0005466030 nova_compute[230518]: 2025-10-02 12:50:20.078 2 DEBUG oslo_concurrency.lockutils [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:20 np0005466030 nova_compute[230518]: 2025-10-02 12:50:20.084 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:50:20 np0005466030 nova_compute[230518]: 2025-10-02 12:50:20.125 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:50:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:50:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:20.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:50:20 np0005466030 nova_compute[230518]: 2025-10-02 12:50:20.166 2 INFO nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Took 14.31 seconds to build instance.#033[00m
Oct  2 08:50:20 np0005466030 nova_compute[230518]: 2025-10-02 12:50:20.191 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:20.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:20 np0005466030 podman[283122]: 2025-10-02 12:50:20.814367615 +0000 UTC m=+0.066232449 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:50:20 np0005466030 nova_compute[230518]: 2025-10-02 12:50:20.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:20 np0005466030 podman[283121]: 2025-10-02 12:50:20.841127805 +0000 UTC m=+0.094545268 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct  2 08:50:21 np0005466030 nova_compute[230518]: 2025-10-02 12:50:21.274 2 DEBUG nova.compute.manager [req-f3bc06da-8339-46d5-b967-669cad2cab14 req-6ea9d7e1-6ac0-4d0e-8c6d-f13f345e7922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:21 np0005466030 nova_compute[230518]: 2025-10-02 12:50:21.274 2 DEBUG oslo_concurrency.lockutils [req-f3bc06da-8339-46d5-b967-669cad2cab14 req-6ea9d7e1-6ac0-4d0e-8c6d-f13f345e7922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:21 np0005466030 nova_compute[230518]: 2025-10-02 12:50:21.275 2 DEBUG oslo_concurrency.lockutils [req-f3bc06da-8339-46d5-b967-669cad2cab14 req-6ea9d7e1-6ac0-4d0e-8c6d-f13f345e7922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:21 np0005466030 nova_compute[230518]: 2025-10-02 12:50:21.275 2 DEBUG oslo_concurrency.lockutils [req-f3bc06da-8339-46d5-b967-669cad2cab14 req-6ea9d7e1-6ac0-4d0e-8c6d-f13f345e7922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:21 np0005466030 nova_compute[230518]: 2025-10-02 12:50:21.275 2 DEBUG nova.compute.manager [req-f3bc06da-8339-46d5-b967-669cad2cab14 req-6ea9d7e1-6ac0-4d0e-8c6d-f13f345e7922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] No waiting events found dispatching network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:50:21 np0005466030 nova_compute[230518]: 2025-10-02 12:50:21.276 2 WARNING nova.compute.manager [req-f3bc06da-8339-46d5-b967-669cad2cab14 req-6ea9d7e1-6ac0-4d0e-8c6d-f13f345e7922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received unexpected event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f for instance with vm_state active and task_state None.#033[00m
Oct  2 08:50:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:22.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:22.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:23 np0005466030 nova_compute[230518]: 2025-10-02 12:50:23.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:24.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:24 np0005466030 nova_compute[230518]: 2025-10-02 12:50:24.178 2 DEBUG nova.compute.manager [req-24e84c36-43da-4e05-acc4-615fc8127bc9 req-2980b960-2341-464b-bb6b-710fae839838 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-changed-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:24 np0005466030 nova_compute[230518]: 2025-10-02 12:50:24.179 2 DEBUG nova.compute.manager [req-24e84c36-43da-4e05-acc4-615fc8127bc9 req-2980b960-2341-464b-bb6b-710fae839838 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Refreshing instance network info cache due to event network-changed-6f16f975-1155-4931-9798-72b46e8ca37f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:50:24 np0005466030 nova_compute[230518]: 2025-10-02 12:50:24.179 2 DEBUG oslo_concurrency.lockutils [req-24e84c36-43da-4e05-acc4-615fc8127bc9 req-2980b960-2341-464b-bb6b-710fae839838 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-33780b49-b5a1-4f3f-a6c5-a00011d53718" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:50:24 np0005466030 nova_compute[230518]: 2025-10-02 12:50:24.179 2 DEBUG oslo_concurrency.lockutils [req-24e84c36-43da-4e05-acc4-615fc8127bc9 req-2980b960-2341-464b-bb6b-710fae839838 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-33780b49-b5a1-4f3f-a6c5-a00011d53718" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:50:24 np0005466030 nova_compute[230518]: 2025-10-02 12:50:24.179 2 DEBUG nova.network.neutron [req-24e84c36-43da-4e05-acc4-615fc8127bc9 req-2980b960-2341-464b-bb6b-710fae839838 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Refreshing network info cache for port 6f16f975-1155-4931-9798-72b46e8ca37f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:50:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:50:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:24.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:50:24 np0005466030 nova_compute[230518]: 2025-10-02 12:50:24.542 2 DEBUG oslo_concurrency.lockutils [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:24 np0005466030 nova_compute[230518]: 2025-10-02 12:50:24.543 2 DEBUG oslo_concurrency.lockutils [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:24 np0005466030 nova_compute[230518]: 2025-10-02 12:50:24.562 2 INFO nova.compute.manager [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Detaching volume 77e42bdf-1989-460b-aa47-82eb53d89208#033[00m
Oct  2 08:50:24 np0005466030 nova_compute[230518]: 2025-10-02 12:50:24.835 2 INFO nova.virt.block_device [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Attempting to driver detach volume 77e42bdf-1989-460b-aa47-82eb53d89208 from mountpoint /dev/vdc#033[00m
Oct  2 08:50:24 np0005466030 nova_compute[230518]: 2025-10-02 12:50:24.849 2 DEBUG nova.virt.libvirt.driver [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Attempting to detach device vdc from instance 3b348c58-f179-41db-bd79-1fdea0ade389 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:50:24 np0005466030 nova_compute[230518]: 2025-10-02 12:50:24.850 2 DEBUG nova.virt.libvirt.guest [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:50:24 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:50:24 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-77e42bdf-1989-460b-aa47-82eb53d89208">
Oct  2 08:50:24 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:50:24 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:50:24 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:50:24 np0005466030 nova_compute[230518]:  </source>
Oct  2 08:50:24 np0005466030 nova_compute[230518]:  <target dev="vdc" bus="virtio"/>
Oct  2 08:50:24 np0005466030 nova_compute[230518]:  <serial>77e42bdf-1989-460b-aa47-82eb53d89208</serial>
Oct  2 08:50:24 np0005466030 nova_compute[230518]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct  2 08:50:24 np0005466030 nova_compute[230518]: </disk>
Oct  2 08:50:24 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:50:24 np0005466030 nova_compute[230518]: 2025-10-02 12:50:24.863 2 INFO nova.virt.libvirt.driver [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdc from instance 3b348c58-f179-41db-bd79-1fdea0ade389 from the persistent domain config.#033[00m
Oct  2 08:50:24 np0005466030 nova_compute[230518]: 2025-10-02 12:50:24.863 2 DEBUG nova.virt.libvirt.driver [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 3b348c58-f179-41db-bd79-1fdea0ade389 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 08:50:24 np0005466030 nova_compute[230518]: 2025-10-02 12:50:24.864 2 DEBUG nova.virt.libvirt.guest [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 08:50:24 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:50:24 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-77e42bdf-1989-460b-aa47-82eb53d89208">
Oct  2 08:50:24 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 08:50:24 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 08:50:24 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 08:50:24 np0005466030 nova_compute[230518]:  </source>
Oct  2 08:50:24 np0005466030 nova_compute[230518]:  <target dev="vdc" bus="virtio"/>
Oct  2 08:50:24 np0005466030 nova_compute[230518]:  <serial>77e42bdf-1989-460b-aa47-82eb53d89208</serial>
Oct  2 08:50:24 np0005466030 nova_compute[230518]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct  2 08:50:24 np0005466030 nova_compute[230518]: </disk>
Oct  2 08:50:24 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:50:25 np0005466030 nova_compute[230518]: 2025-10-02 12:50:25.369 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759409425.368921, 3b348c58-f179-41db-bd79-1fdea0ade389 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 08:50:25 np0005466030 nova_compute[230518]: 2025-10-02 12:50:25.370 2 DEBUG nova.virt.libvirt.driver [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 3b348c58-f179-41db-bd79-1fdea0ade389 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 08:50:25 np0005466030 nova_compute[230518]: 2025-10-02 12:50:25.372 2 INFO nova.virt.libvirt.driver [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdc from instance 3b348c58-f179-41db-bd79-1fdea0ade389 from the live domain config.#033[00m
Oct  2 08:50:25 np0005466030 nova_compute[230518]: 2025-10-02 12:50:25.640 2 DEBUG nova.objects.instance [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 3b348c58-f179-41db-bd79-1fdea0ade389 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:50:25 np0005466030 nova_compute[230518]: 2025-10-02 12:50:25.648 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:25 np0005466030 nova_compute[230518]: 2025-10-02 12:50:25.698 2 DEBUG oslo_concurrency.lockutils [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:25 np0005466030 nova_compute[230518]: 2025-10-02 12:50:25.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:25.946 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:25.947 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:25.948 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:26.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:26.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:26 np0005466030 nova_compute[230518]: 2025-10-02 12:50:26.712 2 DEBUG nova.network.neutron [req-24e84c36-43da-4e05-acc4-615fc8127bc9 req-2980b960-2341-464b-bb6b-710fae839838 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Updated VIF entry in instance network info cache for port 6f16f975-1155-4931-9798-72b46e8ca37f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:50:26 np0005466030 nova_compute[230518]: 2025-10-02 12:50:26.713 2 DEBUG nova.network.neutron [req-24e84c36-43da-4e05-acc4-615fc8127bc9 req-2980b960-2341-464b-bb6b-710fae839838 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Updating instance_info_cache with network_info: [{"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:26 np0005466030 nova_compute[230518]: 2025-10-02 12:50:26.742 2 DEBUG oslo_concurrency.lockutils [req-24e84c36-43da-4e05-acc4-615fc8127bc9 req-2980b960-2341-464b-bb6b-710fae839838 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-33780b49-b5a1-4f3f-a6c5-a00011d53718" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:50:27 np0005466030 nova_compute[230518]: 2025-10-02 12:50:27.744 2 DEBUG nova.compute.manager [req-30480608-5666-45c2-8480-b99e52e0c120 req-04a5b64f-e402-4fde-aeb7-11bfcc617f9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:27 np0005466030 nova_compute[230518]: 2025-10-02 12:50:27.744 2 DEBUG nova.compute.manager [req-30480608-5666-45c2-8480-b99e52e0c120 req-04a5b64f-e402-4fde-aeb7-11bfcc617f9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing instance network info cache due to event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:50:27 np0005466030 nova_compute[230518]: 2025-10-02 12:50:27.745 2 DEBUG oslo_concurrency.lockutils [req-30480608-5666-45c2-8480-b99e52e0c120 req-04a5b64f-e402-4fde-aeb7-11bfcc617f9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:50:27 np0005466030 nova_compute[230518]: 2025-10-02 12:50:27.745 2 DEBUG oslo_concurrency.lockutils [req-30480608-5666-45c2-8480-b99e52e0c120 req-04a5b64f-e402-4fde-aeb7-11bfcc617f9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:50:27 np0005466030 nova_compute[230518]: 2025-10-02 12:50:27.745 2 DEBUG nova.network.neutron [req-30480608-5666-45c2-8480-b99e52e0c120 req-04a5b64f-e402-4fde-aeb7-11bfcc617f9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #106. Immutable memtables: 0.
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.083011) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 106
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428083082, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 474, "num_deletes": 251, "total_data_size": 555599, "memory_usage": 565128, "flush_reason": "Manual Compaction"}
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #107: started
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:28.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:28.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428253087, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 107, "file_size": 366281, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54561, "largest_seqno": 55030, "table_properties": {"data_size": 363680, "index_size": 637, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6515, "raw_average_key_size": 19, "raw_value_size": 358421, "raw_average_value_size": 1051, "num_data_blocks": 28, "num_entries": 341, "num_filter_entries": 341, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409408, "oldest_key_time": 1759409408, "file_creation_time": 1759409428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 170135 microseconds, and 1816 cpu microseconds.
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.253153) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #107: 366281 bytes OK
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.253176) [db/memtable_list.cc:519] [default] Level-0 commit table #107 started
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.486364) [db/memtable_list.cc:722] [default] Level-0 commit table #107: memtable #1 done
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.486408) EVENT_LOG_v1 {"time_micros": 1759409428486398, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.486429) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 552690, prev total WAL file size 552690, number of live WAL files 2.
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000103.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.487035) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [107(357KB)], [105(13MB)]
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428487085, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [107], "files_L6": [105], "score": -1, "input_data_size": 14714752, "oldest_snapshot_seqno": -1}
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #108: 7920 keys, 12844925 bytes, temperature: kUnknown
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428691515, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 108, "file_size": 12844925, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12790121, "index_size": 33892, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19845, "raw_key_size": 205573, "raw_average_key_size": 25, "raw_value_size": 12647169, "raw_average_value_size": 1596, "num_data_blocks": 1333, "num_entries": 7920, "num_filter_entries": 7920, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 108, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.691746) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 12844925 bytes
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.899654) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 72.0 rd, 62.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 13.7 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(75.2) write-amplify(35.1) OK, records in: 8434, records dropped: 514 output_compression: NoCompression
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.899697) EVENT_LOG_v1 {"time_micros": 1759409428899681, "job": 66, "event": "compaction_finished", "compaction_time_micros": 204496, "compaction_time_cpu_micros": 28703, "output_level": 6, "num_output_files": 1, "total_output_size": 12844925, "num_input_records": 8434, "num_output_records": 7920, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428899947, "job": 66, "event": "table_file_deletion", "file_number": 107}
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000105.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428902614, "job": 66, "event": "table_file_deletion", "file_number": 105}
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.486921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.902644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.902647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.902649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.902650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.902651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:50:28 np0005466030 nova_compute[230518]: 2025-10-02 12:50:28.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:30 np0005466030 nova_compute[230518]: 2025-10-02 12:50:30.058 2 DEBUG nova.network.neutron [req-30480608-5666-45c2-8480-b99e52e0c120 req-04a5b64f-e402-4fde-aeb7-11bfcc617f9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updated VIF entry in instance network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:50:30 np0005466030 nova_compute[230518]: 2025-10-02 12:50:30.059 2 DEBUG nova.network.neutron [req-30480608-5666-45c2-8480-b99e52e0c120 req-04a5b64f-e402-4fde-aeb7-11bfcc617f9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:30 np0005466030 nova_compute[230518]: 2025-10-02 12:50:30.153 2 DEBUG oslo_concurrency.lockutils [req-30480608-5666-45c2-8480-b99e52e0c120 req-04a5b64f-e402-4fde-aeb7-11bfcc617f9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:50:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:30.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:30.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:30 np0005466030 nova_compute[230518]: 2025-10-02 12:50:30.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:50:31 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2356732532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:50:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:50:31 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/501928493' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:50:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:50:31 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/501928493' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:50:31 np0005466030 podman[283170]: 2025-10-02 12:50:31.55979181 +0000 UTC m=+0.057611058 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3)
Oct  2 08:50:31 np0005466030 podman[283171]: 2025-10-02 12:50:31.566958825 +0000 UTC m=+0.062140650 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct  2 08:50:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:32.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:32.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:33 np0005466030 nova_compute[230518]: 2025-10-02 12:50:33.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:34.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:34.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:35 np0005466030 nova_compute[230518]: 2025-10-02 12:50:35.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:36.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:36.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e317 e317: 3 total, 3 up, 3 in
Oct  2 08:50:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:50:36Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9e:0c:7f 10.100.0.13
Oct  2 08:50:36 np0005466030 ovn_controller[129257]: 2025-10-02T12:50:36Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:0c:7f 10.100.0.13
Oct  2 08:50:36 np0005466030 nova_compute[230518]: 2025-10-02 12:50:36.974 2 DEBUG nova.compute.manager [req-30db5eaa-b14d-46f4-a1a5-7626c3596388 req-4784b5f0-b1e8-466d-9ddf-272f29a2f3e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:36 np0005466030 nova_compute[230518]: 2025-10-02 12:50:36.974 2 DEBUG nova.compute.manager [req-30db5eaa-b14d-46f4-a1a5-7626c3596388 req-4784b5f0-b1e8-466d-9ddf-272f29a2f3e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing instance network info cache due to event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:50:36 np0005466030 nova_compute[230518]: 2025-10-02 12:50:36.975 2 DEBUG oslo_concurrency.lockutils [req-30db5eaa-b14d-46f4-a1a5-7626c3596388 req-4784b5f0-b1e8-466d-9ddf-272f29a2f3e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:50:36 np0005466030 nova_compute[230518]: 2025-10-02 12:50:36.975 2 DEBUG oslo_concurrency.lockutils [req-30db5eaa-b14d-46f4-a1a5-7626c3596388 req-4784b5f0-b1e8-466d-9ddf-272f29a2f3e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:50:36 np0005466030 nova_compute[230518]: 2025-10-02 12:50:36.975 2 DEBUG nova.network.neutron [req-30db5eaa-b14d-46f4-a1a5-7626c3596388 req-4784b5f0-b1e8-466d-9ddf-272f29a2f3e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:50:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:50:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:38.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:50:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:38.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:38 np0005466030 nova_compute[230518]: 2025-10-02 12:50:38.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:40.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:50:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:40.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:50:40 np0005466030 nova_compute[230518]: 2025-10-02 12:50:40.406 2 DEBUG nova.network.neutron [req-30db5eaa-b14d-46f4-a1a5-7626c3596388 req-4784b5f0-b1e8-466d-9ddf-272f29a2f3e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updated VIF entry in instance network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:50:40 np0005466030 nova_compute[230518]: 2025-10-02 12:50:40.406 2 DEBUG nova.network.neutron [req-30db5eaa-b14d-46f4-a1a5-7626c3596388 req-4784b5f0-b1e8-466d-9ddf-272f29a2f3e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:40 np0005466030 nova_compute[230518]: 2025-10-02 12:50:40.427 2 DEBUG oslo_concurrency.lockutils [req-30db5eaa-b14d-46f4-a1a5-7626c3596388 req-4784b5f0-b1e8-466d-9ddf-272f29a2f3e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:50:40 np0005466030 nova_compute[230518]: 2025-10-02 12:50:40.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:42.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:42.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:42.869 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:50:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:42.870 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:50:42 np0005466030 nova_compute[230518]: 2025-10-02 12:50:42.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:43 np0005466030 nova_compute[230518]: 2025-10-02 12:50:43.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:44.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:44.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:45 np0005466030 nova_compute[230518]: 2025-10-02 12:50:45.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:45.872 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:46.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:46.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:47 np0005466030 nova_compute[230518]: 2025-10-02 12:50:47.192 2 DEBUG nova.compute.manager [req-1c46be69-fb18-4123-92fe-35d745b072c8 req-d1dc9d84-79ce-44cc-8690-fb3e13f55372 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:47 np0005466030 nova_compute[230518]: 2025-10-02 12:50:47.193 2 DEBUG nova.compute.manager [req-1c46be69-fb18-4123-92fe-35d745b072c8 req-d1dc9d84-79ce-44cc-8690-fb3e13f55372 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing instance network info cache due to event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:50:47 np0005466030 nova_compute[230518]: 2025-10-02 12:50:47.193 2 DEBUG oslo_concurrency.lockutils [req-1c46be69-fb18-4123-92fe-35d745b072c8 req-d1dc9d84-79ce-44cc-8690-fb3e13f55372 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:50:47 np0005466030 nova_compute[230518]: 2025-10-02 12:50:47.193 2 DEBUG oslo_concurrency.lockutils [req-1c46be69-fb18-4123-92fe-35d745b072c8 req-d1dc9d84-79ce-44cc-8690-fb3e13f55372 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:50:47 np0005466030 nova_compute[230518]: 2025-10-02 12:50:47.193 2 DEBUG nova.network.neutron [req-1c46be69-fb18-4123-92fe-35d745b072c8 req-d1dc9d84-79ce-44cc-8690-fb3e13f55372 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:50:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e318 e318: 3 total, 3 up, 3 in
Oct  2 08:50:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:48.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:48.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:48 np0005466030 nova_compute[230518]: 2025-10-02 12:50:48.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:49 np0005466030 nova_compute[230518]: 2025-10-02 12:50:49.985 2 DEBUG nova.network.neutron [req-1c46be69-fb18-4123-92fe-35d745b072c8 req-d1dc9d84-79ce-44cc-8690-fb3e13f55372 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updated VIF entry in instance network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:50:49 np0005466030 nova_compute[230518]: 2025-10-02 12:50:49.985 2 DEBUG nova.network.neutron [req-1c46be69-fb18-4123-92fe-35d745b072c8 req-d1dc9d84-79ce-44cc-8690-fb3e13f55372 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.008 2 DEBUG oslo_concurrency.lockutils [req-1c46be69-fb18-4123-92fe-35d745b072c8 req-d1dc9d84-79ce-44cc-8690-fb3e13f55372 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.053 2 DEBUG oslo_concurrency.lockutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.053 2 DEBUG oslo_concurrency.lockutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.054 2 DEBUG oslo_concurrency.lockutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.054 2 DEBUG oslo_concurrency.lockutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.054 2 DEBUG oslo_concurrency.lockutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.055 2 INFO nova.compute.manager [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Terminating instance#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.057 2 DEBUG nova.compute.manager [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:50:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:50.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:50:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:50.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:50:50 np0005466030 kernel: tapa568d61d-68 (unregistering): left promiscuous mode
Oct  2 08:50:50 np0005466030 NetworkManager[44960]: <info>  [1759409450.4351] device (tapa568d61d-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:50:50 np0005466030 ovn_controller[129257]: 2025-10-02T12:50:50Z|00552|binding|INFO|Releasing lport a568d61d-6863-474f-83f4-ba38b88de19a from this chassis (sb_readonly=0)
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:50 np0005466030 ovn_controller[129257]: 2025-10-02T12:50:50Z|00553|binding|INFO|Setting lport a568d61d-6863-474f-83f4-ba38b88de19a down in Southbound
Oct  2 08:50:50 np0005466030 ovn_controller[129257]: 2025-10-02T12:50:50Z|00554|binding|INFO|Removing iface tapa568d61d-68 ovn-installed in OVS
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.458 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:2f:46 10.100.0.7'], port_security=['fa:16:3e:fa:2f:46 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3b348c58-f179-41db-bd79-1fdea0ade389', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd3e0300f3cf5493d8a9e62e2c4a95767', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b476c8f5-f8e9-416f-ac80-6e4f069ebf34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19951b97-567d-403e-9a99-3dd9660c4a7b, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a568d61d-6863-474f-83f4-ba38b88de19a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:50:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.459 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a568d61d-6863-474f-83f4-ba38b88de19a in datapath aa3b4df3-6044-4a53-8039-c9a5c05725aa unbound from our chassis#033[00m
Oct  2 08:50:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.461 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa3b4df3-6044-4a53-8039-c9a5c05725aa#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.485 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3f092925-d128-455b-b3b3-0386795e3d6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:50 np0005466030 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000084.scope: Deactivated successfully.
Oct  2 08:50:50 np0005466030 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000084.scope: Consumed 20.861s CPU time.
Oct  2 08:50:50 np0005466030 systemd-machined[188247]: Machine qemu-64-instance-00000084 terminated.
Oct  2 08:50:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.513 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a9840349-afcf-4d15-bf21-ca31eaaaefb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.516 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c19bee2f-af89-4745-b46d-e8690d0179fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.539 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[48522065-235e-490e-b6f1-db2d6d16931b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.563 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[12b98cfc-fece-4715-b079-3d5fb65925f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa3b4df3-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:81:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714082, 'reachable_time': 29723, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283221, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.580 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[347aa48c-f0f5-4553-bf77-4bacc089eebd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaa3b4df3-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714093, 'tstamp': 714093}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283222, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaa3b4df3-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714097, 'tstamp': 714097}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283222, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:50:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.581 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa3b4df3-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.590 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa3b4df3-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.591 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:50:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.591 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa3b4df3-60, col_values=(('external_ids', {'iface-id': 'fb7cdb79-68cf-4ad8-80ea-cb25da88eb6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.591 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.700 2 INFO nova.virt.libvirt.driver [-] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Instance destroyed successfully.#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.700 2 DEBUG nova.objects.instance [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'resources' on Instance uuid 3b348c58-f179-41db-bd79-1fdea0ade389 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.713 2 DEBUG nova.virt.libvirt.vif [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:47:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1396980789',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1396980789',id=132,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8AO0n4F9qQHfktAb1KqUpFZGDIBw8Q+DMA6Gtgwbe4fJSHZtT9yxONp57Pu+/JlMfK0hzt7rHvQAXjHsqixRJ8kNgVzAz0UxxllE90LKBM9NxuJLShf+JD7SBBSy6srw==',key_name='tempest-TestInstancesWithCinderVolumes-1494763419',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:48:07Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d3e0300f3cf5493d8a9e62e2c4a95767',ramdisk_id='',reservation_id='r-g7qut09a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',im
age_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-621751307',owner_user_name='tempest-TestInstancesWithCinderVolumes-621751307-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:48:07Z,user_data=None,user_id='e3cd62a3208649c183d3fc2edc1c0f18',uuid=3b348c58-f179-41db-bd79-1fdea0ade389,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.713 2 DEBUG nova.network.os_vif_util [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converting VIF {"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.714 2 DEBUG nova.network.os_vif_util [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fa:2f:46,bridge_name='br-int',has_traffic_filtering=True,id=a568d61d-6863-474f-83f4-ba38b88de19a,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa568d61d-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.715 2 DEBUG os_vif [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:2f:46,bridge_name='br-int',has_traffic_filtering=True,id=a568d61d-6863-474f-83f4-ba38b88de19a,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa568d61d-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.717 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa568d61d-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:50:50 np0005466030 nova_compute[230518]: 2025-10-02 12:50:50.725 2 INFO os_vif [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:2f:46,bridge_name='br-int',has_traffic_filtering=True,id=a568d61d-6863-474f-83f4-ba38b88de19a,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa568d61d-68')#033[00m
Oct  2 08:50:51 np0005466030 nova_compute[230518]: 2025-10-02 12:50:51.049 2 DEBUG nova.compute.manager [req-1edd2d27-b9f6-400b-b3ca-3213025764d5 req-fc474fdd-46b0-40e7-a3ed-d35c9d201793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-vif-unplugged-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:51 np0005466030 nova_compute[230518]: 2025-10-02 12:50:51.050 2 DEBUG oslo_concurrency.lockutils [req-1edd2d27-b9f6-400b-b3ca-3213025764d5 req-fc474fdd-46b0-40e7-a3ed-d35c9d201793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:51 np0005466030 nova_compute[230518]: 2025-10-02 12:50:51.050 2 DEBUG oslo_concurrency.lockutils [req-1edd2d27-b9f6-400b-b3ca-3213025764d5 req-fc474fdd-46b0-40e7-a3ed-d35c9d201793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:51 np0005466030 nova_compute[230518]: 2025-10-02 12:50:51.050 2 DEBUG oslo_concurrency.lockutils [req-1edd2d27-b9f6-400b-b3ca-3213025764d5 req-fc474fdd-46b0-40e7-a3ed-d35c9d201793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:51 np0005466030 nova_compute[230518]: 2025-10-02 12:50:51.050 2 DEBUG nova.compute.manager [req-1edd2d27-b9f6-400b-b3ca-3213025764d5 req-fc474fdd-46b0-40e7-a3ed-d35c9d201793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] No waiting events found dispatching network-vif-unplugged-a568d61d-6863-474f-83f4-ba38b88de19a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:50:51 np0005466030 nova_compute[230518]: 2025-10-02 12:50:51.051 2 DEBUG nova.compute.manager [req-1edd2d27-b9f6-400b-b3ca-3213025764d5 req-fc474fdd-46b0-40e7-a3ed-d35c9d201793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-vif-unplugged-a568d61d-6863-474f-83f4-ba38b88de19a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:50:51 np0005466030 podman[283253]: 2025-10-02 12:50:51.822198949 +0000 UTC m=+0.065672502 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:50:51 np0005466030 podman[283252]: 2025-10-02 12:50:51.852428957 +0000 UTC m=+0.102159556 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:50:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:52.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:52.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:53 np0005466030 nova_compute[230518]: 2025-10-02 12:50:53.160 2 DEBUG nova.compute.manager [req-f54b316a-a433-4ac9-83a1-7a5e2b9045c8 req-d52a0d09-1eda-45c7-8ec9-4de6f0d7a1bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-vif-plugged-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:53 np0005466030 nova_compute[230518]: 2025-10-02 12:50:53.161 2 DEBUG oslo_concurrency.lockutils [req-f54b316a-a433-4ac9-83a1-7a5e2b9045c8 req-d52a0d09-1eda-45c7-8ec9-4de6f0d7a1bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:53 np0005466030 nova_compute[230518]: 2025-10-02 12:50:53.162 2 DEBUG oslo_concurrency.lockutils [req-f54b316a-a433-4ac9-83a1-7a5e2b9045c8 req-d52a0d09-1eda-45c7-8ec9-4de6f0d7a1bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:53 np0005466030 nova_compute[230518]: 2025-10-02 12:50:53.163 2 DEBUG oslo_concurrency.lockutils [req-f54b316a-a433-4ac9-83a1-7a5e2b9045c8 req-d52a0d09-1eda-45c7-8ec9-4de6f0d7a1bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:53 np0005466030 nova_compute[230518]: 2025-10-02 12:50:53.163 2 DEBUG nova.compute.manager [req-f54b316a-a433-4ac9-83a1-7a5e2b9045c8 req-d52a0d09-1eda-45c7-8ec9-4de6f0d7a1bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] No waiting events found dispatching network-vif-plugged-a568d61d-6863-474f-83f4-ba38b88de19a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:50:53 np0005466030 nova_compute[230518]: 2025-10-02 12:50:53.163 2 WARNING nova.compute.manager [req-f54b316a-a433-4ac9-83a1-7a5e2b9045c8 req-d52a0d09-1eda-45c7-8ec9-4de6f0d7a1bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received unexpected event network-vif-plugged-a568d61d-6863-474f-83f4-ba38b88de19a for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:50:53 np0005466030 nova_compute[230518]: 2025-10-02 12:50:53.504 2 INFO nova.virt.libvirt.driver [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Deleting instance files /var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389_del#033[00m
Oct  2 08:50:53 np0005466030 nova_compute[230518]: 2025-10-02 12:50:53.504 2 INFO nova.virt.libvirt.driver [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Deletion of /var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389_del complete#033[00m
Oct  2 08:50:53 np0005466030 nova_compute[230518]: 2025-10-02 12:50:53.571 2 INFO nova.compute.manager [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Took 3.51 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:50:53 np0005466030 nova_compute[230518]: 2025-10-02 12:50:53.572 2 DEBUG oslo.service.loopingcall [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:50:53 np0005466030 nova_compute[230518]: 2025-10-02 12:50:53.572 2 DEBUG nova.compute.manager [-] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:50:53 np0005466030 nova_compute[230518]: 2025-10-02 12:50:53.572 2 DEBUG nova.network.neutron [-] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:50:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:50:53 np0005466030 nova_compute[230518]: 2025-10-02 12:50:53.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:54.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:54.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:54 np0005466030 nova_compute[230518]: 2025-10-02 12:50:54.708 2 DEBUG nova.network.neutron [-] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:50:54 np0005466030 nova_compute[230518]: 2025-10-02 12:50:54.761 2 INFO nova.compute.manager [-] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Took 1.19 seconds to deallocate network for instance.#033[00m
Oct  2 08:50:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:50:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:50:55 np0005466030 nova_compute[230518]: 2025-10-02 12:50:55.169 2 INFO nova.compute.manager [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Took 0.41 seconds to detach 1 volumes for instance.#033[00m
Oct  2 08:50:55 np0005466030 nova_compute[230518]: 2025-10-02 12:50:55.264 2 DEBUG oslo_concurrency.lockutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:50:55 np0005466030 nova_compute[230518]: 2025-10-02 12:50:55.265 2 DEBUG oslo_concurrency.lockutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:50:55 np0005466030 nova_compute[230518]: 2025-10-02 12:50:55.331 2 DEBUG nova.compute.manager [req-1bcd25cf-2e0d-4347-9ec9-1ef3ea2016b9 req-cdd838c3-8086-4d56-ac8a-b04c142a2482 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-vif-deleted-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:50:55 np0005466030 nova_compute[230518]: 2025-10-02 12:50:55.375 2 DEBUG oslo_concurrency.processutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:50:55 np0005466030 nova_compute[230518]: 2025-10-02 12:50:55.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:50:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:50:55 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2193676697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:50:55 np0005466030 nova_compute[230518]: 2025-10-02 12:50:55.845 2 DEBUG oslo_concurrency.processutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:50:55 np0005466030 nova_compute[230518]: 2025-10-02 12:50:55.852 2 DEBUG nova.compute.provider_tree [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:50:55 np0005466030 nova_compute[230518]: 2025-10-02 12:50:55.869 2 DEBUG nova.scheduler.client.report [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:50:55 np0005466030 nova_compute[230518]: 2025-10-02 12:50:55.896 2 DEBUG oslo_concurrency.lockutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:55 np0005466030 nova_compute[230518]: 2025-10-02 12:50:55.929 2 INFO nova.scheduler.client.report [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Deleted allocations for instance 3b348c58-f179-41db-bd79-1fdea0ade389#033[00m
Oct  2 08:50:56 np0005466030 nova_compute[230518]: 2025-10-02 12:50:56.007 2 DEBUG oslo_concurrency.lockutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:50:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:56.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:56.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:58 np0005466030 nova_compute[230518]: 2025-10-02 12:50:58.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:50:58 np0005466030 nova_compute[230518]: 2025-10-02 12:50:58.122 2 INFO nova.compute.manager [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Rebuilding instance#033[00m
Oct  2 08:50:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:50:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:58.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:50:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:50:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:58.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:50:58 np0005466030 nova_compute[230518]: 2025-10-02 12:50:58.407 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:50:58 np0005466030 nova_compute[230518]: 2025-10-02 12:50:58.420 2 DEBUG nova.compute.manager [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:50:58 np0005466030 nova_compute[230518]: 2025-10-02 12:50:58.468 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'pci_requests' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:50:58 np0005466030 nova_compute[230518]: 2025-10-02 12:50:58.493 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'pci_devices' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:50:58 np0005466030 nova_compute[230518]: 2025-10-02 12:50:58.506 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'resources' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:50:58 np0005466030 nova_compute[230518]: 2025-10-02 12:50:58.517 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'migration_context' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:50:58 np0005466030 nova_compute[230518]: 2025-10-02 12:50:58.530 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  2 08:50:58 np0005466030 nova_compute[230518]: 2025-10-02 12:50:58.533 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:50:58 np0005466030 nova_compute[230518]: 2025-10-02 12:50:58.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:51:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:00.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:51:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:00.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:00 np0005466030 nova_compute[230518]: 2025-10-02 12:51:00.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:01 np0005466030 nova_compute[230518]: 2025-10-02 12:51:01.550 2 INFO nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 08:51:01 np0005466030 podman[283452]: 2025-10-02 12:51:01.830844576 +0000 UTC m=+0.072741293 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Oct  2 08:51:01 np0005466030 podman[283451]: 2025-10-02 12:51:01.840259571 +0000 UTC m=+0.081718574 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct  2 08:51:02 np0005466030 kernel: tap6f16f975-11 (unregistering): left promiscuous mode
Oct  2 08:51:02 np0005466030 NetworkManager[44960]: <info>  [1759409462.1925] device (tap6f16f975-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:51:02 np0005466030 ovn_controller[129257]: 2025-10-02T12:51:02Z|00555|binding|INFO|Releasing lport 6f16f975-1155-4931-9798-72b46e8ca37f from this chassis (sb_readonly=0)
Oct  2 08:51:02 np0005466030 ovn_controller[129257]: 2025-10-02T12:51:02Z|00556|binding|INFO|Setting lport 6f16f975-1155-4931-9798-72b46e8ca37f down in Southbound
Oct  2 08:51:02 np0005466030 nova_compute[230518]: 2025-10-02 12:51:02.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:02 np0005466030 ovn_controller[129257]: 2025-10-02T12:51:02Z|00557|binding|INFO|Removing iface tap6f16f975-11 ovn-installed in OVS
Oct  2 08:51:02 np0005466030 nova_compute[230518]: 2025-10-02 12:51:02.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:02.217 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:0c:7f 10.100.0.13'], port_security=['fa:16:3e:9e:0c:7f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '33780b49-b5a1-4f3f-a6c5-a00011d53718', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ea35968-5cdb-414e-9226-6ba534628944', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1308a7eb298f49baaeaf3dc3a6acf592', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fda5d6a4-b319-45fc-a863-02113f198b5e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=526db40a-6e83-473d-bcf2-8fd6e9668069, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6f16f975-1155-4931-9798-72b46e8ca37f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:51:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:02.219 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6f16f975-1155-4931-9798-72b46e8ca37f in datapath 1ea35968-5cdb-414e-9226-6ba534628944 unbound from our chassis#033[00m
Oct  2 08:51:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:02.221 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ea35968-5cdb-414e-9226-6ba534628944, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:51:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:02.223 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b58363d8-c15c-4933-9c0d-7471de2f25c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:02.223 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 namespace which is not needed anymore#033[00m
Oct  2 08:51:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:02.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:02 np0005466030 nova_compute[230518]: 2025-10-02 12:51:02.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:02 np0005466030 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000088.scope: Deactivated successfully.
Oct  2 08:51:02 np0005466030 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000088.scope: Consumed 14.900s CPU time.
Oct  2 08:51:02 np0005466030 systemd-machined[188247]: Machine qemu-65-instance-00000088 terminated.
Oct  2 08:51:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:51:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:02.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:51:02 np0005466030 kernel: tap6f16f975-11: entered promiscuous mode
Oct  2 08:51:02 np0005466030 kernel: tap6f16f975-11 (unregistering): left promiscuous mode
Oct  2 08:51:02 np0005466030 nova_compute[230518]: 2025-10-02 12:51:02.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:02 np0005466030 nova_compute[230518]: 2025-10-02 12:51:02.386 2 INFO nova.virt.libvirt.driver [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance destroyed successfully.#033[00m
Oct  2 08:51:02 np0005466030 nova_compute[230518]: 2025-10-02 12:51:02.392 2 INFO nova.virt.libvirt.driver [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance destroyed successfully.#033[00m
Oct  2 08:51:02 np0005466030 nova_compute[230518]: 2025-10-02 12:51:02.393 2 DEBUG nova.virt.libvirt.vif [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:50:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1954698310',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1058292720',id=136,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEq6SX0G1P4eXV7KyXSZ/35WTMB3jbVB1SupKDvjpwDO6estYqWrZvLKSnDQx+vS99wAQs9lzaTS/c5UGzVwCp+w6SXjcPi0171w0SmxtpKZLlMM30YnMg14Y62hnRsW1w==',key_name='tempest-keypair-290079418',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:50:20Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1308a7eb298f49baaeaf3dc3a6acf592',ramdisk_id='',reservation_id='r-vezvikos',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-365577023',owner_user_name='tempest-ServerActionsV293TestJSON-365577023-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:50:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='af2648eefb594bc49309cccf408f7ae1',uuid=33780b49-b5a1-4f3f-a6c5-a00011d53718,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:51:02 np0005466030 nova_compute[230518]: 2025-10-02 12:51:02.394 2 DEBUG nova.network.os_vif_util [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converting VIF {"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:51:02 np0005466030 nova_compute[230518]: 2025-10-02 12:51:02.394 2 DEBUG nova.network.os_vif_util [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:51:02 np0005466030 nova_compute[230518]: 2025-10-02 12:51:02.395 2 DEBUG os_vif [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:51:02 np0005466030 nova_compute[230518]: 2025-10-02 12:51:02.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:02 np0005466030 nova_compute[230518]: 2025-10-02 12:51:02.397 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f16f975-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:02 np0005466030 nova_compute[230518]: 2025-10-02 12:51:02.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:02 np0005466030 nova_compute[230518]: 2025-10-02 12:51:02.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:02 np0005466030 nova_compute[230518]: 2025-10-02 12:51:02.401 2 INFO os_vif [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11')#033[00m
Oct  2 08:51:02 np0005466030 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[283104]: [NOTICE]   (283110) : haproxy version is 2.8.14-c23fe91
Oct  2 08:51:02 np0005466030 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[283104]: [NOTICE]   (283110) : path to executable is /usr/sbin/haproxy
Oct  2 08:51:02 np0005466030 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[283104]: [WARNING]  (283110) : Exiting Master process...
Oct  2 08:51:02 np0005466030 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[283104]: [ALERT]    (283110) : Current worker (283112) exited with code 143 (Terminated)
Oct  2 08:51:02 np0005466030 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[283104]: [WARNING]  (283110) : All workers exited. Exiting... (0)
Oct  2 08:51:02 np0005466030 systemd[1]: libpod-739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc.scope: Deactivated successfully.
Oct  2 08:51:02 np0005466030 podman[283514]: 2025-10-02 12:51:02.509634471 +0000 UTC m=+0.190506447 container died 739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:51:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:03 np0005466030 nova_compute[230518]: 2025-10-02 12:51:03.238 2 DEBUG nova.compute.manager [req-602c9100-37a4-40f9-a780-713e0e2d382e req-25e879e9-1af9-497b-b59a-2634fcc678ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-unplugged-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:51:03 np0005466030 nova_compute[230518]: 2025-10-02 12:51:03.239 2 DEBUG oslo_concurrency.lockutils [req-602c9100-37a4-40f9-a780-713e0e2d382e req-25e879e9-1af9-497b-b59a-2634fcc678ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:03 np0005466030 nova_compute[230518]: 2025-10-02 12:51:03.239 2 DEBUG oslo_concurrency.lockutils [req-602c9100-37a4-40f9-a780-713e0e2d382e req-25e879e9-1af9-497b-b59a-2634fcc678ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:03 np0005466030 nova_compute[230518]: 2025-10-02 12:51:03.239 2 DEBUG oslo_concurrency.lockutils [req-602c9100-37a4-40f9-a780-713e0e2d382e req-25e879e9-1af9-497b-b59a-2634fcc678ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:03 np0005466030 nova_compute[230518]: 2025-10-02 12:51:03.240 2 DEBUG nova.compute.manager [req-602c9100-37a4-40f9-a780-713e0e2d382e req-25e879e9-1af9-497b-b59a-2634fcc678ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] No waiting events found dispatching network-vif-unplugged-6f16f975-1155-4931-9798-72b46e8ca37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:51:03 np0005466030 nova_compute[230518]: 2025-10-02 12:51:03.240 2 WARNING nova.compute.manager [req-602c9100-37a4-40f9-a780-713e0e2d382e req-25e879e9-1af9-497b-b59a-2634fcc678ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received unexpected event network-vif-unplugged-6f16f975-1155-4931-9798-72b46e8ca37f for instance with vm_state active and task_state rebuilding.#033[00m
Oct  2 08:51:03 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc-userdata-shm.mount: Deactivated successfully.
Oct  2 08:51:03 np0005466030 systemd[1]: var-lib-containers-storage-overlay-d2275299b59a6f31b3f5c4c1bc1ae63291b4b9fabe72845925a94cee91153691-merged.mount: Deactivated successfully.
Oct  2 08:51:03 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:51:03 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:51:03 np0005466030 nova_compute[230518]: 2025-10-02 12:51:03.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:03 np0005466030 podman[283514]: 2025-10-02 12:51:03.973947231 +0000 UTC m=+1.654819207 container cleanup 739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:51:03 np0005466030 systemd[1]: libpod-conmon-739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc.scope: Deactivated successfully.
Oct  2 08:51:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:04.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:04.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:04 np0005466030 podman[283622]: 2025-10-02 12:51:04.308743264 +0000 UTC m=+0.309169711 container remove 739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 08:51:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:04.317 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[389b942f-71bf-44f7-b3f0-025745638447]: (4, ('Thu Oct  2 12:51:02 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 (739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc)\n739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc\nThu Oct  2 12:51:03 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 (739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc)\n739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:04.320 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4c8213be-6c43-4527-a1fd-c133622c0d63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:04.321 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ea35968-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:04 np0005466030 nova_compute[230518]: 2025-10-02 12:51:04.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:04 np0005466030 kernel: tap1ea35968-50: left promiscuous mode
Oct  2 08:51:04 np0005466030 nova_compute[230518]: 2025-10-02 12:51:04.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:04.346 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a08e1403-677e-4464-8f02-96e1b97f64ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:04.384 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6e3fd3ee-ab78-4a35-9421-eb0aca46fbd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:04.386 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[48939f12-ee73-4b81-a17e-b86a251bf064]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:04.401 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7d5bd1b0-ed29-4089-8fd4-2e817e269ac6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728195, 'reachable_time': 36493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283639, 'error': None, 'target': 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:04 np0005466030 systemd[1]: run-netns-ovnmeta\x2d1ea35968\x2d5cdb\x2d414e\x2d9226\x2d6ba534628944.mount: Deactivated successfully.
Oct  2 08:51:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:04.407 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:51:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:04.408 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[430cc590-e4c6-4e9f-b5ee-1c3dc5948e7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:51:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1287988797' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:51:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:51:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1287988797' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:51:05 np0005466030 nova_compute[230518]: 2025-10-02 12:51:05.302 2 DEBUG nova.compute.manager [req-ad15ad54-0ca8-4187-985e-deacdedc5652 req-a2a288ea-4bdb-4d65-b761-fb8b6178313b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:51:05 np0005466030 nova_compute[230518]: 2025-10-02 12:51:05.303 2 DEBUG oslo_concurrency.lockutils [req-ad15ad54-0ca8-4187-985e-deacdedc5652 req-a2a288ea-4bdb-4d65-b761-fb8b6178313b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:05 np0005466030 nova_compute[230518]: 2025-10-02 12:51:05.303 2 DEBUG oslo_concurrency.lockutils [req-ad15ad54-0ca8-4187-985e-deacdedc5652 req-a2a288ea-4bdb-4d65-b761-fb8b6178313b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:05 np0005466030 nova_compute[230518]: 2025-10-02 12:51:05.303 2 DEBUG oslo_concurrency.lockutils [req-ad15ad54-0ca8-4187-985e-deacdedc5652 req-a2a288ea-4bdb-4d65-b761-fb8b6178313b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:05 np0005466030 nova_compute[230518]: 2025-10-02 12:51:05.304 2 DEBUG nova.compute.manager [req-ad15ad54-0ca8-4187-985e-deacdedc5652 req-a2a288ea-4bdb-4d65-b761-fb8b6178313b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] No waiting events found dispatching network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:51:05 np0005466030 nova_compute[230518]: 2025-10-02 12:51:05.304 2 WARNING nova.compute.manager [req-ad15ad54-0ca8-4187-985e-deacdedc5652 req-a2a288ea-4bdb-4d65-b761-fb8b6178313b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received unexpected event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f for instance with vm_state active and task_state rebuilding.#033[00m
Oct  2 08:51:05 np0005466030 nova_compute[230518]: 2025-10-02 12:51:05.698 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409450.6974986, 3b348c58-f179-41db-bd79-1fdea0ade389 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:51:05 np0005466030 nova_compute[230518]: 2025-10-02 12:51:05.699 2 INFO nova.compute.manager [-] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:51:05 np0005466030 nova_compute[230518]: 2025-10-02 12:51:05.725 2 DEBUG nova.compute.manager [None req-e8616022-c2b1-4a92-9546-fe4eddde2cfc - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.043 2 DEBUG oslo_concurrency.lockutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.043 2 DEBUG oslo_concurrency.lockutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.043 2 DEBUG oslo_concurrency.lockutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.044 2 DEBUG oslo_concurrency.lockutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.044 2 DEBUG oslo_concurrency.lockutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.045 2 INFO nova.compute.manager [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Terminating instance#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.046 2 DEBUG nova.compute.manager [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.066 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.092 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.095 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.095 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:06.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:06.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:06 np0005466030 kernel: tapbf58273a-e5 (unregistering): left promiscuous mode
Oct  2 08:51:06 np0005466030 NetworkManager[44960]: <info>  [1759409466.4339] device (tapbf58273a-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:51:06 np0005466030 ovn_controller[129257]: 2025-10-02T12:51:06Z|00558|binding|INFO|Releasing lport bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 from this chassis (sb_readonly=0)
Oct  2 08:51:06 np0005466030 ovn_controller[129257]: 2025-10-02T12:51:06Z|00559|binding|INFO|Setting lport bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 down in Southbound
Oct  2 08:51:06 np0005466030 ovn_controller[129257]: 2025-10-02T12:51:06Z|00560|binding|INFO|Removing iface tapbf58273a-e5 ovn-installed in OVS
Oct  2 08:51:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.463 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:04:35 10.100.0.9'], port_security=['fa:16:3e:41:04:35 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4b2aefbb-92cb-4a24-9ad2-884a12fa514c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd3e0300f3cf5493d8a9e62e2c4a95767', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b476c8f5-f8e9-416f-ac80-6e4f069ebf34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19951b97-567d-403e-9a99-3dd9660c4a7b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bf58273a-e5f6-4e36-bb1e-7ca0c2462d54) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:51:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.464 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 in datapath aa3b4df3-6044-4a53-8039-c9a5c05725aa unbound from our chassis#033[00m
Oct  2 08:51:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.465 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa3b4df3-6044-4a53-8039-c9a5c05725aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.467 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[827d1f0e-d1f9-43b6-90b7-cf13055015ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.469 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa namespace which is not needed anymore#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:06 np0005466030 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000082.scope: Deactivated successfully.
Oct  2 08:51:06 np0005466030 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000082.scope: Consumed 21.971s CPU time.
Oct  2 08:51:06 np0005466030 systemd-machined[188247]: Machine qemu-63-instance-00000082 terminated.
Oct  2 08:51:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:51:06 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4101096271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.559 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:06 np0005466030 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[281591]: [NOTICE]   (281595) : haproxy version is 2.8.14-c23fe91
Oct  2 08:51:06 np0005466030 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[281591]: [NOTICE]   (281595) : path to executable is /usr/sbin/haproxy
Oct  2 08:51:06 np0005466030 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[281591]: [WARNING]  (281595) : Exiting Master process...
Oct  2 08:51:06 np0005466030 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[281591]: [ALERT]    (281595) : Current worker (281597) exited with code 143 (Terminated)
Oct  2 08:51:06 np0005466030 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[281591]: [WARNING]  (281595) : All workers exited. Exiting... (0)
Oct  2 08:51:06 np0005466030 systemd[1]: libpod-b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9.scope: Deactivated successfully.
Oct  2 08:51:06 np0005466030 conmon[281591]: conmon b28b990e2fcd86cc0c6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9.scope/container/memory.events
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.683 2 INFO nova.virt.libvirt.driver [-] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Instance destroyed successfully.#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.683 2 DEBUG nova.objects.instance [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'resources' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:51:06 np0005466030 podman[283683]: 2025-10-02 12:51:06.689697432 +0000 UTC m=+0.116799356 container died b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.701 2 DEBUG nova.virt.libvirt.vif [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:47:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1530906949',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1530906949',id=130,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8AO0n4F9qQHfktAb1KqUpFZGDIBw8Q+DMA6Gtgwbe4fJSHZtT9yxONp57Pu+/JlMfK0hzt7rHvQAXjHsqixRJ8kNgVzAz0UxxllE90LKBM9NxuJLShf+JD7SBBSy6srw==',key_name='tempest-TestInstancesWithCinderVolumes-1494763419',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:47:58Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d3e0300f3cf5493d8a9e62e2c4a95767',ramdisk_id='',reservation_id='r-9rd4q9z5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-621751307',owner_user_name='tempest-TestInstancesWithCinderVolumes-621751307-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:47:58Z,user_data=None,user_id='e3cd62a3208649c183d3fc2edc1c0f18',uuid=4b2aefbb-92cb-4a24-9ad2-884a12fa514c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.702 2 DEBUG nova.network.os_vif_util [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converting VIF {"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.702 2 DEBUG nova.network.os_vif_util [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:41:04:35,bridge_name='br-int',has_traffic_filtering=True,id=bf58273a-e5f6-4e36-bb1e-7ca0c2462d54,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf58273a-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.703 2 DEBUG os_vif [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:04:35,bridge_name='br-int',has_traffic_filtering=True,id=bf58273a-e5f6-4e36-bb1e-7ca0c2462d54,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf58273a-e5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf58273a-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.711 2 INFO os_vif [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:04:35,bridge_name='br-int',has_traffic_filtering=True,id=bf58273a-e5f6-4e36-bb1e-7ca0c2462d54,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf58273a-e5')#033[00m
Oct  2 08:51:06 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9-userdata-shm.mount: Deactivated successfully.
Oct  2 08:51:06 np0005466030 systemd[1]: var-lib-containers-storage-overlay-3c561c2b937b63684f4fa80d6a77b8b706c94b8f7ce5c4ada629e45417aa2f7f-merged.mount: Deactivated successfully.
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.756 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.756 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.760 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.760 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:51:06 np0005466030 podman[283683]: 2025-10-02 12:51:06.764723126 +0000 UTC m=+0.191825040 container cleanup b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:51:06 np0005466030 systemd[1]: libpod-conmon-b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9.scope: Deactivated successfully.
Oct  2 08:51:06 np0005466030 podman[283744]: 2025-10-02 12:51:06.87039379 +0000 UTC m=+0.083570183 container remove b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:51:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.877 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[794d1be2-2af2-407d-b12d-2eafbc50a6ea]: (4, ('Thu Oct  2 12:51:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa (b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9)\nb28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9\nThu Oct  2 12:51:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa (b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9)\nb28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.879 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7b368b73-bdac-4564-beb0-97a2bfe3da25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.880 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa3b4df3-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:06 np0005466030 kernel: tapaa3b4df3-60: left promiscuous mode
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.897 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8f52420c-9c36-4ed2-a03c-3eb0006905c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.929 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a19a2fca-9bb0-409f-98ac-b9756a2a7b6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.930 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:51:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.931 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[db28439d-7896-4760-832b-181974bd1516]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.931 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4375MB free_disk=20.880725860595703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.931 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:06 np0005466030 nova_compute[230518]: 2025-10-02 12:51:06.932 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.946 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[581c39a6-6c62-460b-98c2-fc6ceda8fe6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714075, 'reachable_time': 25153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283760, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.948 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:51:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.948 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[c38dbc3d-2e3d-4cc5-8be1-0633407ecf14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:06 np0005466030 systemd[1]: run-netns-ovnmeta\x2daa3b4df3\x2d6044\x2d4a53\x2d8039\x2dc9a5c05725aa.mount: Deactivated successfully.
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.102 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.102 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 33780b49-b5a1-4f3f-a6c5-a00011d53718 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.102 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.102 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.160 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.185 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.186 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.206 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.229 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.271 2 INFO nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Deleting instance files /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718_del#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.271 2 INFO nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Deletion of /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718_del complete#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.288 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.554 2 WARNING nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] During detach_volume, instance disappeared.: nova.exception.InstanceNotFound: Instance 33780b49-b5a1-4f3f-a6c5-a00011d53718 could not be found.#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.571 2 DEBUG nova.compute.manager [req-f58648ad-4a49-48ec-976f-d58b44fdc70b req-ac65987d-3e95-41d5-adff-634728b6ce45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-vif-unplugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.572 2 DEBUG oslo_concurrency.lockutils [req-f58648ad-4a49-48ec-976f-d58b44fdc70b req-ac65987d-3e95-41d5-adff-634728b6ce45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.572 2 DEBUG oslo_concurrency.lockutils [req-f58648ad-4a49-48ec-976f-d58b44fdc70b req-ac65987d-3e95-41d5-adff-634728b6ce45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.573 2 DEBUG oslo_concurrency.lockutils [req-f58648ad-4a49-48ec-976f-d58b44fdc70b req-ac65987d-3e95-41d5-adff-634728b6ce45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.573 2 DEBUG nova.compute.manager [req-f58648ad-4a49-48ec-976f-d58b44fdc70b req-ac65987d-3e95-41d5-adff-634728b6ce45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] No waiting events found dispatching network-vif-unplugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.573 2 DEBUG nova.compute.manager [req-f58648ad-4a49-48ec-976f-d58b44fdc70b req-ac65987d-3e95-41d5-adff-634728b6ce45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-vif-unplugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:51:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:51:07 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3298766319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.700 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.705 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.726 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.769 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:51:07 np0005466030 nova_compute[230518]: 2025-10-02 12:51:07.769 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:08.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:08 np0005466030 nova_compute[230518]: 2025-10-02 12:51:08.285 2 DEBUG nova.compute.manager [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Preparing to wait for external event volume-reimaged-a7bdd212-0d34-40b8-9af9-06388d215028 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:51:08 np0005466030 nova_compute[230518]: 2025-10-02 12:51:08.286 2 DEBUG oslo_concurrency.lockutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:08 np0005466030 nova_compute[230518]: 2025-10-02 12:51:08.286 2 DEBUG oslo_concurrency.lockutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:08 np0005466030 nova_compute[230518]: 2025-10-02 12:51:08.286 2 DEBUG oslo_concurrency.lockutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:08.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:08 np0005466030 nova_compute[230518]: 2025-10-02 12:51:08.750 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:08 np0005466030 nova_compute[230518]: 2025-10-02 12:51:08.750 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:08 np0005466030 nova_compute[230518]: 2025-10-02 12:51:08.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:08 np0005466030 nova_compute[230518]: 2025-10-02 12:51:08.984 2 INFO nova.virt.libvirt.driver [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Deleting instance files /var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c_del#033[00m
Oct  2 08:51:08 np0005466030 nova_compute[230518]: 2025-10-02 12:51:08.984 2 INFO nova.virt.libvirt.driver [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Deletion of /var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c_del complete#033[00m
Oct  2 08:51:09 np0005466030 nova_compute[230518]: 2025-10-02 12:51:09.039 2 INFO nova.compute.manager [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Took 2.99 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:51:09 np0005466030 nova_compute[230518]: 2025-10-02 12:51:09.039 2 DEBUG oslo.service.loopingcall [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:51:09 np0005466030 nova_compute[230518]: 2025-10-02 12:51:09.039 2 DEBUG nova.compute.manager [-] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:51:09 np0005466030 nova_compute[230518]: 2025-10-02 12:51:09.040 2 DEBUG nova.network.neutron [-] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:51:09 np0005466030 nova_compute[230518]: 2025-10-02 12:51:09.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:09 np0005466030 nova_compute[230518]: 2025-10-02 12:51:09.690 2 DEBUG nova.compute.manager [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-vif-plugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:51:09 np0005466030 nova_compute[230518]: 2025-10-02 12:51:09.691 2 DEBUG oslo_concurrency.lockutils [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:09 np0005466030 nova_compute[230518]: 2025-10-02 12:51:09.691 2 DEBUG oslo_concurrency.lockutils [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:09 np0005466030 nova_compute[230518]: 2025-10-02 12:51:09.691 2 DEBUG oslo_concurrency.lockutils [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:09 np0005466030 nova_compute[230518]: 2025-10-02 12:51:09.692 2 DEBUG nova.compute.manager [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] No waiting events found dispatching network-vif-plugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:51:09 np0005466030 nova_compute[230518]: 2025-10-02 12:51:09.692 2 WARNING nova.compute.manager [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received unexpected event network-vif-plugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:51:09 np0005466030 nova_compute[230518]: 2025-10-02 12:51:09.991 2 DEBUG nova.network.neutron [-] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:51:10 np0005466030 nova_compute[230518]: 2025-10-02 12:51:10.021 2 INFO nova.compute.manager [-] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Took 0.98 seconds to deallocate network for instance.#033[00m
Oct  2 08:51:10 np0005466030 nova_compute[230518]: 2025-10-02 12:51:10.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:10 np0005466030 nova_compute[230518]: 2025-10-02 12:51:10.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:51:10 np0005466030 nova_compute[230518]: 2025-10-02 12:51:10.083 2 DEBUG nova.compute.manager [req-c0f2c6e7-55bf-4856-8cde-5ffa8225d4d8 req-68d8fb20-de85-4a92-ac15-e99bc15eefb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-vif-deleted-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:51:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:51:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:10.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:51:10 np0005466030 nova_compute[230518]: 2025-10-02 12:51:10.244 2 INFO nova.compute.manager [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Took 0.22 seconds to detach 1 volumes for instance.#033[00m
Oct  2 08:51:10 np0005466030 nova_compute[230518]: 2025-10-02 12:51:10.293 2 DEBUG oslo_concurrency.lockutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:10 np0005466030 nova_compute[230518]: 2025-10-02 12:51:10.293 2 DEBUG oslo_concurrency.lockutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:10.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:10 np0005466030 nova_compute[230518]: 2025-10-02 12:51:10.343 2 DEBUG oslo_concurrency.processutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:51:10 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3025908862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:51:10 np0005466030 nova_compute[230518]: 2025-10-02 12:51:10.808 2 DEBUG oslo_concurrency.processutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:10 np0005466030 nova_compute[230518]: 2025-10-02 12:51:10.817 2 DEBUG nova.compute.provider_tree [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:51:10 np0005466030 nova_compute[230518]: 2025-10-02 12:51:10.833 2 DEBUG nova.scheduler.client.report [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:51:10 np0005466030 nova_compute[230518]: 2025-10-02 12:51:10.856 2 DEBUG oslo_concurrency.lockutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:10 np0005466030 nova_compute[230518]: 2025-10-02 12:51:10.887 2 INFO nova.scheduler.client.report [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Deleted allocations for instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c#033[00m
Oct  2 08:51:10 np0005466030 nova_compute[230518]: 2025-10-02 12:51:10.952 2 DEBUG oslo_concurrency.lockutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:11 np0005466030 nova_compute[230518]: 2025-10-02 12:51:11.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:11 np0005466030 nova_compute[230518]: 2025-10-02 12:51:11.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:12 np0005466030 nova_compute[230518]: 2025-10-02 12:51:12.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:12.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:12.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:51:13 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4031260926' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:51:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:51:13 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4031260926' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:51:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:13 np0005466030 nova_compute[230518]: 2025-10-02 12:51:13.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:14 np0005466030 nova_compute[230518]: 2025-10-02 12:51:14.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:14.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:14.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.082 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.083 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.083 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:51:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:51:15 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2592219865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:51:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:51:15 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2592219865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.698 2 DEBUG nova.compute.manager [req-f5c461f1-d23a-439b-9d40-bab509e5253d req-7d1d6a1d-c0d5-4c63-a853-389cb9486f13 7720aaf0b6dc41d38f1e03222fa6a93b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event volume-reimaged-a7bdd212-0d34-40b8-9af9-06388d215028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.698 2 DEBUG oslo_concurrency.lockutils [req-f5c461f1-d23a-439b-9d40-bab509e5253d req-7d1d6a1d-c0d5-4c63-a853-389cb9486f13 7720aaf0b6dc41d38f1e03222fa6a93b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.699 2 DEBUG oslo_concurrency.lockutils [req-f5c461f1-d23a-439b-9d40-bab509e5253d req-7d1d6a1d-c0d5-4c63-a853-389cb9486f13 7720aaf0b6dc41d38f1e03222fa6a93b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.699 2 DEBUG oslo_concurrency.lockutils [req-f5c461f1-d23a-439b-9d40-bab509e5253d req-7d1d6a1d-c0d5-4c63-a853-389cb9486f13 7720aaf0b6dc41d38f1e03222fa6a93b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.699 2 DEBUG nova.compute.manager [req-f5c461f1-d23a-439b-9d40-bab509e5253d req-7d1d6a1d-c0d5-4c63-a853-389cb9486f13 7720aaf0b6dc41d38f1e03222fa6a93b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Processing event volume-reimaged-a7bdd212-0d34-40b8-9af9-06388d215028 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.699 2 DEBUG nova.compute.manager [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance event wait completed in 6 seconds for volume-reimaged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.791 2 INFO nova.virt.block_device [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Booting with volume a7bdd212-0d34-40b8-9af9-06388d215028 at /dev/vda#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.956 2 DEBUG os_brick.utils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.957 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.968 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.969 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[0a9bae59-a7a7-429b-831c-90bb7137ad82]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.970 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.976 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.976 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ddfecc-d99f-46fc-80a1-7d9c16c5d1d3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.978 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.985 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.985 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[1da3156b-c4b3-438e-8446-92cd58fc9036]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.986 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[77a078ff-d752-4436-a145-0a93d57bdb57]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:15 np0005466030 nova_compute[230518]: 2025-10-02 12:51:15.987 2 DEBUG oslo_concurrency.processutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:16 np0005466030 nova_compute[230518]: 2025-10-02 12:51:16.016 2 DEBUG oslo_concurrency.processutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:16 np0005466030 nova_compute[230518]: 2025-10-02 12:51:16.018 2 DEBUG os_brick.initiator.connectors.lightos [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 08:51:16 np0005466030 nova_compute[230518]: 2025-10-02 12:51:16.018 2 DEBUG os_brick.initiator.connectors.lightos [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 08:51:16 np0005466030 nova_compute[230518]: 2025-10-02 12:51:16.018 2 DEBUG os_brick.initiator.connectors.lightos [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 08:51:16 np0005466030 nova_compute[230518]: 2025-10-02 12:51:16.019 2 DEBUG os_brick.utils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 08:51:16 np0005466030 nova_compute[230518]: 2025-10-02 12:51:16.019 2 DEBUG nova.virt.block_device [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Updating existing volume attachment record: 1860ab7f-c635-4488-b2d3-849571b3b102 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 08:51:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:16.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:16.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:16 np0005466030 nova_compute[230518]: 2025-10-02 12:51:16.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.371 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.372 2 INFO nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Creating image(s)#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.372 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.372 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Ensure instance console log exists: /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.373 2 DEBUG oslo_concurrency.lockutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.373 2 DEBUG oslo_concurrency.lockutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.374 2 DEBUG oslo_concurrency.lockutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.376 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Start _get_guest_xml network_info=[{"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'delete_on_termination': True, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a7bdd212-0d34-40b8-9af9-06388d215028', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a7bdd212-0d34-40b8-9af9-06388d215028', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '33780b49-b5a1-4f3f-a6c5-a00011d53718', 'attached_at': '', 'detached_at': '', 'volume_id': 'a7bdd212-0d34-40b8-9af9-06388d215028', 'serial': 'a7bdd212-0d34-40b8-9af9-06388d215028'}, 'boot_index': 0, 'attachment_id': '1860ab7f-c635-4488-b2d3-849571b3b102', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.381 2 WARNING nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.385 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409462.3843558, 33780b49-b5a1-4f3f-a6c5-a00011d53718 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.386 2 INFO nova.compute.manager [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.390 2 DEBUG nova.virt.libvirt.host [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.391 2 DEBUG nova.virt.libvirt.host [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.393 2 DEBUG nova.virt.libvirt.host [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.394 2 DEBUG nova.virt.libvirt.host [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.394 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.395 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.395 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.396 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.396 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.396 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.396 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.396 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.397 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.397 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.397 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.397 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.397 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:51:17 np0005466030 nova_compute[230518]: 2025-10-02 12:51:17.400 2 DEBUG nova.compute.manager [None req-c0d72535-549c-48e6-977f-e156f998b85c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:51:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:18.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:18 np0005466030 nova_compute[230518]: 2025-10-02 12:51:18.256 2 DEBUG nova.storage.rbd_utils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] rbd image 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:51:18 np0005466030 nova_compute[230518]: 2025-10-02 12:51:18.263 2 DEBUG oslo_concurrency.processutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:51:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:18.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:51:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:51:18 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4223872734' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:51:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:51:18 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4223872734' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:51:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:51:18 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/100893316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:51:18 np0005466030 nova_compute[230518]: 2025-10-02 12:51:18.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.244 2 DEBUG oslo_concurrency.processutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.981s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.270 2 DEBUG nova.virt.libvirt.vif [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:50:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1954698310',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1058292720',id=136,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEq6SX0G1P4eXV7KyXSZ/35WTMB3jbVB1SupKDvjpwDO6estYqWrZvLKSnDQx+vS99wAQs9lzaTS/c5UGzVwCp+w6SXjcPi0171w0SmxtpKZLlMM30YnMg14Y62hnRsW1w==',key_name='tempest-keypair-290079418',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:50:20Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1308a7eb298f49baaeaf3dc3a6acf592',ramdisk_id='',reservation_id='r-vezvikos',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q
35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-365577023',owner_user_name='tempest-ServerActionsV293TestJSON-365577023-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:51:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='af2648eefb594bc49309cccf408f7ae1',uuid=33780b49-b5a1-4f3f-a6c5-a00011d53718,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.271 2 DEBUG nova.network.os_vif_util [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converting VIF {"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.272 2 DEBUG nova.network.os_vif_util [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.274 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:51:19 np0005466030 nova_compute[230518]:  <uuid>33780b49-b5a1-4f3f-a6c5-a00011d53718</uuid>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:  <name>instance-00000088</name>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerActionsV293TestJSON-server-1954698310</nova:name>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:51:17</nova:creationTime>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:51:19 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:        <nova:user uuid="af2648eefb594bc49309cccf408f7ae1">tempest-ServerActionsV293TestJSON-365577023-project-member</nova:user>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:        <nova:project uuid="1308a7eb298f49baaeaf3dc3a6acf592">tempest-ServerActionsV293TestJSON-365577023</nova:project>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:        <nova:port uuid="6f16f975-1155-4931-9798-72b46e8ca37f">
Oct  2 08:51:19 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <entry name="serial">33780b49-b5a1-4f3f-a6c5-a00011d53718</entry>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <entry name="uuid">33780b49-b5a1-4f3f-a6c5-a00011d53718</entry>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config">
Oct  2 08:51:19 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:51:19 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="volumes/volume-a7bdd212-0d34-40b8-9af9-06388d215028">
Oct  2 08:51:19 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:51:19 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <serial>a7bdd212-0d34-40b8-9af9-06388d215028</serial>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:9e:0c:7f"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <target dev="tap6f16f975-11"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/console.log" append="off"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:51:19 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:51:19 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:51:19 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:51:19 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.275 2 DEBUG nova.virt.libvirt.vif [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:50:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1954698310',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1058292720',id=136,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEq6SX0G1P4eXV7KyXSZ/35WTMB3jbVB1SupKDvjpwDO6estYqWrZvLKSnDQx+vS99wAQs9lzaTS/c5UGzVwCp+w6SXjcPi0171w0SmxtpKZLlMM30YnMg14Y62hnRsW1w==',key_name='tempest-keypair-290079418',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:50:20Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1308a7eb298f49baaeaf3dc3a6acf592',ramdisk_id='',reservation_id='r-vezvikos',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q
35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-365577023',owner_user_name='tempest-ServerActionsV293TestJSON-365577023-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:51:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='af2648eefb594bc49309cccf408f7ae1',uuid=33780b49-b5a1-4f3f-a6c5-a00011d53718,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.276 2 DEBUG nova.network.os_vif_util [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converting VIF {"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.276 2 DEBUG nova.network.os_vif_util [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.277 2 DEBUG os_vif [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.278 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.278 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.281 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f16f975-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.281 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6f16f975-11, col_values=(('external_ids', {'iface-id': '6f16f975-1155-4931-9798-72b46e8ca37f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:0c:7f', 'vm-uuid': '33780b49-b5a1-4f3f-a6c5-a00011d53718'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:19 np0005466030 NetworkManager[44960]: <info>  [1759409479.2842] manager: (tap6f16f975-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/260)
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.289 2 INFO os_vif [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11')#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.423 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.424 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.424 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] No VIF found with MAC fa:16:3e:9e:0c:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.424 2 INFO nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Using config drive#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.449 2 DEBUG nova.storage.rbd_utils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] rbd image 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.474 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.503 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'keypairs' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.934 2 INFO nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Creating config drive at /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config#033[00m
Oct  2 08:51:19 np0005466030 nova_compute[230518]: 2025-10-02 12:51:19.939 2 DEBUG oslo_concurrency.processutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpacqx0puf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:20 np0005466030 nova_compute[230518]: 2025-10-02 12:51:20.076 2 DEBUG oslo_concurrency.processutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpacqx0puf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:20.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:20.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:20 np0005466030 nova_compute[230518]: 2025-10-02 12:51:20.501 2 DEBUG nova.storage.rbd_utils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] rbd image 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:51:20 np0005466030 nova_compute[230518]: 2025-10-02 12:51:20.507 2 DEBUG oslo_concurrency.processutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:51:21 np0005466030 nova_compute[230518]: 2025-10-02 12:51:21.681 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409466.6800296, 4b2aefbb-92cb-4a24-9ad2-884a12fa514c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:51:21 np0005466030 nova_compute[230518]: 2025-10-02 12:51:21.682 2 INFO nova.compute.manager [-] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:51:21 np0005466030 nova_compute[230518]: 2025-10-02 12:51:21.703 2 DEBUG nova.compute.manager [None req-8d7a16f7-6b84-44f1-8c8b-1db0e20de2ed - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:51:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:22.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:22 np0005466030 nova_compute[230518]: 2025-10-02 12:51:22.292 2 DEBUG oslo_concurrency.processutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.785s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:51:22 np0005466030 nova_compute[230518]: 2025-10-02 12:51:22.292 2 INFO nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Deleting local config drive /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config because it was imported into RBD.#033[00m
Oct  2 08:51:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:51:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:22.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:51:22 np0005466030 kernel: tap6f16f975-11: entered promiscuous mode
Oct  2 08:51:22 np0005466030 NetworkManager[44960]: <info>  [1759409482.3544] manager: (tap6f16f975-11): new Tun device (/org/freedesktop/NetworkManager/Devices/261)
Oct  2 08:51:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:51:22Z|00561|binding|INFO|Claiming lport 6f16f975-1155-4931-9798-72b46e8ca37f for this chassis.
Oct  2 08:51:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:51:22Z|00562|binding|INFO|6f16f975-1155-4931-9798-72b46e8ca37f: Claiming fa:16:3e:9e:0c:7f 10.100.0.13
Oct  2 08:51:22 np0005466030 nova_compute[230518]: 2025-10-02 12:51:22.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.362 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:0c:7f 10.100.0.13'], port_security=['fa:16:3e:9e:0c:7f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '33780b49-b5a1-4f3f-a6c5-a00011d53718', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ea35968-5cdb-414e-9226-6ba534628944', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1308a7eb298f49baaeaf3dc3a6acf592', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'fda5d6a4-b319-45fc-a863-02113f198b5e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=526db40a-6e83-473d-bcf2-8fd6e9668069, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6f16f975-1155-4931-9798-72b46e8ca37f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.363 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6f16f975-1155-4931-9798-72b46e8ca37f in datapath 1ea35968-5cdb-414e-9226-6ba534628944 bound to our chassis#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.366 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ea35968-5cdb-414e-9226-6ba534628944#033[00m
Oct  2 08:51:22 np0005466030 systemd-udevd[283951]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.378 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[06f16751-7390-4d20-8b76-3f005e66396b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.379 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1ea35968-51 in ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.381 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1ea35968-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.381 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cc74ece2-e4a3-485d-93ff-b021c8af77fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.382 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9087b46c-15b7-42f5-9212-864548ddabe3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:22 np0005466030 nova_compute[230518]: 2025-10-02 12:51:22.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.396 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[d53c4566-eb0b-4110-b629-2898466f6beb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:51:22Z|00563|binding|INFO|Setting lport 6f16f975-1155-4931-9798-72b46e8ca37f ovn-installed in OVS
Oct  2 08:51:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:51:22Z|00564|binding|INFO|Setting lport 6f16f975-1155-4931-9798-72b46e8ca37f up in Southbound
Oct  2 08:51:22 np0005466030 NetworkManager[44960]: <info>  [1759409482.3996] device (tap6f16f975-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:51:22 np0005466030 NetworkManager[44960]: <info>  [1759409482.4002] device (tap6f16f975-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:51:22 np0005466030 nova_compute[230518]: 2025-10-02 12:51:22.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:22 np0005466030 systemd-machined[188247]: New machine qemu-66-instance-00000088.
Oct  2 08:51:22 np0005466030 nova_compute[230518]: 2025-10-02 12:51:22.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:22 np0005466030 systemd[1]: Started Virtual Machine qemu-66-instance-00000088.
Oct  2 08:51:22 np0005466030 podman[283918]: 2025-10-02 12:51:22.423293578 +0000 UTC m=+0.092939557 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.422 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[018a8215-20ce-49dc-9486-dbe7a5370553]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.457 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7f7a00cd-16d1-4467-944a-cfc3605220ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:22 np0005466030 podman[283914]: 2025-10-02 12:51:22.461043362 +0000 UTC m=+0.130065732 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.463 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b026bf46-90e0-49d0-967a-f72d87b36e31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:22 np0005466030 systemd-udevd[283967]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:51:22 np0005466030 NetworkManager[44960]: <info>  [1759409482.4660] manager: (tap1ea35968-50): new Veth device (/org/freedesktop/NetworkManager/Devices/262)
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.496 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[645194f1-2202-4ec4-957e-32a90e01f966]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.500 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[961fb1f4-e036-4b03-a5b1-88e2c00c571a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:22 np0005466030 NetworkManager[44960]: <info>  [1759409482.5233] device (tap1ea35968-50): carrier: link connected
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.531 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf91713-542c-427e-bf51-f36e4ee7cc57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.553 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f7360662-6958-4d1e-8855-89cc14c880cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ea35968-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:b5:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 173], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734608, 'reachable_time': 39730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284002, 'error': None, 'target': 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.572 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b6059587-108c-4049-899f-8eb36a1dd24e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe38:b578'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734608, 'tstamp': 734608}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284003, 'error': None, 'target': 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.591 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3f52689d-e62a-4480-bea6-1bffb4affa4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ea35968-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:b5:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 173], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734608, 'reachable_time': 39730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284004, 'error': None, 'target': 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.626 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[837ca5d2-e8ca-4c92-847d-02907cfdfedb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:22 np0005466030 nova_compute[230518]: 2025-10-02 12:51:22.662 2 DEBUG nova.compute.manager [req-09fab892-9022-4374-9e46-5ff03a81e17d req-7003f313-02de-448a-9b98-53675c02ba95 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:51:22 np0005466030 nova_compute[230518]: 2025-10-02 12:51:22.663 2 DEBUG oslo_concurrency.lockutils [req-09fab892-9022-4374-9e46-5ff03a81e17d req-7003f313-02de-448a-9b98-53675c02ba95 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:22 np0005466030 nova_compute[230518]: 2025-10-02 12:51:22.663 2 DEBUG oslo_concurrency.lockutils [req-09fab892-9022-4374-9e46-5ff03a81e17d req-7003f313-02de-448a-9b98-53675c02ba95 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:22 np0005466030 nova_compute[230518]: 2025-10-02 12:51:22.663 2 DEBUG oslo_concurrency.lockutils [req-09fab892-9022-4374-9e46-5ff03a81e17d req-7003f313-02de-448a-9b98-53675c02ba95 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:22 np0005466030 nova_compute[230518]: 2025-10-02 12:51:22.663 2 DEBUG nova.compute.manager [req-09fab892-9022-4374-9e46-5ff03a81e17d req-7003f313-02de-448a-9b98-53675c02ba95 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] No waiting events found dispatching network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:51:22 np0005466030 nova_compute[230518]: 2025-10-02 12:51:22.664 2 WARNING nova.compute.manager [req-09fab892-9022-4374-9e46-5ff03a81e17d req-7003f313-02de-448a-9b98-53675c02ba95 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received unexpected event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.686 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b8dcf584-ddd4-4e2b-a7c4-6de6f2dd713f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.687 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ea35968-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.687 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.688 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ea35968-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:22 np0005466030 nova_compute[230518]: 2025-10-02 12:51:22.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:22 np0005466030 NetworkManager[44960]: <info>  [1759409482.6917] manager: (tap1ea35968-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Oct  2 08:51:22 np0005466030 kernel: tap1ea35968-50: entered promiscuous mode
Oct  2 08:51:22 np0005466030 nova_compute[230518]: 2025-10-02 12:51:22.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.697 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ea35968-50, col_values=(('external_ids', {'iface-id': '656124c9-fbda-4e47-b94b-fbe1ed24070e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:51:22 np0005466030 nova_compute[230518]: 2025-10-02 12:51:22.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:51:22Z|00565|binding|INFO|Releasing lport 656124c9-fbda-4e47-b94b-fbe1ed24070e from this chassis (sb_readonly=0)
Oct  2 08:51:22 np0005466030 nova_compute[230518]: 2025-10-02 12:51:22.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.703 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1ea35968-5cdb-414e-9226-6ba534628944.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1ea35968-5cdb-414e-9226-6ba534628944.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.704 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[acd0fef8-4f67-4c46-94d5-ed41498ac1cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.705 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-1ea35968-5cdb-414e-9226-6ba534628944
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/1ea35968-5cdb-414e-9226-6ba534628944.pid.haproxy
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 1ea35968-5cdb-414e-9226-6ba534628944
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:51:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.706 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'env', 'PROCESS_TAG=haproxy-1ea35968-5cdb-414e-9226-6ba534628944', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1ea35968-5cdb-414e-9226-6ba534628944.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:51:22 np0005466030 nova_compute[230518]: 2025-10-02 12:51:22.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e319 e319: 3 total, 3 up, 3 in
Oct  2 08:51:23 np0005466030 podman[284052]: 2025-10-02 12:51:23.058432414 +0000 UTC m=+0.025501611 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:51:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:23 np0005466030 podman[284052]: 2025-10-02 12:51:23.437332301 +0000 UTC m=+0.404401478 container create 5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:51:23 np0005466030 systemd[1]: Started libpod-conmon-5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6.scope.
Oct  2 08:51:23 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:51:23 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c0bf0e8b041f6f676207de558f8da163cf1ee93caa7b027803650b8a4521d5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:51:23 np0005466030 podman[284052]: 2025-10-02 12:51:23.740032258 +0000 UTC m=+0.707101455 container init 5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:51:23 np0005466030 podman[284052]: 2025-10-02 12:51:23.744990123 +0000 UTC m=+0.712059280 container start 5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:51:23 np0005466030 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[284068]: [NOTICE]   (284072) : New worker (284076) forked
Oct  2 08:51:23 np0005466030 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[284068]: [NOTICE]   (284072) : Loading success.
Oct  2 08:51:23 np0005466030 nova_compute[230518]: 2025-10-02 12:51:23.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:51:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:24.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:51:24 np0005466030 nova_compute[230518]: 2025-10-02 12:51:24.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:24 np0005466030 nova_compute[230518]: 2025-10-02 12:51:24.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:24.295 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:51:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:24.297 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:51:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:24.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:24 np0005466030 nova_compute[230518]: 2025-10-02 12:51:24.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:51:24 np0005466030 nova_compute[230518]: 2025-10-02 12:51:24.996 2 DEBUG nova.compute.manager [req-68c0ef84-1aa2-4366-bf66-18b8c252c570 req-548d69c1-c3bf-42cf-b45a-54555b99cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:51:24 np0005466030 nova_compute[230518]: 2025-10-02 12:51:24.997 2 DEBUG oslo_concurrency.lockutils [req-68c0ef84-1aa2-4366-bf66-18b8c252c570 req-548d69c1-c3bf-42cf-b45a-54555b99cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:51:24 np0005466030 nova_compute[230518]: 2025-10-02 12:51:24.997 2 DEBUG oslo_concurrency.lockutils [req-68c0ef84-1aa2-4366-bf66-18b8c252c570 req-548d69c1-c3bf-42cf-b45a-54555b99cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:51:24 np0005466030 nova_compute[230518]: 2025-10-02 12:51:24.997 2 DEBUG oslo_concurrency.lockutils [req-68c0ef84-1aa2-4366-bf66-18b8c252c570 req-548d69c1-c3bf-42cf-b45a-54555b99cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:51:24 np0005466030 nova_compute[230518]: 2025-10-02 12:51:24.998 2 DEBUG nova.compute.manager [req-68c0ef84-1aa2-4366-bf66-18b8c252c570 req-548d69c1-c3bf-42cf-b45a-54555b99cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] No waiting events found dispatching network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:51:24 np0005466030 nova_compute[230518]: 2025-10-02 12:51:24.998 2 WARNING nova.compute.manager [req-68c0ef84-1aa2-4366-bf66-18b8c252c570 req-548d69c1-c3bf-42cf-b45a-54555b99cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received unexpected event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.125 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409485.1250057, 33780b49-b5a1-4f3f-a6c5-a00011d53718 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.126 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.128 2 DEBUG nova.compute.manager [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.129 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.132 2 INFO nova.virt.libvirt.driver [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance spawned successfully.#033[00m
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.132 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.148 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.155 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.159 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.159 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.160 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.160 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.161 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.162 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.199 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.199 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409485.125977, 33780b49-b5a1-4f3f-a6c5-a00011d53718 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.199 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] VM Started (Lifecycle Event)
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.226 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.231 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.248 2 DEBUG nova.compute.manager [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.261 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.324 2 DEBUG oslo_concurrency.lockutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.325 2 DEBUG oslo_concurrency.lockutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.325 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct  2 08:51:25 np0005466030 nova_compute[230518]: 2025-10-02 12:51:25.401 2 DEBUG oslo_concurrency.lockutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:51:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:25.948 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:51:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:25.948 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:51:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:25.949 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:51:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:26.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:26.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:28.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:28.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:28 np0005466030 nova_compute[230518]: 2025-10-02 12:51:28.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:29 np0005466030 nova_compute[230518]: 2025-10-02 12:51:29.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:29 np0005466030 ovn_controller[129257]: 2025-10-02T12:51:29Z|00566|binding|INFO|Releasing lport 656124c9-fbda-4e47-b94b-fbe1ed24070e from this chassis (sb_readonly=0)
Oct  2 08:51:29 np0005466030 nova_compute[230518]: 2025-10-02 12:51:29.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:30.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:51:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:30.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:51:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:32.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:32.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e320 e320: 3 total, 3 up, 3 in
Oct  2 08:51:32 np0005466030 podman[284110]: 2025-10-02 12:51:32.80331032 +0000 UTC m=+0.050176436 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  2 08:51:32 np0005466030 podman[284109]: 2025-10-02 12:51:32.803294779 +0000 UTC m=+0.050856296 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:51:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:33 np0005466030 ovn_controller[129257]: 2025-10-02T12:51:33Z|00567|binding|INFO|Releasing lport 656124c9-fbda-4e47-b94b-fbe1ed24070e from this chassis (sb_readonly=0)
Oct  2 08:51:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:51:33.299 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:51:33 np0005466030 nova_compute[230518]: 2025-10-02 12:51:33.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:33 np0005466030 nova_compute[230518]: 2025-10-02 12:51:33.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:34.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:34 np0005466030 nova_compute[230518]: 2025-10-02 12:51:34.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:34.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:36.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:36.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:51:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:38.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:51:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:38.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:38 np0005466030 nova_compute[230518]: 2025-10-02 12:51:38.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:39 np0005466030 nova_compute[230518]: 2025-10-02 12:51:39.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:40.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:40 np0005466030 ovn_controller[129257]: 2025-10-02T12:51:40Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9e:0c:7f 10.100.0.13
Oct  2 08:51:40 np0005466030 ovn_controller[129257]: 2025-10-02T12:51:40Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:0c:7f 10.100.0.13
Oct  2 08:51:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:40.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:41 np0005466030 nova_compute[230518]: 2025-10-02 12:51:41.066 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 08:51:41 np0005466030 nova_compute[230518]: 2025-10-02 12:51:41.066 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  2 08:51:41 np0005466030 nova_compute[230518]: 2025-10-02 12:51:41.086 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  2 08:51:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:42.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:42.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:43 np0005466030 nova_compute[230518]: 2025-10-02 12:51:43.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:44.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:44 np0005466030 nova_compute[230518]: 2025-10-02 12:51:44.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:44.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:51:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:46.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:51:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:51:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:46.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:51:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:51:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:48.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:51:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:48.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:48 np0005466030 nova_compute[230518]: 2025-10-02 12:51:48.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:49 np0005466030 nova_compute[230518]: 2025-10-02 12:51:49.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:50.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:50 np0005466030 nova_compute[230518]: 2025-10-02 12:51:50.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:51:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:50.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:51:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:52.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:52.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:52 np0005466030 ovn_controller[129257]: 2025-10-02T12:51:52Z|00568|binding|INFO|Releasing lport 656124c9-fbda-4e47-b94b-fbe1ed24070e from this chassis (sb_readonly=0)
Oct  2 08:51:52 np0005466030 nova_compute[230518]: 2025-10-02 12:51:52.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:52 np0005466030 podman[284149]: 2025-10-02 12:51:52.802185579 +0000 UTC m=+0.054321155 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:51:52 np0005466030 podman[284148]: 2025-10-02 12:51:52.857205915 +0000 UTC m=+0.112140198 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 08:51:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:53 np0005466030 nova_compute[230518]: 2025-10-02 12:51:53.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:54 np0005466030 nova_compute[230518]: 2025-10-02 12:51:54.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:54.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:54.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:54 np0005466030 nova_compute[230518]: 2025-10-02 12:51:54.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:56.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:56.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:51:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:58.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:51:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:51:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:58.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:51:58 np0005466030 nova_compute[230518]: 2025-10-02 12:51:58.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:51:59 np0005466030 nova_compute[230518]: 2025-10-02 12:51:59.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:52:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:00.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:00.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:00 np0005466030 nova_compute[230518]: 2025-10-02 12:52:00.623 2 DEBUG oslo_concurrency.lockutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:00 np0005466030 nova_compute[230518]: 2025-10-02 12:52:00.624 2 DEBUG oslo_concurrency.lockutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:00 np0005466030 nova_compute[230518]: 2025-10-02 12:52:00.624 2 DEBUG oslo_concurrency.lockutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:00 np0005466030 nova_compute[230518]: 2025-10-02 12:52:00.624 2 DEBUG oslo_concurrency.lockutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:00 np0005466030 nova_compute[230518]: 2025-10-02 12:52:00.624 2 DEBUG oslo_concurrency.lockutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:00 np0005466030 nova_compute[230518]: 2025-10-02 12:52:00.626 2 INFO nova.compute.manager [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Terminating instance#033[00m
Oct  2 08:52:00 np0005466030 nova_compute[230518]: 2025-10-02 12:52:00.627 2 DEBUG nova.compute.manager [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:52:01 np0005466030 kernel: tap6f16f975-11 (unregistering): left promiscuous mode
Oct  2 08:52:01 np0005466030 NetworkManager[44960]: <info>  [1759409521.2204] device (tap6f16f975-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:01 np0005466030 ovn_controller[129257]: 2025-10-02T12:52:01Z|00569|binding|INFO|Releasing lport 6f16f975-1155-4931-9798-72b46e8ca37f from this chassis (sb_readonly=0)
Oct  2 08:52:01 np0005466030 ovn_controller[129257]: 2025-10-02T12:52:01Z|00570|binding|INFO|Setting lport 6f16f975-1155-4931-9798-72b46e8ca37f down in Southbound
Oct  2 08:52:01 np0005466030 ovn_controller[129257]: 2025-10-02T12:52:01Z|00571|binding|INFO|Removing iface tap6f16f975-11 ovn-installed in OVS
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.237 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:0c:7f 10.100.0.13'], port_security=['fa:16:3e:9e:0c:7f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '33780b49-b5a1-4f3f-a6c5-a00011d53718', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ea35968-5cdb-414e-9226-6ba534628944', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1308a7eb298f49baaeaf3dc3a6acf592', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fda5d6a4-b319-45fc-a863-02113f198b5e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.203', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=526db40a-6e83-473d-bcf2-8fd6e9668069, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6f16f975-1155-4931-9798-72b46e8ca37f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:52:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.238 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6f16f975-1155-4931-9798-72b46e8ca37f in datapath 1ea35968-5cdb-414e-9226-6ba534628944 unbound from our chassis#033[00m
Oct  2 08:52:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.239 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ea35968-5cdb-414e-9226-6ba534628944, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:52:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.240 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec9170c-4cca-4f9f-95c7-39d4673a8c14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.241 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 namespace which is not needed anymore#033[00m
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:01 np0005466030 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000088.scope: Deactivated successfully.
Oct  2 08:52:01 np0005466030 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000088.scope: Consumed 14.392s CPU time.
Oct  2 08:52:01 np0005466030 systemd-machined[188247]: Machine qemu-66-instance-00000088 terminated.
Oct  2 08:52:01 np0005466030 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[284068]: [NOTICE]   (284072) : haproxy version is 2.8.14-c23fe91
Oct  2 08:52:01 np0005466030 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[284068]: [NOTICE]   (284072) : path to executable is /usr/sbin/haproxy
Oct  2 08:52:01 np0005466030 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[284068]: [WARNING]  (284072) : Exiting Master process...
Oct  2 08:52:01 np0005466030 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[284068]: [ALERT]    (284072) : Current worker (284076) exited with code 143 (Terminated)
Oct  2 08:52:01 np0005466030 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[284068]: [WARNING]  (284072) : All workers exited. Exiting... (0)
Oct  2 08:52:01 np0005466030 systemd[1]: libpod-5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6.scope: Deactivated successfully.
Oct  2 08:52:01 np0005466030 podman[284217]: 2025-10-02 12:52:01.374126385 +0000 UTC m=+0.044241619 container died 5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 08:52:01 np0005466030 systemd[1]: var-lib-containers-storage-overlay-78c0bf0e8b041f6f676207de558f8da163cf1ee93caa7b027803650b8a4521d5-merged.mount: Deactivated successfully.
Oct  2 08:52:01 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6-userdata-shm.mount: Deactivated successfully.
Oct  2 08:52:01 np0005466030 podman[284217]: 2025-10-02 12:52:01.410859367 +0000 UTC m=+0.080974571 container cleanup 5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:52:01 np0005466030 systemd[1]: libpod-conmon-5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6.scope: Deactivated successfully.
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.461 2 INFO nova.virt.libvirt.driver [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance destroyed successfully.#033[00m
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.462 2 DEBUG nova.objects.instance [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'resources' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:52:01 np0005466030 podman[284250]: 2025-10-02 12:52:01.471438118 +0000 UTC m=+0.041787963 container remove 5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 08:52:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.477 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[17c1d41b-0fdb-49ae-81fd-ed19c291f1dd]: (4, ('Thu Oct  2 12:52:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 (5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6)\n5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6\nThu Oct  2 12:52:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 (5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6)\n5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.479 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[43cec4f4-2a82-41b3-b1b0-7567ad1e6a88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.480 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ea35968-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.482 2 DEBUG nova.virt.libvirt.vif [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:50:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1954698310',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1058292720',id=136,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEq6SX0G1P4eXV7KyXSZ/35WTMB3jbVB1SupKDvjpwDO6estYqWrZvLKSnDQx+vS99wAQs9lzaTS/c5UGzVwCp+w6SXjcPi0171w0SmxtpKZLlMM30YnMg14Y62hnRsW1w==',key_name='tempest-keypair-290079418',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:51:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1308a7eb298f49baaeaf3dc3a6acf592',ramdisk_id='',reservation_id='r-vezvikos',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio
',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-365577023',owner_user_name='tempest-ServerActionsV293TestJSON-365577023-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:51:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='af2648eefb594bc49309cccf408f7ae1',uuid=33780b49-b5a1-4f3f-a6c5-a00011d53718,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.482 2 DEBUG nova.network.os_vif_util [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converting VIF {"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:52:01 np0005466030 kernel: tap1ea35968-50: left promiscuous mode
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.483 2 DEBUG nova.network.os_vif_util [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.484 2 DEBUG os_vif [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.487 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f16f975-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.498 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ba157a-2180-4eae-abe4-09679607a9dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.500 2 INFO os_vif [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11')#033[00m
Oct  2 08:52:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.520 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7ede0d-1272-45a1-a1a8-45c202616315]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.521 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[af766b83-be47-4a23-9d51-b9b4681ff6e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.535 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[04b2f0bb-102c-4f3f-be03-9c532c3a2b0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734600, 'reachable_time': 29634, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284290, 'error': None, 'target': 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:01 np0005466030 systemd[1]: run-netns-ovnmeta\x2d1ea35968\x2d5cdb\x2d414e\x2d9226\x2d6ba534628944.mount: Deactivated successfully.
Oct  2 08:52:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.540 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:52:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.540 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[50d2ed17-4f86-4bba-8176-06f84aebf64b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.988 2 DEBUG nova.compute.manager [req-e997c493-2267-4533-88f3-401222fbc2e4 req-263be49b-fd52-4452-b7a1-ed7cd907d962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-unplugged-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.988 2 DEBUG oslo_concurrency.lockutils [req-e997c493-2267-4533-88f3-401222fbc2e4 req-263be49b-fd52-4452-b7a1-ed7cd907d962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.989 2 DEBUG oslo_concurrency.lockutils [req-e997c493-2267-4533-88f3-401222fbc2e4 req-263be49b-fd52-4452-b7a1-ed7cd907d962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.989 2 DEBUG oslo_concurrency.lockutils [req-e997c493-2267-4533-88f3-401222fbc2e4 req-263be49b-fd52-4452-b7a1-ed7cd907d962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.989 2 DEBUG nova.compute.manager [req-e997c493-2267-4533-88f3-401222fbc2e4 req-263be49b-fd52-4452-b7a1-ed7cd907d962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] No waiting events found dispatching network-vif-unplugged-6f16f975-1155-4931-9798-72b46e8ca37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:52:01 np0005466030 nova_compute[230518]: 2025-10-02 12:52:01.989 2 DEBUG nova.compute.manager [req-e997c493-2267-4533-88f3-401222fbc2e4 req-263be49b-fd52-4452-b7a1-ed7cd907d962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-unplugged-6f16f975-1155-4931-9798-72b46e8ca37f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:52:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:02.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:02.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:02 np0005466030 nova_compute[230518]: 2025-10-02 12:52:02.402 2 INFO nova.virt.libvirt.driver [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Deleting instance files /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718_del#033[00m
Oct  2 08:52:02 np0005466030 nova_compute[230518]: 2025-10-02 12:52:02.403 2 INFO nova.virt.libvirt.driver [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Deletion of /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718_del complete#033[00m
Oct  2 08:52:02 np0005466030 nova_compute[230518]: 2025-10-02 12:52:02.477 2 INFO nova.compute.manager [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Took 1.85 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:52:02 np0005466030 nova_compute[230518]: 2025-10-02 12:52:02.477 2 DEBUG oslo.service.loopingcall [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:52:02 np0005466030 nova_compute[230518]: 2025-10-02 12:52:02.477 2 DEBUG nova.compute.manager [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:52:02 np0005466030 nova_compute[230518]: 2025-10-02 12:52:02.478 2 DEBUG nova.network.neutron [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:52:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:03.409 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:52:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:03.410 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:52:03 np0005466030 nova_compute[230518]: 2025-10-02 12:52:03.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:03.412 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:52:03 np0005466030 podman[284319]: 2025-10-02 12:52:03.509122075 +0000 UTC m=+0.062128840 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:52:03 np0005466030 podman[284320]: 2025-10-02 12:52:03.511068166 +0000 UTC m=+0.064419492 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:52:03 np0005466030 nova_compute[230518]: 2025-10-02 12:52:03.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:03 np0005466030 nova_compute[230518]: 2025-10-02 12:52:03.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:03 np0005466030 nova_compute[230518]: 2025-10-02 12:52:03.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:04 np0005466030 nova_compute[230518]: 2025-10-02 12:52:04.204 2 DEBUG nova.network.neutron [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:52:04 np0005466030 nova_compute[230518]: 2025-10-02 12:52:04.209 2 DEBUG nova.compute.manager [req-c6b130ae-637c-4401-8dfb-1d4c7d7c3734 req-7d7ac1f9-3a04-4425-b9ff-b473f643261f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:52:04 np0005466030 nova_compute[230518]: 2025-10-02 12:52:04.209 2 DEBUG oslo_concurrency.lockutils [req-c6b130ae-637c-4401-8dfb-1d4c7d7c3734 req-7d7ac1f9-3a04-4425-b9ff-b473f643261f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:04 np0005466030 nova_compute[230518]: 2025-10-02 12:52:04.210 2 DEBUG oslo_concurrency.lockutils [req-c6b130ae-637c-4401-8dfb-1d4c7d7c3734 req-7d7ac1f9-3a04-4425-b9ff-b473f643261f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:04 np0005466030 nova_compute[230518]: 2025-10-02 12:52:04.210 2 DEBUG oslo_concurrency.lockutils [req-c6b130ae-637c-4401-8dfb-1d4c7d7c3734 req-7d7ac1f9-3a04-4425-b9ff-b473f643261f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:04 np0005466030 nova_compute[230518]: 2025-10-02 12:52:04.210 2 DEBUG nova.compute.manager [req-c6b130ae-637c-4401-8dfb-1d4c7d7c3734 req-7d7ac1f9-3a04-4425-b9ff-b473f643261f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] No waiting events found dispatching network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:52:04 np0005466030 nova_compute[230518]: 2025-10-02 12:52:04.211 2 WARNING nova.compute.manager [req-c6b130ae-637c-4401-8dfb-1d4c7d7c3734 req-7d7ac1f9-3a04-4425-b9ff-b473f643261f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received unexpected event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:52:04 np0005466030 podman[284509]: 2025-10-02 12:52:04.225320094 +0000 UTC m=+0.148169049 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:52:04 np0005466030 nova_compute[230518]: 2025-10-02 12:52:04.239 2 INFO nova.compute.manager [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Took 1.76 seconds to deallocate network for instance.#033[00m
Oct  2 08:52:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:04.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:04 np0005466030 nova_compute[230518]: 2025-10-02 12:52:04.335 2 DEBUG nova.compute.manager [req-a4510d72-a7be-416e-b22a-eada7ded4f16 req-aa60cb57-ac4b-4482-ac04-338327e6ad1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-deleted-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:52:04 np0005466030 podman[284509]: 2025-10-02 12:52:04.370448838 +0000 UTC m=+0.293297813 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  2 08:52:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:04.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:04 np0005466030 nova_compute[230518]: 2025-10-02 12:52:04.626 2 INFO nova.compute.manager [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Took 0.39 seconds to detach 1 volumes for instance.#033[00m
Oct  2 08:52:04 np0005466030 nova_compute[230518]: 2025-10-02 12:52:04.627 2 DEBUG nova.compute.manager [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Deleting volume: a7bdd212-0d34-40b8-9af9-06388d215028 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Oct  2 08:52:05 np0005466030 nova_compute[230518]: 2025-10-02 12:52:05.097 2 DEBUG oslo_concurrency.lockutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:05 np0005466030 nova_compute[230518]: 2025-10-02 12:52:05.098 2 DEBUG oslo_concurrency.lockutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:05 np0005466030 nova_compute[230518]: 2025-10-02 12:52:05.169 2 DEBUG oslo_concurrency.processutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:52:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4094236586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:52:05 np0005466030 nova_compute[230518]: 2025-10-02 12:52:05.587 2 DEBUG oslo_concurrency.processutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:05 np0005466030 nova_compute[230518]: 2025-10-02 12:52:05.593 2 DEBUG nova.compute.provider_tree [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:52:05 np0005466030 nova_compute[230518]: 2025-10-02 12:52:05.629 2 DEBUG nova.scheduler.client.report [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:52:05 np0005466030 nova_compute[230518]: 2025-10-02 12:52:05.652 2 DEBUG oslo_concurrency.lockutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:05 np0005466030 nova_compute[230518]: 2025-10-02 12:52:05.676 2 INFO nova.scheduler.client.report [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Deleted allocations for instance 33780b49-b5a1-4f3f-a6c5-a00011d53718#033[00m
Oct  2 08:52:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:52:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3678137017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:52:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:52:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3678137017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:52:05 np0005466030 nova_compute[230518]: 2025-10-02 12:52:05.785 2 DEBUG oslo_concurrency.lockutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:52:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:52:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:52:06 np0005466030 nova_compute[230518]: 2025-10-02 12:52:06.073 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:06 np0005466030 nova_compute[230518]: 2025-10-02 12:52:06.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:06 np0005466030 nova_compute[230518]: 2025-10-02 12:52:06.095 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:06 np0005466030 nova_compute[230518]: 2025-10-02 12:52:06.095 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:06 np0005466030 nova_compute[230518]: 2025-10-02 12:52:06.095 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:52:06 np0005466030 nova_compute[230518]: 2025-10-02 12:52:06.095 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:52:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:06.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:52:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:52:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:06.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:52:06 np0005466030 nova_compute[230518]: 2025-10-02 12:52:06.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:52:06 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3880822609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:52:06 np0005466030 nova_compute[230518]: 2025-10-02 12:52:06.550 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:06 np0005466030 nova_compute[230518]: 2025-10-02 12:52:06.700 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:52:06 np0005466030 nova_compute[230518]: 2025-10-02 12:52:06.701 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4377MB free_disk=20.876129150390625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:52:06 np0005466030 nova_compute[230518]: 2025-10-02 12:52:06.702 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:06 np0005466030 nova_compute[230518]: 2025-10-02 12:52:06.702 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:06 np0005466030 nova_compute[230518]: 2025-10-02 12:52:06.759 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:52:06 np0005466030 nova_compute[230518]: 2025-10-02 12:52:06.760 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:52:06 np0005466030 nova_compute[230518]: 2025-10-02 12:52:06.869 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:07 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:52:07 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:52:07 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:52:07 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:52:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:52:07 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1114343258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:52:07 np0005466030 nova_compute[230518]: 2025-10-02 12:52:07.346 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:07 np0005466030 nova_compute[230518]: 2025-10-02 12:52:07.352 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:52:07 np0005466030 nova_compute[230518]: 2025-10-02 12:52:07.371 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:52:07 np0005466030 nova_compute[230518]: 2025-10-02 12:52:07.395 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:52:07 np0005466030 nova_compute[230518]: 2025-10-02 12:52:07.396 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e321 e321: 3 total, 3 up, 3 in
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #109. Immutable memtables: 0.
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.097661) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 109
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528097743, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1348, "num_deletes": 256, "total_data_size": 2864792, "memory_usage": 2911704, "flush_reason": "Manual Compaction"}
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #110: started
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528190921, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 110, "file_size": 1204616, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55035, "largest_seqno": 56378, "table_properties": {"data_size": 1199873, "index_size": 2139, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12972, "raw_average_key_size": 21, "raw_value_size": 1189398, "raw_average_value_size": 1962, "num_data_blocks": 94, "num_entries": 606, "num_filter_entries": 606, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409429, "oldest_key_time": 1759409429, "file_creation_time": 1759409528, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 93303 microseconds, and 4143 cpu microseconds.
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.190974) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #110: 1204616 bytes OK
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.191034) [db/memtable_list.cc:519] [default] Level-0 commit table #110 started
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.197527) [db/memtable_list.cc:722] [default] Level-0 commit table #110: memtable #1 done
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.197557) EVENT_LOG_v1 {"time_micros": 1759409528197551, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.197577) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 2858304, prev total WAL file size 2858304, number of live WAL files 2.
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000106.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.198568) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373533' seq:72057594037927935, type:22 .. '6D6772737461740032303037' seq:0, type:0; will stop at (end)
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [110(1176KB)], [108(12MB)]
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528198640, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [110], "files_L6": [108], "score": -1, "input_data_size": 14049541, "oldest_snapshot_seqno": -1}
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:08.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:08 np0005466030 nova_compute[230518]: 2025-10-02 12:52:08.370 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:08 np0005466030 nova_compute[230518]: 2025-10-02 12:52:08.371 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:52:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:08.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #111: 8039 keys, 10810823 bytes, temperature: kUnknown
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528420064, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 111, "file_size": 10810823, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10758566, "index_size": 31025, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20165, "raw_key_size": 208379, "raw_average_key_size": 25, "raw_value_size": 10616992, "raw_average_value_size": 1320, "num_data_blocks": 1215, "num_entries": 8039, "num_filter_entries": 8039, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409528, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 111, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.420518) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 10810823 bytes
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.446391) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.4 rd, 48.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 12.2 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(20.6) write-amplify(9.0) OK, records in: 8526, records dropped: 487 output_compression: NoCompression
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.446432) EVENT_LOG_v1 {"time_micros": 1759409528446417, "job": 68, "event": "compaction_finished", "compaction_time_micros": 221514, "compaction_time_cpu_micros": 30965, "output_level": 6, "num_output_files": 1, "total_output_size": 10810823, "num_input_records": 8526, "num_output_records": 8039, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528446793, "job": 68, "event": "table_file_deletion", "file_number": 110}
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000108.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528448947, "job": 68, "event": "table_file_deletion", "file_number": 108}
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.198460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.449062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.449067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.449069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.449070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:52:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.449071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:52:08 np0005466030 nova_compute[230518]: 2025-10-02 12:52:08.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:10 np0005466030 nova_compute[230518]: 2025-10-02 12:52:10.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:10 np0005466030 nova_compute[230518]: 2025-10-02 12:52:10.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:52:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:52:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:10.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:52:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:52:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:10.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:52:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:52:10 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3513686125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:52:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:52:10 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3513686125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:52:11 np0005466030 nova_compute[230518]: 2025-10-02 12:52:11.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:11 np0005466030 nova_compute[230518]: 2025-10-02 12:52:11.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:12.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e322 e322: 3 total, 3 up, 3 in
Oct  2 08:52:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:52:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:12.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:52:13 np0005466030 nova_compute[230518]: 2025-10-02 12:52:13.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:13 np0005466030 nova_compute[230518]: 2025-10-02 12:52:13.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e323 e323: 3 total, 3 up, 3 in
Oct  2 08:52:13 np0005466030 nova_compute[230518]: 2025-10-02 12:52:13.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:14.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:14.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:14 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:52:14 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:52:16 np0005466030 nova_compute[230518]: 2025-10-02 12:52:16.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:16.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:16.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:16 np0005466030 nova_compute[230518]: 2025-10-02 12:52:16.460 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409521.4588168, 33780b49-b5a1-4f3f-a6c5-a00011d53718 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:52:16 np0005466030 nova_compute[230518]: 2025-10-02 12:52:16.460 2 INFO nova.compute.manager [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:52:16 np0005466030 nova_compute[230518]: 2025-10-02 12:52:16.480 2 DEBUG nova.compute.manager [None req-f6f12623-d4c3-4e48-aa39-40ff5b01ef90 - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:52:16 np0005466030 nova_compute[230518]: 2025-10-02 12:52:16.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:17 np0005466030 nova_compute[230518]: 2025-10-02 12:52:17.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:17 np0005466030 nova_compute[230518]: 2025-10-02 12:52:17.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:52:17 np0005466030 nova_compute[230518]: 2025-10-02 12:52:17.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:52:17 np0005466030 nova_compute[230518]: 2025-10-02 12:52:17.081 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:52:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:18.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:52:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:18.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:52:18 np0005466030 nova_compute[230518]: 2025-10-02 12:52:18.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:20.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:20.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:21 np0005466030 nova_compute[230518]: 2025-10-02 12:52:21.076 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:52:21 np0005466030 nova_compute[230518]: 2025-10-02 12:52:21.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:22.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:22.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e324 e324: 3 total, 3 up, 3 in
Oct  2 08:52:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:23 np0005466030 podman[284884]: 2025-10-02 12:52:23.808551816 +0000 UTC m=+0.055289066 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 08:52:23 np0005466030 podman[284883]: 2025-10-02 12:52:23.865144191 +0000 UTC m=+0.106059369 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Oct  2 08:52:23 np0005466030 nova_compute[230518]: 2025-10-02 12:52:23.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:24.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:24.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:24 np0005466030 nova_compute[230518]: 2025-10-02 12:52:24.466 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Acquiring lock "9668bd28-30e7-4dd0-87a8-6577135cc19b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:52:24 np0005466030 nova_compute[230518]: 2025-10-02 12:52:24.467 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "9668bd28-30e7-4dd0-87a8-6577135cc19b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:52:24 np0005466030 nova_compute[230518]: 2025-10-02 12:52:24.504 2 DEBUG nova.compute.manager [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:52:24 np0005466030 nova_compute[230518]: 2025-10-02 12:52:24.602 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:52:24 np0005466030 nova_compute[230518]: 2025-10-02 12:52:24.603 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:52:24 np0005466030 nova_compute[230518]: 2025-10-02 12:52:24.613 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:52:24 np0005466030 nova_compute[230518]: 2025-10-02 12:52:24.613 2 INFO nova.compute.claims [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:52:24 np0005466030 nova_compute[230518]: 2025-10-02 12:52:24.722 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:52:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:52:25 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1864420010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.181 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.187 2 DEBUG nova.compute.provider_tree [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.213 2 DEBUG nova.scheduler.client.report [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.243 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.244 2 DEBUG nova.compute.manager [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.293 2 DEBUG nova.compute.manager [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.319 2 INFO nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.348 2 DEBUG nova.compute.manager [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.378 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.379 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.411 2 DEBUG nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.485 2 DEBUG nova.compute.manager [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.486 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.487 2 INFO nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Creating image(s)
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.516 2 DEBUG nova.storage.rbd_utils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] rbd image 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.541 2 DEBUG nova.storage.rbd_utils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] rbd image 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.563 2 DEBUG nova.storage.rbd_utils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] rbd image 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.567 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.622 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.623 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.631 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.631 2 INFO nova.compute.claims [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.634 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.635 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.635 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.636 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.659 2 DEBUG nova.storage.rbd_utils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] rbd image 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.662 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:52:25 np0005466030 nova_compute[230518]: 2025-10-02 12:52:25.916 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:52:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:25.949 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:52:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:25.949 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:52:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:25.949 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:52:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:26.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:26.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:26 np0005466030 nova_compute[230518]: 2025-10-02 12:52:26.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:52:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:52:26 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3065473880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:52:26 np0005466030 nova_compute[230518]: 2025-10-02 12:52:26.760 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.844s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:52:26 np0005466030 nova_compute[230518]: 2025-10-02 12:52:26.766 2 DEBUG nova.compute.provider_tree [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:52:26 np0005466030 nova_compute[230518]: 2025-10-02 12:52:26.813 2 DEBUG nova.scheduler.client.report [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:52:26 np0005466030 nova_compute[230518]: 2025-10-02 12:52:26.853 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:52:26 np0005466030 nova_compute[230518]: 2025-10-02 12:52:26.854 2 DEBUG nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:52:26 np0005466030 nova_compute[230518]: 2025-10-02 12:52:26.931 2 DEBUG nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:52:26 np0005466030 nova_compute[230518]: 2025-10-02 12:52:26.931 2 DEBUG nova.network.neutron [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:52:26 np0005466030 nova_compute[230518]: 2025-10-02 12:52:26.951 2 INFO nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:52:26 np0005466030 nova_compute[230518]: 2025-10-02 12:52:26.979 2 DEBUG nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:52:27 np0005466030 nova_compute[230518]: 2025-10-02 12:52:27.102 2 DEBUG nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:52:27 np0005466030 nova_compute[230518]: 2025-10-02 12:52:27.103 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:52:27 np0005466030 nova_compute[230518]: 2025-10-02 12:52:27.103 2 INFO nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Creating image(s)
Oct  2 08:52:27 np0005466030 nova_compute[230518]: 2025-10-02 12:52:27.134 2 DEBUG nova.storage.rbd_utils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d70a747f-a75e-4341-89db-5953efdbbbd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:52:27 np0005466030 nova_compute[230518]: 2025-10-02 12:52:27.163 2 DEBUG nova.storage.rbd_utils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d70a747f-a75e-4341-89db-5953efdbbbd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:52:27 np0005466030 nova_compute[230518]: 2025-10-02 12:52:27.191 2 DEBUG nova.storage.rbd_utils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d70a747f-a75e-4341-89db-5953efdbbbd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:52:27 np0005466030 nova_compute[230518]: 2025-10-02 12:52:27.195 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:52:27 np0005466030 nova_compute[230518]: 2025-10-02 12:52:27.258 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:52:27 np0005466030 nova_compute[230518]: 2025-10-02 12:52:27.260 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:52:27 np0005466030 nova_compute[230518]: 2025-10-02 12:52:27.260 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:52:27 np0005466030 nova_compute[230518]: 2025-10-02 12:52:27.261 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:52:28 np0005466030 nova_compute[230518]: 2025-10-02 12:52:28.015 2 DEBUG nova.storage.rbd_utils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d70a747f-a75e-4341-89db-5953efdbbbd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:52:28 np0005466030 nova_compute[230518]: 2025-10-02 12:52:28.019 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 d70a747f-a75e-4341-89db-5953efdbbbd9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:52:28 np0005466030 nova_compute[230518]: 2025-10-02 12:52:28.050 2 DEBUG nova.policy [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:52:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:28.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:52:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:28.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:52:28 np0005466030 nova_compute[230518]: 2025-10-02 12:52:28.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.166 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.226 2 DEBUG nova.storage.rbd_utils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] resizing rbd image 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.596 2 DEBUG nova.objects.instance [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lazy-loading 'migration_context' on Instance uuid 9668bd28-30e7-4dd0-87a8-6577135cc19b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.611 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.611 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Ensure instance console log exists: /var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.612 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.612 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.612 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.613 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.618 2 WARNING nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.629 2 DEBUG nova.virt.libvirt.host [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.630 2 DEBUG nova.virt.libvirt.host [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.632 2 DEBUG nova.virt.libvirt.host [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.633 2 DEBUG nova.virt.libvirt.host [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.634 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.634 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.635 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.635 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.635 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.635 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.635 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.636 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.636 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.636 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.636 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.637 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:52:29 np0005466030 nova_compute[230518]: 2025-10-02 12:52:29.639 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:52:30 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4066300551' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:52:30 np0005466030 nova_compute[230518]: 2025-10-02 12:52:30.069 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:30 np0005466030 nova_compute[230518]: 2025-10-02 12:52:30.099 2 DEBUG nova.storage.rbd_utils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] rbd image 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:52:30 np0005466030 nova_compute[230518]: 2025-10-02 12:52:30.103 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:30 np0005466030 nova_compute[230518]: 2025-10-02 12:52:30.352 2 DEBUG nova.network.neutron [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Successfully created port: 9e761925-3065-4b15-ab37-4ce18061fcf6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:52:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:30.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:30.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:52:30 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3447191113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:52:30 np0005466030 nova_compute[230518]: 2025-10-02 12:52:30.531 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:30 np0005466030 nova_compute[230518]: 2025-10-02 12:52:30.533 2 DEBUG nova.objects.instance [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lazy-loading 'pci_devices' on Instance uuid 9668bd28-30e7-4dd0-87a8-6577135cc19b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:52:30 np0005466030 nova_compute[230518]: 2025-10-02 12:52:30.554 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:52:30 np0005466030 nova_compute[230518]:  <uuid>9668bd28-30e7-4dd0-87a8-6577135cc19b</uuid>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:  <name>instance-0000008e</name>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServersAaction247Test-server-392071192</nova:name>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:52:29</nova:creationTime>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:52:30 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:        <nova:user uuid="c6b3ffaf413a4cc592b58bfbf3b40c2b">tempest-ServersAaction247Test-547354818-project-member</nova:user>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:        <nova:project uuid="324f52a964c64bfc9c214faab2ddda5b">tempest-ServersAaction247Test-547354818</nova:project>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <nova:ports/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <entry name="serial">9668bd28-30e7-4dd0-87a8-6577135cc19b</entry>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <entry name="uuid">9668bd28-30e7-4dd0-87a8-6577135cc19b</entry>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/9668bd28-30e7-4dd0-87a8-6577135cc19b_disk">
Oct  2 08:52:30 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:52:30 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/9668bd28-30e7-4dd0-87a8-6577135cc19b_disk.config">
Oct  2 08:52:30 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:52:30 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b/console.log" append="off"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:52:30 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:52:30 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:52:30 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:52:30 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:52:30 np0005466030 nova_compute[230518]: 2025-10-02 12:52:30.648 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:52:30 np0005466030 nova_compute[230518]: 2025-10-02 12:52:30.649 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:52:30 np0005466030 nova_compute[230518]: 2025-10-02 12:52:30.649 2 INFO nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Using config drive#033[00m
Oct  2 08:52:30 np0005466030 nova_compute[230518]: 2025-10-02 12:52:30.676 2 DEBUG nova.storage.rbd_utils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] rbd image 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:52:30 np0005466030 nova_compute[230518]: 2025-10-02 12:52:30.936 2 INFO nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Creating config drive at /var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b/disk.config#033[00m
Oct  2 08:52:30 np0005466030 nova_compute[230518]: 2025-10-02 12:52:30.947 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcmpixkpr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:31 np0005466030 nova_compute[230518]: 2025-10-02 12:52:31.123 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcmpixkpr" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:31 np0005466030 nova_compute[230518]: 2025-10-02 12:52:31.170 2 DEBUG nova.storage.rbd_utils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] rbd image 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:52:31 np0005466030 nova_compute[230518]: 2025-10-02 12:52:31.177 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b/disk.config 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:31 np0005466030 nova_compute[230518]: 2025-10-02 12:52:31.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:31 np0005466030 nova_compute[230518]: 2025-10-02 12:52:31.856 2 DEBUG nova.network.neutron [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Successfully updated port: 9e761925-3065-4b15-ab37-4ce18061fcf6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:52:31 np0005466030 nova_compute[230518]: 2025-10-02 12:52:31.875 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:52:31 np0005466030 nova_compute[230518]: 2025-10-02 12:52:31.875 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:52:31 np0005466030 nova_compute[230518]: 2025-10-02 12:52:31.875 2 DEBUG nova.network.neutron [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:52:32 np0005466030 nova_compute[230518]: 2025-10-02 12:52:32.032 2 DEBUG nova.compute.manager [req-cd3e4a0a-f7b2-42b9-980e-94e50d4b3394 req-73294807-c211-4594-b7a9-2a53c39ef3e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-changed-9e761925-3065-4b15-ab37-4ce18061fcf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:52:32 np0005466030 nova_compute[230518]: 2025-10-02 12:52:32.033 2 DEBUG nova.compute.manager [req-cd3e4a0a-f7b2-42b9-980e-94e50d4b3394 req-73294807-c211-4594-b7a9-2a53c39ef3e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Refreshing instance network info cache due to event network-changed-9e761925-3065-4b15-ab37-4ce18061fcf6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:52:32 np0005466030 nova_compute[230518]: 2025-10-02 12:52:32.034 2 DEBUG oslo_concurrency.lockutils [req-cd3e4a0a-f7b2-42b9-980e-94e50d4b3394 req-73294807-c211-4594-b7a9-2a53c39ef3e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:52:32 np0005466030 nova_compute[230518]: 2025-10-02 12:52:32.078 2 DEBUG nova.network.neutron [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:52:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:32.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:32.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:33 np0005466030 nova_compute[230518]: 2025-10-02 12:52:33.392 2 DEBUG nova.network.neutron [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:52:33 np0005466030 nova_compute[230518]: 2025-10-02 12:52:33.418 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:52:33 np0005466030 nova_compute[230518]: 2025-10-02 12:52:33.419 2 DEBUG nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Instance network_info: |[{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:52:33 np0005466030 nova_compute[230518]: 2025-10-02 12:52:33.419 2 DEBUG oslo_concurrency.lockutils [req-cd3e4a0a-f7b2-42b9-980e-94e50d4b3394 req-73294807-c211-4594-b7a9-2a53c39ef3e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:52:33 np0005466030 nova_compute[230518]: 2025-10-02 12:52:33.420 2 DEBUG nova.network.neutron [req-cd3e4a0a-f7b2-42b9-980e-94e50d4b3394 req-73294807-c211-4594-b7a9-2a53c39ef3e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Refreshing network info cache for port 9e761925-3065-4b15-ab37-4ce18061fcf6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:52:33 np0005466030 nova_compute[230518]: 2025-10-02 12:52:33.557 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 d70a747f-a75e-4341-89db-5953efdbbbd9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:33 np0005466030 nova_compute[230518]: 2025-10-02 12:52:33.666 2 DEBUG nova.storage.rbd_utils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image d70a747f-a75e-4341-89db-5953efdbbbd9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:52:33 np0005466030 podman[285403]: 2025-10-02 12:52:33.811842928 +0000 UTC m=+0.055802912 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 08:52:33 np0005466030 podman[285404]: 2025-10-02 12:52:33.828595243 +0000 UTC m=+0.056527104 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  2 08:52:33 np0005466030 nova_compute[230518]: 2025-10-02 12:52:33.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:34.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:34.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:35 np0005466030 nova_compute[230518]: 2025-10-02 12:52:35.233 2 DEBUG nova.network.neutron [req-cd3e4a0a-f7b2-42b9-980e-94e50d4b3394 req-73294807-c211-4594-b7a9-2a53c39ef3e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updated VIF entry in instance network info cache for port 9e761925-3065-4b15-ab37-4ce18061fcf6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:52:35 np0005466030 nova_compute[230518]: 2025-10-02 12:52:35.234 2 DEBUG nova.network.neutron [req-cd3e4a0a-f7b2-42b9-980e-94e50d4b3394 req-73294807-c211-4594-b7a9-2a53c39ef3e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:52:35 np0005466030 nova_compute[230518]: 2025-10-02 12:52:35.255 2 DEBUG oslo_concurrency.lockutils [req-cd3e4a0a-f7b2-42b9-980e-94e50d4b3394 req-73294807-c211-4594-b7a9-2a53c39ef3e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:52:35 np0005466030 nova_compute[230518]: 2025-10-02 12:52:35.509 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b/disk.config 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:35 np0005466030 nova_compute[230518]: 2025-10-02 12:52:35.509 2 INFO nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Deleting local config drive /var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b/disk.config because it was imported into RBD.#033[00m
Oct  2 08:52:35 np0005466030 systemd-machined[188247]: New machine qemu-67-instance-0000008e.
Oct  2 08:52:35 np0005466030 systemd[1]: Started Virtual Machine qemu-67-instance-0000008e.
Oct  2 08:52:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:52:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:36.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.445 2 DEBUG nova.objects.instance [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid d70a747f-a75e-4341-89db-5953efdbbbd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:52:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:36.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.461 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.461 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Ensure instance console log exists: /var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.462 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.462 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.462 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.465 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Start _get_guest_xml network_info=[{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.469 2 WARNING nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.478 2 DEBUG nova.virt.libvirt.host [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.480 2 DEBUG nova.virt.libvirt.host [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.484 2 DEBUG nova.virt.libvirt.host [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.485 2 DEBUG nova.virt.libvirt.host [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.487 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.487 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.488 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.489 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.489 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.489 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.490 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.490 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.490 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.491 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.491 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.491 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.494 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:52:36 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1082652408' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.922 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.947 2 DEBUG nova.storage.rbd_utils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:52:36 np0005466030 nova_compute[230518]: 2025-10-02 12:52:36.951 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.231 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409557.2308712, 9668bd28-30e7-4dd0-87a8-6577135cc19b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.232 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.239 2 DEBUG nova.compute.manager [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.239 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.243 2 INFO nova.virt.libvirt.driver [-] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Instance spawned successfully.#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.244 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.257 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.264 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.266 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.267 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.267 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.267 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.268 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.268 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.294 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.295 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409557.2311225, 9668bd28-30e7-4dd0-87a8-6577135cc19b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.295 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] VM Started (Lifecycle Event)#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.325 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.327 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.339 2 INFO nova.compute.manager [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Took 11.85 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.340 2 DEBUG nova.compute.manager [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.353 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:52:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:52:37 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1889640611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.399 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.401 2 DEBUG nova.virt.libvirt.vif [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1834334084',display_name='tempest-TestNetworkBasicOps-server-1834334084',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1834334084',id=143,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBWzyEyTLwn/OnMkL7XEVYTgkdobvFvcDEiRuV0NOSu0/Vc6+w7CYW/OJQq8xHJ7yByGK0zJNMNYx8BDnEAMNmh8dLyyLFr5uFvHFoK31s13NXGnnrP3EXSfoIgrfk2ieg==',key_name='tempest-TestNetworkBasicOps-1099517484',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-oim7lxfm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:52:27Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d70a747f-a75e-4341-89db-5953efdbbbd9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.402 2 DEBUG nova.network.os_vif_util [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.403 2 DEBUG nova.network.os_vif_util [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:28:1d,bridge_name='br-int',has_traffic_filtering=True,id=9e761925-3065-4b15-ab37-4ce18061fcf6,network=Network(a8923666-d594-4b3c-acca-d8d2652ab2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e761925-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.405 2 DEBUG nova.objects.instance [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid d70a747f-a75e-4341-89db-5953efdbbbd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.409 2 INFO nova.compute.manager [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Took 12.84 seconds to build instance.#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.430 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:52:37 np0005466030 nova_compute[230518]:  <uuid>d70a747f-a75e-4341-89db-5953efdbbbd9</uuid>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:  <name>instance-0000008f</name>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <nova:name>tempest-TestNetworkBasicOps-server-1834334084</nova:name>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:52:36</nova:creationTime>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:52:37 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:        <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:        <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:        <nova:port uuid="9e761925-3065-4b15-ab37-4ce18061fcf6">
Oct  2 08:52:37 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <entry name="serial">d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <entry name="uuid">d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk">
Oct  2 08:52:37 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:52:37 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config">
Oct  2 08:52:37 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:52:37 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:61:28:1d"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <target dev="tap9e761925-30"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log" append="off"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:52:37 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:52:37 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:52:37 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:52:37 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.433 2 DEBUG nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Preparing to wait for external event network-vif-plugged-9e761925-3065-4b15-ab37-4ce18061fcf6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.433 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.434 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.435 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.437 2 DEBUG nova.virt.libvirt.vif [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1834334084',display_name='tempest-TestNetworkBasicOps-server-1834334084',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1834334084',id=143,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBWzyEyTLwn/OnMkL7XEVYTgkdobvFvcDEiRuV0NOSu0/Vc6+w7CYW/OJQq8xHJ7yByGK0zJNMNYx8BDnEAMNmh8dLyyLFr5uFvHFoK31s13NXGnnrP3EXSfoIgrfk2ieg==',key_name='tempest-TestNetworkBasicOps-1099517484',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-oim7lxfm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:52:27Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d70a747f-a75e-4341-89db-5953efdbbbd9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.437 2 DEBUG nova.network.os_vif_util [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.439 2 DEBUG nova.network.os_vif_util [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:28:1d,bridge_name='br-int',has_traffic_filtering=True,id=9e761925-3065-4b15-ab37-4ce18061fcf6,network=Network(a8923666-d594-4b3c-acca-d8d2652ab2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e761925-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.440 2 DEBUG os_vif [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:28:1d,bridge_name='br-int',has_traffic_filtering=True,id=9e761925-3065-4b15-ab37-4ce18061fcf6,network=Network(a8923666-d594-4b3c-acca-d8d2652ab2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e761925-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.444 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.445 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "9668bd28-30e7-4dd0-87a8-6577135cc19b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.450 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e761925-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.451 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e761925-30, col_values=(('external_ids', {'iface-id': '9e761925-3065-4b15-ab37-4ce18061fcf6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:28:1d', 'vm-uuid': 'd70a747f-a75e-4341-89db-5953efdbbbd9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:37 np0005466030 NetworkManager[44960]: <info>  [1759409557.4541] manager: (tap9e761925-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/264)
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.461 2 INFO os_vif [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:28:1d,bridge_name='br-int',has_traffic_filtering=True,id=9e761925-3065-4b15-ab37-4ce18061fcf6,network=Network(a8923666-d594-4b3c-acca-d8d2652ab2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e761925-30')#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.610 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.610 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.611 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:61:28:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.611 2 INFO nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Using config drive#033[00m
Oct  2 08:52:37 np0005466030 nova_compute[230518]: 2025-10-02 12:52:37.808 2 DEBUG nova.storage.rbd_utils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:52:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:38.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:38.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:38 np0005466030 nova_compute[230518]: 2025-10-02 12:52:38.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:39 np0005466030 nova_compute[230518]: 2025-10-02 12:52:39.394 2 INFO nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Creating config drive at /var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/disk.config#033[00m
Oct  2 08:52:39 np0005466030 nova_compute[230518]: 2025-10-02 12:52:39.398 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxv88v742 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:39 np0005466030 nova_compute[230518]: 2025-10-02 12:52:39.542 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxv88v742" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:39 np0005466030 nova_compute[230518]: 2025-10-02 12:52:39.565 2 DEBUG nova.storage.rbd_utils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:52:39 np0005466030 nova_compute[230518]: 2025-10-02 12:52:39.569 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/disk.config d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:52:39 np0005466030 nova_compute[230518]: 2025-10-02 12:52:39.832 2 DEBUG nova.compute.manager [None req-e2f4b6cf-6fca-483f-bbf1-5158f76b80f4 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:52:39 np0005466030 nova_compute[230518]: 2025-10-02 12:52:39.871 2 INFO nova.compute.manager [None req-e2f4b6cf-6fca-483f-bbf1-5158f76b80f4 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] instance snapshotting#033[00m
Oct  2 08:52:39 np0005466030 nova_compute[230518]: 2025-10-02 12:52:39.872 2 DEBUG nova.objects.instance [None req-e2f4b6cf-6fca-483f-bbf1-5158f76b80f4 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lazy-loading 'flavor' on Instance uuid 9668bd28-30e7-4dd0-87a8-6577135cc19b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:52:40 np0005466030 nova_compute[230518]: 2025-10-02 12:52:40.013 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Acquiring lock "9668bd28-30e7-4dd0-87a8-6577135cc19b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:40 np0005466030 nova_compute[230518]: 2025-10-02 12:52:40.014 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "9668bd28-30e7-4dd0-87a8-6577135cc19b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:40 np0005466030 nova_compute[230518]: 2025-10-02 12:52:40.015 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Acquiring lock "9668bd28-30e7-4dd0-87a8-6577135cc19b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:40 np0005466030 nova_compute[230518]: 2025-10-02 12:52:40.015 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "9668bd28-30e7-4dd0-87a8-6577135cc19b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:40 np0005466030 nova_compute[230518]: 2025-10-02 12:52:40.016 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "9668bd28-30e7-4dd0-87a8-6577135cc19b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:40 np0005466030 nova_compute[230518]: 2025-10-02 12:52:40.018 2 INFO nova.compute.manager [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Terminating instance#033[00m
Oct  2 08:52:40 np0005466030 nova_compute[230518]: 2025-10-02 12:52:40.019 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Acquiring lock "refresh_cache-9668bd28-30e7-4dd0-87a8-6577135cc19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:52:40 np0005466030 nova_compute[230518]: 2025-10-02 12:52:40.020 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Acquired lock "refresh_cache-9668bd28-30e7-4dd0-87a8-6577135cc19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:52:40 np0005466030 nova_compute[230518]: 2025-10-02 12:52:40.021 2 DEBUG nova.network.neutron [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:52:40 np0005466030 nova_compute[230518]: 2025-10-02 12:52:40.366 2 INFO nova.virt.libvirt.driver [None req-e2f4b6cf-6fca-483f-bbf1-5158f76b80f4 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Beginning live snapshot process#033[00m
Oct  2 08:52:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:40.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:40 np0005466030 nova_compute[230518]: 2025-10-02 12:52:40.423 2 DEBUG nova.compute.manager [None req-e2f4b6cf-6fca-483f-bbf1-5158f76b80f4 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390#033[00m
Oct  2 08:52:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:40.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:40 np0005466030 nova_compute[230518]: 2025-10-02 12:52:40.641 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/disk.config d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:52:40 np0005466030 nova_compute[230518]: 2025-10-02 12:52:40.642 2 INFO nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Deleting local config drive /var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/disk.config because it was imported into RBD.#033[00m
Oct  2 08:52:40 np0005466030 kernel: tap9e761925-30: entered promiscuous mode
Oct  2 08:52:40 np0005466030 nova_compute[230518]: 2025-10-02 12:52:40.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:40 np0005466030 ovn_controller[129257]: 2025-10-02T12:52:40Z|00572|binding|INFO|Claiming lport 9e761925-3065-4b15-ab37-4ce18061fcf6 for this chassis.
Oct  2 08:52:40 np0005466030 ovn_controller[129257]: 2025-10-02T12:52:40Z|00573|binding|INFO|9e761925-3065-4b15-ab37-4ce18061fcf6: Claiming fa:16:3e:61:28:1d 10.100.0.4
Oct  2 08:52:40 np0005466030 NetworkManager[44960]: <info>  [1759409560.7412] manager: (tap9e761925-30): new Tun device (/org/freedesktop/NetworkManager/Devices/265)
Oct  2 08:52:40 np0005466030 nova_compute[230518]: 2025-10-02 12:52:40.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.762 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:28:1d 10.100.0.4'], port_security=['fa:16:3e:61:28:1d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd70a747f-a75e-4341-89db-5953efdbbbd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8923666-d594-4b3c-acca-d8d2652ab2bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': '50493e8d-b9e4-415b-bc68-4eb501d460cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d9eaefc8-91b5-45ac-8f60-f49bcfa08eb3, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=9e761925-3065-4b15-ab37-4ce18061fcf6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:52:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.763 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 9e761925-3065-4b15-ab37-4ce18061fcf6 in datapath a8923666-d594-4b3c-acca-d8d2652ab2bc bound to our chassis#033[00m
Oct  2 08:52:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.764 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a8923666-d594-4b3c-acca-d8d2652ab2bc#033[00m
Oct  2 08:52:40 np0005466030 systemd-machined[188247]: New machine qemu-68-instance-0000008f.
Oct  2 08:52:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.785 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6c8cf521-3d1d-432f-86c9-1c2e969b01ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.788 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa8923666-d1 in ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:52:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.789 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa8923666-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:52:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.789 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[96acc015-c65a-484c-a877-c90bec368c3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:40 np0005466030 systemd[1]: Started Virtual Machine qemu-68-instance-0000008f.
Oct  2 08:52:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.790 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b866949c-c68d-40c4-a1b0-a7f85f5c1d19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:40 np0005466030 systemd-udevd[285659]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:52:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.809 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[df755030-2b4d-4c80-9701-5aea864e4820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:40 np0005466030 NetworkManager[44960]: <info>  [1759409560.8190] device (tap9e761925-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:52:40 np0005466030 NetworkManager[44960]: <info>  [1759409560.8201] device (tap9e761925-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:52:40 np0005466030 ovn_controller[129257]: 2025-10-02T12:52:40Z|00574|binding|INFO|Setting lport 9e761925-3065-4b15-ab37-4ce18061fcf6 ovn-installed in OVS
Oct  2 08:52:40 np0005466030 ovn_controller[129257]: 2025-10-02T12:52:40Z|00575|binding|INFO|Setting lport 9e761925-3065-4b15-ab37-4ce18061fcf6 up in Southbound
Oct  2 08:52:40 np0005466030 nova_compute[230518]: 2025-10-02 12:52:40.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.835 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a6587a8c-26e2-4f37-bb36-c22624e50987]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.869 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa4288c-c384-4ccf-81dd-243614edada3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:40 np0005466030 systemd-udevd[285662]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:52:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.879 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4346e85d-dd4c-443d-ae46-3479271da1ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:40 np0005466030 NetworkManager[44960]: <info>  [1759409560.8809] manager: (tapa8923666-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/266)
Oct  2 08:52:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.915 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ce420a5e-1e0e-4286-99c9-be30faf47203]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.918 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6d54beb9-f096-41af-b2a0-04336b167cf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:40 np0005466030 NetworkManager[44960]: <info>  [1759409560.9454] device (tapa8923666-d0): carrier: link connected
Oct  2 08:52:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.955 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f8b04b-d203-4ed1-8987-59a25c2aba3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.978 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7d828a-52bd-42ca-b6e2-705fdd360610]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8923666-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:ee:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742450, 'reachable_time': 19139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285690, 'error': None, 'target': 'ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.001 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[93ed1b26-0ddc-4750-9bbe-a092e83d5c6b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fece:ee99'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742450, 'tstamp': 742450}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285691, 'error': None, 'target': 'ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.026 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6cd189c1-aff3-4d31-9c26-8d99111294cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8923666-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:ee:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742450, 'reachable_time': 19139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285692, 'error': None, 'target': 'ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:41 np0005466030 nova_compute[230518]: 2025-10-02 12:52:41.055 2 DEBUG nova.network.neutron [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.073 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[09b52348-75d1-436a-9c8b-5f05f2c51dbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.160 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e25b1ec6-ce66-4124-8115-7c5e188ef43b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.161 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8923666-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.162 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.163 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8923666-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:52:41 np0005466030 nova_compute[230518]: 2025-10-02 12:52:41.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:41 np0005466030 NetworkManager[44960]: <info>  [1759409561.1657] manager: (tapa8923666-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Oct  2 08:52:41 np0005466030 kernel: tapa8923666-d0: entered promiscuous mode
Oct  2 08:52:41 np0005466030 nova_compute[230518]: 2025-10-02 12:52:41.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.168 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa8923666-d0, col_values=(('external_ids', {'iface-id': '4b02cca2-258b-4a05-9628-3add3aef7360'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:52:41 np0005466030 ovn_controller[129257]: 2025-10-02T12:52:41Z|00576|binding|INFO|Releasing lport 4b02cca2-258b-4a05-9628-3add3aef7360 from this chassis (sb_readonly=0)
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.172 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a8923666-d594-4b3c-acca-d8d2652ab2bc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a8923666-d594-4b3c-acca-d8d2652ab2bc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.180 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2814f436-4b0c-401b-b763-610d9ba06e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.181 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-a8923666-d594-4b3c-acca-d8d2652ab2bc
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/a8923666-d594-4b3c-acca-d8d2652ab2bc.pid.haproxy
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID a8923666-d594-4b3c-acca-d8d2652ab2bc
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:52:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.182 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc', 'env', 'PROCESS_TAG=haproxy-a8923666-d594-4b3c-acca-d8d2652ab2bc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a8923666-d594-4b3c-acca-d8d2652ab2bc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:52:41 np0005466030 nova_compute[230518]: 2025-10-02 12:52:41.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:41 np0005466030 nova_compute[230518]: 2025-10-02 12:52:41.499 2 DEBUG nova.compute.manager [None req-e2f4b6cf-6fca-483f-bbf1-5158f76b80f4 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Oct  2 08:52:41 np0005466030 nova_compute[230518]: 2025-10-02 12:52:41.607 2 DEBUG nova.network.neutron [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:52:41 np0005466030 nova_compute[230518]: 2025-10-02 12:52:41.630 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Releasing lock "refresh_cache-9668bd28-30e7-4dd0-87a8-6577135cc19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:52:41 np0005466030 nova_compute[230518]: 2025-10-02 12:52:41.631 2 DEBUG nova.compute.manager [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:52:41 np0005466030 podman[285764]: 2025-10-02 12:52:41.589491105 +0000 UTC m=+0.039529541 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:52:41 np0005466030 podman[285764]: 2025-10-02 12:52:41.887942408 +0000 UTC m=+0.337980874 container create 13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 08:52:41 np0005466030 nova_compute[230518]: 2025-10-02 12:52:41.924 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409561.923868, d70a747f-a75e-4341-89db-5953efdbbbd9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:52:41 np0005466030 nova_compute[230518]: 2025-10-02 12:52:41.925 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] VM Started (Lifecycle Event)#033[00m
Oct  2 08:52:41 np0005466030 systemd[1]: Started libpod-conmon-13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b.scope.
Oct  2 08:52:41 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:52:41 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a37905e41d2d492f51b4caba938f776c5363e981c3b279006e2868ae414335e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:52:41 np0005466030 nova_compute[230518]: 2025-10-02 12:52:41.990 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:52:41 np0005466030 nova_compute[230518]: 2025-10-02 12:52:41.994 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409561.9239721, d70a747f-a75e-4341-89db-5953efdbbbd9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:52:41 np0005466030 nova_compute[230518]: 2025-10-02 12:52:41.995 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:52:41 np0005466030 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Oct  2 08:52:41 np0005466030 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000008e.scope: Consumed 5.496s CPU time.
Oct  2 08:52:42 np0005466030 systemd-machined[188247]: Machine qemu-67-instance-0000008e terminated.
Oct  2 08:52:42 np0005466030 podman[285764]: 2025-10-02 12:52:42.012183696 +0000 UTC m=+0.462222152 container init 13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:52:42 np0005466030 podman[285764]: 2025-10-02 12:52:42.01807056 +0000 UTC m=+0.468108996 container start 13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.026 2 DEBUG nova.compute.manager [req-014941a5-1035-45fd-bd4d-f5114d34b80a req-d27eb29f-80e6-449e-866e-9bd24aa4f37b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-vif-plugged-9e761925-3065-4b15-ab37-4ce18061fcf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.026 2 DEBUG oslo_concurrency.lockutils [req-014941a5-1035-45fd-bd4d-f5114d34b80a req-d27eb29f-80e6-449e-866e-9bd24aa4f37b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.027 2 DEBUG oslo_concurrency.lockutils [req-014941a5-1035-45fd-bd4d-f5114d34b80a req-d27eb29f-80e6-449e-866e-9bd24aa4f37b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.027 2 DEBUG oslo_concurrency.lockutils [req-014941a5-1035-45fd-bd4d-f5114d34b80a req-d27eb29f-80e6-449e-866e-9bd24aa4f37b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.028 2 DEBUG nova.compute.manager [req-014941a5-1035-45fd-bd4d-f5114d34b80a req-d27eb29f-80e6-449e-866e-9bd24aa4f37b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Processing event network-vif-plugged-9e761925-3065-4b15-ab37-4ce18061fcf6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.029 2 DEBUG nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.033 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.037 2 INFO nova.virt.libvirt.driver [-] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Instance spawned successfully.#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.038 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:52:42 np0005466030 neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc[285779]: [NOTICE]   (285783) : New worker (285785) forked
Oct  2 08:52:42 np0005466030 neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc[285779]: [NOTICE]   (285783) : Loading success.
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.056 2 INFO nova.virt.libvirt.driver [-] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Instance destroyed successfully.#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.057 2 DEBUG nova.objects.instance [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lazy-loading 'resources' on Instance uuid 9668bd28-30e7-4dd0-87a8-6577135cc19b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.093 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.135 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409562.032501, d70a747f-a75e-4341-89db-5953efdbbbd9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.137 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.142 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.143 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.144 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.145 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.146 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.147 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.198 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.203 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.245 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.287 2 INFO nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Took 15.19 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.289 2 DEBUG nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:52:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:42.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.450 2 INFO nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Took 16.86 seconds to build instance.#033[00m
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:52:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:42.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:42 np0005466030 nova_compute[230518]: 2025-10-02 12:52:42.479 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:52:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:43 np0005466030 nova_compute[230518]: 2025-10-02 12:52:43.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:52:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:52:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:44.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:52:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:44.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:45 np0005466030 nova_compute[230518]: 2025-10-02 12:52:45.092 2 DEBUG nova.compute.manager [req-90917e1a-a3e6-4228-9c77-776150c9c166 req-597022bd-b102-49c9-86ad-2a09d3d36cde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-vif-plugged-9e761925-3065-4b15-ab37-4ce18061fcf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:52:45 np0005466030 nova_compute[230518]: 2025-10-02 12:52:45.093 2 DEBUG oslo_concurrency.lockutils [req-90917e1a-a3e6-4228-9c77-776150c9c166 req-597022bd-b102-49c9-86ad-2a09d3d36cde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:52:45 np0005466030 nova_compute[230518]: 2025-10-02 12:52:45.094 2 DEBUG oslo_concurrency.lockutils [req-90917e1a-a3e6-4228-9c77-776150c9c166 req-597022bd-b102-49c9-86ad-2a09d3d36cde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:52:45 np0005466030 nova_compute[230518]: 2025-10-02 12:52:45.094 2 DEBUG oslo_concurrency.lockutils [req-90917e1a-a3e6-4228-9c77-776150c9c166 req-597022bd-b102-49c9-86ad-2a09d3d36cde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:52:45 np0005466030 nova_compute[230518]: 2025-10-02 12:52:45.095 2 DEBUG nova.compute.manager [req-90917e1a-a3e6-4228-9c77-776150c9c166 req-597022bd-b102-49c9-86ad-2a09d3d36cde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] No waiting events found dispatching network-vif-plugged-9e761925-3065-4b15-ab37-4ce18061fcf6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:52:45 np0005466030 nova_compute[230518]: 2025-10-02 12:52:45.095 2 WARNING nova.compute.manager [req-90917e1a-a3e6-4228-9c77-776150c9c166 req-597022bd-b102-49c9-86ad-2a09d3d36cde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received unexpected event network-vif-plugged-9e761925-3065-4b15-ab37-4ce18061fcf6 for instance with vm_state active and task_state None.
Oct  2 08:52:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:52:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:46.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:52:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:46.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:46 np0005466030 nova_compute[230518]: 2025-10-02 12:52:46.677 2 INFO nova.virt.libvirt.driver [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Deleting instance files /var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b_del
Oct  2 08:52:46 np0005466030 nova_compute[230518]: 2025-10-02 12:52:46.678 2 INFO nova.virt.libvirt.driver [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Deletion of /var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b_del complete
Oct  2 08:52:46 np0005466030 nova_compute[230518]: 2025-10-02 12:52:46.903 2 INFO nova.compute.manager [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Took 5.27 seconds to destroy the instance on the hypervisor.
Oct  2 08:52:46 np0005466030 nova_compute[230518]: 2025-10-02 12:52:46.904 2 DEBUG oslo.service.loopingcall [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  2 08:52:46 np0005466030 nova_compute[230518]: 2025-10-02 12:52:46.904 2 DEBUG nova.compute.manager [-] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  2 08:52:46 np0005466030 nova_compute[230518]: 2025-10-02 12:52:46.905 2 DEBUG nova.network.neutron [-] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  2 08:52:47 np0005466030 nova_compute[230518]: 2025-10-02 12:52:47.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:52:48 np0005466030 nova_compute[230518]: 2025-10-02 12:52:48.122 2 DEBUG nova.network.neutron [-] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:52:48 np0005466030 nova_compute[230518]: 2025-10-02 12:52:48.140 2 DEBUG nova.network.neutron [-] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:52:48 np0005466030 nova_compute[230518]: 2025-10-02 12:52:48.200 2 INFO nova.compute.manager [-] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Took 1.30 seconds to deallocate network for instance.
Oct  2 08:52:48 np0005466030 nova_compute[230518]: 2025-10-02 12:52:48.326 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:52:48 np0005466030 nova_compute[230518]: 2025-10-02 12:52:48.327 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:52:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:48.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:48.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:48 np0005466030 nova_compute[230518]: 2025-10-02 12:52:48.506 2 DEBUG oslo_concurrency.processutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:52:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:52:48 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2136771465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:52:48 np0005466030 nova_compute[230518]: 2025-10-02 12:52:48.951 2 DEBUG oslo_concurrency.processutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:52:48 np0005466030 nova_compute[230518]: 2025-10-02 12:52:48.959 2 DEBUG nova.compute.provider_tree [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:52:48 np0005466030 nova_compute[230518]: 2025-10-02 12:52:48.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:52:49 np0005466030 nova_compute[230518]: 2025-10-02 12:52:49.020 2 DEBUG nova.scheduler.client.report [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:52:49 np0005466030 nova_compute[230518]: 2025-10-02 12:52:49.040 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:52:49 np0005466030 nova_compute[230518]: 2025-10-02 12:52:49.115 2 INFO nova.scheduler.client.report [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Deleted allocations for instance 9668bd28-30e7-4dd0-87a8-6577135cc19b
Oct  2 08:52:49 np0005466030 nova_compute[230518]: 2025-10-02 12:52:49.239 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "9668bd28-30e7-4dd0-87a8-6577135cc19b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:52:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:50.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:50.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:51 np0005466030 NetworkManager[44960]: <info>  [1759409571.7075] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Oct  2 08:52:51 np0005466030 NetworkManager[44960]: <info>  [1759409571.7084] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Oct  2 08:52:51 np0005466030 nova_compute[230518]: 2025-10-02 12:52:51.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:52:51 np0005466030 nova_compute[230518]: 2025-10-02 12:52:51.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:52:51 np0005466030 ovn_controller[129257]: 2025-10-02T12:52:51Z|00577|binding|INFO|Releasing lport 4b02cca2-258b-4a05-9628-3add3aef7360 from this chassis (sb_readonly=0)
Oct  2 08:52:51 np0005466030 nova_compute[230518]: 2025-10-02 12:52:51.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:52:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:52.334 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:52:52 np0005466030 nova_compute[230518]: 2025-10-02 12:52:52.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:52:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:52.335 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:52:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:52.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:52 np0005466030 nova_compute[230518]: 2025-10-02 12:52:52.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:52:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:52.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e325 e325: 3 total, 3 up, 3 in
Oct  2 08:52:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:53 np0005466030 nova_compute[230518]: 2025-10-02 12:52:53.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:52:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:54.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:54.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:54 np0005466030 podman[285839]: 2025-10-02 12:52:54.80935281 +0000 UTC m=+0.057176015 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true)
Oct  2 08:52:54 np0005466030 podman[285838]: 2025-10-02 12:52:54.839076083 +0000 UTC m=+0.087501057 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:52:55 np0005466030 nova_compute[230518]: 2025-10-02 12:52:55.723 2 DEBUG nova.compute.manager [req-d965c02a-eb2e-4927-9803-03135e65cf8d req-0d28004b-6218-4792-9762-36b27b57e33b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-changed-9e761925-3065-4b15-ab37-4ce18061fcf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:52:55 np0005466030 nova_compute[230518]: 2025-10-02 12:52:55.723 2 DEBUG nova.compute.manager [req-d965c02a-eb2e-4927-9803-03135e65cf8d req-0d28004b-6218-4792-9762-36b27b57e33b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Refreshing instance network info cache due to event network-changed-9e761925-3065-4b15-ab37-4ce18061fcf6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:52:55 np0005466030 nova_compute[230518]: 2025-10-02 12:52:55.723 2 DEBUG oslo_concurrency.lockutils [req-d965c02a-eb2e-4927-9803-03135e65cf8d req-0d28004b-6218-4792-9762-36b27b57e33b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:52:55 np0005466030 nova_compute[230518]: 2025-10-02 12:52:55.724 2 DEBUG oslo_concurrency.lockutils [req-d965c02a-eb2e-4927-9803-03135e65cf8d req-0d28004b-6218-4792-9762-36b27b57e33b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:52:55 np0005466030 nova_compute[230518]: 2025-10-02 12:52:55.724 2 DEBUG nova.network.neutron [req-d965c02a-eb2e-4927-9803-03135e65cf8d req-0d28004b-6218-4792-9762-36b27b57e33b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Refreshing network info cache for port 9e761925-3065-4b15-ab37-4ce18061fcf6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:52:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:56.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:52:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:56.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:52:57 np0005466030 nova_compute[230518]: 2025-10-02 12:52:57.247 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409562.0536482, 9668bd28-30e7-4dd0-87a8-6577135cc19b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:52:57 np0005466030 nova_compute[230518]: 2025-10-02 12:52:57.247 2 INFO nova.compute.manager [-] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] VM Stopped (Lifecycle Event)
Oct  2 08:52:57 np0005466030 nova_compute[230518]: 2025-10-02 12:52:57.287 2 DEBUG nova.compute.manager [None req-7d4a9db2-11fe-42b0-b7ef-dc5a09b20e0d - - - - - -] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:52:57 np0005466030 nova_compute[230518]: 2025-10-02 12:52:57.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:52:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:52:58.339 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:52:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:52:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:52:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:58.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:52:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:52:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:52:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:58.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:52:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e326 e326: 3 total, 3 up, 3 in
Oct  2 08:52:58 np0005466030 nova_compute[230518]: 2025-10-02 12:52:58.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:52:59 np0005466030 nova_compute[230518]: 2025-10-02 12:52:59.988 2 DEBUG nova.network.neutron [req-d965c02a-eb2e-4927-9803-03135e65cf8d req-0d28004b-6218-4792-9762-36b27b57e33b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updated VIF entry in instance network info cache for port 9e761925-3065-4b15-ab37-4ce18061fcf6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 08:52:59 np0005466030 nova_compute[230518]: 2025-10-02 12:52:59.988 2 DEBUG nova.network.neutron [req-d965c02a-eb2e-4927-9803-03135e65cf8d req-0d28004b-6218-4792-9762-36b27b57e33b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:53:00 np0005466030 nova_compute[230518]: 2025-10-02 12:53:00.041 2 DEBUG oslo_concurrency.lockutils [req-d965c02a-eb2e-4927-9803-03135e65cf8d req-0d28004b-6218-4792-9762-36b27b57e33b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:53:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e327 e327: 3 total, 3 up, 3 in
Oct  2 08:53:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:00.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:00.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:00 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:00Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:61:28:1d 10.100.0.4
Oct  2 08:53:00 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:00Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:28:1d 10.100.0.4
Oct  2 08:53:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:02.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:02 np0005466030 nova_compute[230518]: 2025-10-02 12:53:02.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:02.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:03 np0005466030 nova_compute[230518]: 2025-10-02 12:53:03.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:03 np0005466030 nova_compute[230518]: 2025-10-02 12:53:03.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:04.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:04.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:04 np0005466030 podman[285883]: 2025-10-02 12:53:04.831124229 +0000 UTC m=+0.070934707 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:53:04 np0005466030 podman[285882]: 2025-10-02 12:53:04.831136059 +0000 UTC m=+0.067334963 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct  2 08:53:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:06.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:06.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:06 np0005466030 nova_compute[230518]: 2025-10-02 12:53:06.512 2 INFO nova.compute.manager [None req-c05c6fdd-e355-49f8-9aff-c03ffcc51a7e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Get console output#033[00m
Oct  2 08:53:06 np0005466030 nova_compute[230518]: 2025-10-02 12:53:06.517 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 08:53:07 np0005466030 nova_compute[230518]: 2025-10-02 12:53:07.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e328 e328: 3 total, 3 up, 3 in
Oct  2 08:53:08 np0005466030 nova_compute[230518]: 2025-10-02 12:53:08.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:08 np0005466030 nova_compute[230518]: 2025-10-02 12:53:08.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:08 np0005466030 nova_compute[230518]: 2025-10-02 12:53:08.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:08 np0005466030 nova_compute[230518]: 2025-10-02 12:53:08.087 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:08 np0005466030 nova_compute[230518]: 2025-10-02 12:53:08.087 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:08 np0005466030 nova_compute[230518]: 2025-10-02 12:53:08.087 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:08 np0005466030 nova_compute[230518]: 2025-10-02 12:53:08.087 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:53:08 np0005466030 nova_compute[230518]: 2025-10-02 12:53:08.088 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:08.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:53:08 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2500340169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:53:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:08.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:08 np0005466030 nova_compute[230518]: 2025-10-02 12:53:08.506 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:08 np0005466030 nova_compute[230518]: 2025-10-02 12:53:08.602 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:53:08 np0005466030 nova_compute[230518]: 2025-10-02 12:53:08.603 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:53:08 np0005466030 nova_compute[230518]: 2025-10-02 12:53:08.766 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:53:08 np0005466030 nova_compute[230518]: 2025-10-02 12:53:08.767 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4199MB free_disk=20.80624771118164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:53:08 np0005466030 nova_compute[230518]: 2025-10-02 12:53:08.767 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:08 np0005466030 nova_compute[230518]: 2025-10-02 12:53:08.768 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:08 np0005466030 nova_compute[230518]: 2025-10-02 12:53:08.939 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance d70a747f-a75e-4341-89db-5953efdbbbd9 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:53:08 np0005466030 nova_compute[230518]: 2025-10-02 12:53:08.939 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:53:08 np0005466030 nova_compute[230518]: 2025-10-02 12:53:08.939 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:53:08 np0005466030 nova_compute[230518]: 2025-10-02 12:53:08.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:09 np0005466030 nova_compute[230518]: 2025-10-02 12:53:09.504 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e329 e329: 3 total, 3 up, 3 in
Oct  2 08:53:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:53:09 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/67641422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:53:09 np0005466030 nova_compute[230518]: 2025-10-02 12:53:09.940 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:09 np0005466030 nova_compute[230518]: 2025-10-02 12:53:09.945 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:53:09 np0005466030 nova_compute[230518]: 2025-10-02 12:53:09.965 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:53:09 np0005466030 nova_compute[230518]: 2025-10-02 12:53:09.993 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:53:09 np0005466030 nova_compute[230518]: 2025-10-02 12:53:09.994 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:53:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:10.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:53:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:10.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:10 np0005466030 nova_compute[230518]: 2025-10-02 12:53:10.995 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:10 np0005466030 nova_compute[230518]: 2025-10-02 12:53:10.995 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:10 np0005466030 nova_compute[230518]: 2025-10-02 12:53:10.996 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:53:12 np0005466030 nova_compute[230518]: 2025-10-02 12:53:12.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:12.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:53:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:12.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:53:12 np0005466030 nova_compute[230518]: 2025-10-02 12:53:12.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:13 np0005466030 nova_compute[230518]: 2025-10-02 12:53:13.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:13 np0005466030 nova_compute[230518]: 2025-10-02 12:53:13.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:14.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:14.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:15 np0005466030 nova_compute[230518]: 2025-10-02 12:53:15.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:53:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:53:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:53:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:16.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:16 np0005466030 nova_compute[230518]: 2025-10-02 12:53:16.433 2 DEBUG oslo_concurrency.lockutils [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "interface-d70a747f-a75e-4341-89db-5953efdbbbd9-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:16 np0005466030 nova_compute[230518]: 2025-10-02 12:53:16.433 2 DEBUG oslo_concurrency.lockutils [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "interface-d70a747f-a75e-4341-89db-5953efdbbbd9-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:16 np0005466030 nova_compute[230518]: 2025-10-02 12:53:16.434 2 DEBUG nova.objects.instance [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'flavor' on Instance uuid d70a747f-a75e-4341-89db-5953efdbbbd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:16.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:17 np0005466030 nova_compute[230518]: 2025-10-02 12:53:17.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:18 np0005466030 nova_compute[230518]: 2025-10-02 12:53:18.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:18 np0005466030 nova_compute[230518]: 2025-10-02 12:53:18.219 2 DEBUG nova.objects.instance [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_requests' on Instance uuid d70a747f-a75e-4341-89db-5953efdbbbd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:18 np0005466030 nova_compute[230518]: 2025-10-02 12:53:18.245 2 DEBUG nova.network.neutron [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:53:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:18.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:18.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:19 np0005466030 nova_compute[230518]: 2025-10-02 12:53:19.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:19 np0005466030 nova_compute[230518]: 2025-10-02 12:53:19.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:53:19 np0005466030 nova_compute[230518]: 2025-10-02 12:53:19.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:53:19 np0005466030 nova_compute[230518]: 2025-10-02 12:53:19.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:53:19 np0005466030 nova_compute[230518]: 2025-10-02 12:53:19.306 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:53:19 np0005466030 nova_compute[230518]: 2025-10-02 12:53:19.306 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:53:19 np0005466030 nova_compute[230518]: 2025-10-02 12:53:19.307 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:53:19 np0005466030 nova_compute[230518]: 2025-10-02 12:53:19.307 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d70a747f-a75e-4341-89db-5953efdbbbd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:19 np0005466030 nova_compute[230518]: 2025-10-02 12:53:19.330 2 DEBUG nova.policy [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:53:19 np0005466030 nova_compute[230518]: 2025-10-02 12:53:19.675 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:19 np0005466030 nova_compute[230518]: 2025-10-02 12:53:19.676 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:19 np0005466030 nova_compute[230518]: 2025-10-02 12:53:19.676 2 INFO nova.compute.manager [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Unshelving#033[00m
Oct  2 08:53:19 np0005466030 nova_compute[230518]: 2025-10-02 12:53:19.898 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:19 np0005466030 nova_compute[230518]: 2025-10-02 12:53:19.898 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:19 np0005466030 nova_compute[230518]: 2025-10-02 12:53:19.903 2 DEBUG nova.objects.instance [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'pci_requests' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:19 np0005466030 nova_compute[230518]: 2025-10-02 12:53:19.924 2 DEBUG nova.objects.instance [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'numa_topology' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:20 np0005466030 nova_compute[230518]: 2025-10-02 12:53:20.030 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:53:20 np0005466030 nova_compute[230518]: 2025-10-02 12:53:20.030 2 INFO nova.compute.claims [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:53:20 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:20Z|00578|binding|INFO|Releasing lport 4b02cca2-258b-4a05-9628-3add3aef7360 from this chassis (sb_readonly=0)
Oct  2 08:53:20 np0005466030 nova_compute[230518]: 2025-10-02 12:53:20.246 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:20 np0005466030 nova_compute[230518]: 2025-10-02 12:53:20.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:20 np0005466030 nova_compute[230518]: 2025-10-02 12:53:20.411 2 DEBUG nova.network.neutron [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Successfully created port: f47045d5-0c1f-4e24-be1d-a8f054763926 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:53:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:53:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:20.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:53:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:20.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:53:20 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1354063422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:53:20 np0005466030 nova_compute[230518]: 2025-10-02 12:53:20.684 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:20 np0005466030 nova_compute[230518]: 2025-10-02 12:53:20.690 2 DEBUG nova.compute.provider_tree [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:53:20 np0005466030 nova_compute[230518]: 2025-10-02 12:53:20.716 2 DEBUG nova.scheduler.client.report [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:53:20 np0005466030 nova_compute[230518]: 2025-10-02 12:53:20.747 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:21 np0005466030 nova_compute[230518]: 2025-10-02 12:53:21.148 2 INFO nova.network.neutron [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updating port d3265627-45dd-403c-990b-451562559afe with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Oct  2 08:53:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e330 e330: 3 total, 3 up, 3 in
Oct  2 08:53:21 np0005466030 nova_compute[230518]: 2025-10-02 12:53:21.689 2 DEBUG nova.network.neutron [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Successfully updated port: f47045d5-0c1f-4e24-be1d-a8f054763926 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:53:21 np0005466030 nova_compute[230518]: 2025-10-02 12:53:21.705 2 DEBUG oslo_concurrency.lockutils [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:53:21 np0005466030 nova_compute[230518]: 2025-10-02 12:53:21.940 2 DEBUG nova.compute.manager [req-84a78d88-963c-4144-b617-7aee7169a2d0 req-29aa4e3b-a5b8-4665-86e7-f769aba9e179 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-changed-f47045d5-0c1f-4e24-be1d-a8f054763926 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:21 np0005466030 nova_compute[230518]: 2025-10-02 12:53:21.941 2 DEBUG nova.compute.manager [req-84a78d88-963c-4144-b617-7aee7169a2d0 req-29aa4e3b-a5b8-4665-86e7-f769aba9e179 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Refreshing instance network info cache due to event network-changed-f47045d5-0c1f-4e24-be1d-a8f054763926. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:53:21 np0005466030 nova_compute[230518]: 2025-10-02 12:53:21.941 2 DEBUG oslo_concurrency.lockutils [req-84a78d88-963c-4144-b617-7aee7169a2d0 req-29aa4e3b-a5b8-4665-86e7-f769aba9e179 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:53:22 np0005466030 nova_compute[230518]: 2025-10-02 12:53:22.245 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:53:22 np0005466030 nova_compute[230518]: 2025-10-02 12:53:22.257 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:53:22 np0005466030 nova_compute[230518]: 2025-10-02 12:53:22.258 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:53:22 np0005466030 nova_compute[230518]: 2025-10-02 12:53:22.258 2 DEBUG oslo_concurrency.lockutils [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:53:22 np0005466030 nova_compute[230518]: 2025-10-02 12:53:22.258 2 DEBUG nova.network.neutron [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:53:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:22.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:22 np0005466030 nova_compute[230518]: 2025-10-02 12:53:22.576 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:53:22 np0005466030 nova_compute[230518]: 2025-10-02 12:53:22.576 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquired lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:53:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:22.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:22 np0005466030 nova_compute[230518]: 2025-10-02 12:53:22.577 2 DEBUG nova.network.neutron [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:53:22 np0005466030 nova_compute[230518]: 2025-10-02 12:53:22.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:22 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:53:22 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:53:22 np0005466030 nova_compute[230518]: 2025-10-02 12:53:22.721 2 DEBUG nova.compute.manager [req-c1f4b934-c379-45e8-91ec-8e4a8c9368a2 req-5df1b7bf-d15b-49f7-843b-ea15a35bced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-changed-d3265627-45dd-403c-990b-451562559afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:22 np0005466030 nova_compute[230518]: 2025-10-02 12:53:22.722 2 DEBUG nova.compute.manager [req-c1f4b934-c379-45e8-91ec-8e4a8c9368a2 req-5df1b7bf-d15b-49f7-843b-ea15a35bced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Refreshing instance network info cache due to event network-changed-d3265627-45dd-403c-990b-451562559afe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:53:22 np0005466030 nova_compute[230518]: 2025-10-02 12:53:22.723 2 DEBUG oslo_concurrency.lockutils [req-c1f4b934-c379-45e8-91ec-8e4a8c9368a2 req-5df1b7bf-d15b-49f7-843b-ea15a35bced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:53:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:24 np0005466030 nova_compute[230518]: 2025-10-02 12:53:24.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:24.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:24.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:25 np0005466030 nova_compute[230518]: 2025-10-02 12:53:25.262 2 DEBUG nova.network.neutron [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updating instance_info_cache with network_info: [{"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:53:25 np0005466030 nova_compute[230518]: 2025-10-02 12:53:25.284 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Releasing lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:53:25 np0005466030 nova_compute[230518]: 2025-10-02 12:53:25.285 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:53:25 np0005466030 nova_compute[230518]: 2025-10-02 12:53:25.286 2 INFO nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Creating image(s)#033[00m
Oct  2 08:53:25 np0005466030 nova_compute[230518]: 2025-10-02 12:53:25.308 2 DEBUG nova.storage.rbd_utils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:53:25 np0005466030 nova_compute[230518]: 2025-10-02 12:53:25.311 2 DEBUG nova.objects.instance [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:25 np0005466030 nova_compute[230518]: 2025-10-02 12:53:25.312 2 DEBUG oslo_concurrency.lockutils [req-c1f4b934-c379-45e8-91ec-8e4a8c9368a2 req-5df1b7bf-d15b-49f7-843b-ea15a35bced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:53:25 np0005466030 nova_compute[230518]: 2025-10-02 12:53:25.312 2 DEBUG nova.network.neutron [req-c1f4b934-c379-45e8-91ec-8e4a8c9368a2 req-5df1b7bf-d15b-49f7-843b-ea15a35bced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Refreshing network info cache for port d3265627-45dd-403c-990b-451562559afe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:53:25 np0005466030 nova_compute[230518]: 2025-10-02 12:53:25.352 2 DEBUG nova.storage.rbd_utils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:53:25 np0005466030 nova_compute[230518]: 2025-10-02 12:53:25.378 2 DEBUG nova.storage.rbd_utils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:53:25 np0005466030 nova_compute[230518]: 2025-10-02 12:53:25.381 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "ddf24850ac6bde3d49ac6b6be0dd633aca2f36fd" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:25 np0005466030 nova_compute[230518]: 2025-10-02 12:53:25.382 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "ddf24850ac6bde3d49ac6b6be0dd633aca2f36fd" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:25 np0005466030 nova_compute[230518]: 2025-10-02 12:53:25.789 2 DEBUG nova.virt.libvirt.imagebackend [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/47596e8e-a667-4ff8-bd1f-3f35c36243ae/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/47596e8e-a667-4ff8-bd1f-3f35c36243ae/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 08:53:25 np0005466030 podman[286225]: 2025-10-02 12:53:25.877228055 +0000 UTC m=+0.118669333 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  2 08:53:25 np0005466030 podman[286224]: 2025-10-02 12:53:25.890810591 +0000 UTC m=+0.139034963 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true)
Oct  2 08:53:25 np0005466030 nova_compute[230518]: 2025-10-02 12:53:25.891 2 DEBUG nova.virt.libvirt.imagebackend [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Selected location: {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/47596e8e-a667-4ff8-bd1f-3f35c36243ae/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  2 08:53:25 np0005466030 nova_compute[230518]: 2025-10-02 12:53:25.891 2 DEBUG nova.storage.rbd_utils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] cloning images/47596e8e-a667-4ff8-bd1f-3f35c36243ae@snap to None/a1440a2f-0663-451f-bef5-bbece30acc40_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 08:53:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:25.950 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:25.950 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:25.951 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.243 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "ddf24850ac6bde3d49ac6b6be0dd633aca2f36fd" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.333 2 DEBUG nova.network.neutron [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.376 2 DEBUG oslo_concurrency.lockutils [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.377 2 DEBUG oslo_concurrency.lockutils [req-84a78d88-963c-4144-b617-7aee7169a2d0 req-29aa4e3b-a5b8-4665-86e7-f769aba9e179 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.377 2 DEBUG nova.network.neutron [req-84a78d88-963c-4144-b617-7aee7169a2d0 req-29aa4e3b-a5b8-4665-86e7-f769aba9e179 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Refreshing network info cache for port f47045d5-0c1f-4e24-be1d-a8f054763926 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.380 2 DEBUG nova.virt.libvirt.vif [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1834334084',display_name='tempest-TestNetworkBasicOps-server-1834334084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1834334084',id=143,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBWzyEyTLwn/OnMkL7XEVYTgkdobvFvcDEiRuV0NOSu0/Vc6+w7CYW/OJQq8xHJ7yByGK0zJNMNYx8BDnEAMNmh8dLyyLFr5uFvHFoK31s13NXGnnrP3EXSfoIgrfk2ieg==',key_name='tempest-TestNetworkBasicOps-1099517484',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:52:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-oim7lxfm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:52:42Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d70a747f-a75e-4341-89db-5953efdbbbd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.381 2 DEBUG nova.network.os_vif_util [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.381 2 DEBUG nova.network.os_vif_util [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.382 2 DEBUG os_vif [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.382 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.383 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.389 2 DEBUG nova.objects.instance [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'migration_context' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.394 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf47045d5-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.395 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf47045d5-0c, col_values=(('external_ids', {'iface-id': 'f47045d5-0c1f-4e24-be1d-a8f054763926', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:ea:d4', 'vm-uuid': 'd70a747f-a75e-4341-89db-5953efdbbbd9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:26 np0005466030 NetworkManager[44960]: <info>  [1759409606.3981] manager: (tapf47045d5-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:53:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:26.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.451 2 INFO os_vif [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c')#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.453 2 DEBUG nova.virt.libvirt.vif [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1834334084',display_name='tempest-TestNetworkBasicOps-server-1834334084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1834334084',id=143,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBWzyEyTLwn/OnMkL7XEVYTgkdobvFvcDEiRuV0NOSu0/Vc6+w7CYW/OJQq8xHJ7yByGK0zJNMNYx8BDnEAMNmh8dLyyLFr5uFvHFoK31s13NXGnnrP3EXSfoIgrfk2ieg==',key_name='tempest-TestNetworkBasicOps-1099517484',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:52:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-oim7lxfm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:52:42Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d70a747f-a75e-4341-89db-5953efdbbbd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.453 2 DEBUG nova.network.os_vif_util [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.454 2 DEBUG nova.network.os_vif_util [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.462 2 DEBUG nova.storage.rbd_utils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] flattening vms/a1440a2f-0663-451f-bef5-bbece30acc40_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.525 2 DEBUG nova.virt.libvirt.guest [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] attach device xml: <interface type="ethernet">
Oct  2 08:53:26 np0005466030 nova_compute[230518]:  <mac address="fa:16:3e:13:ea:d4"/>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:  <model type="virtio"/>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:  <mtu size="1442"/>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:  <target dev="tapf47045d5-0c"/>
Oct  2 08:53:26 np0005466030 nova_compute[230518]: </interface>
Oct  2 08:53:26 np0005466030 nova_compute[230518]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 08:53:26 np0005466030 kernel: tapf47045d5-0c: entered promiscuous mode
Oct  2 08:53:26 np0005466030 NetworkManager[44960]: <info>  [1759409606.5424] manager: (tapf47045d5-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/271)
Oct  2 08:53:26 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:26Z|00579|binding|INFO|Claiming lport f47045d5-0c1f-4e24-be1d-a8f054763926 for this chassis.
Oct  2 08:53:26 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:26Z|00580|binding|INFO|f47045d5-0c1f-4e24-be1d-a8f054763926: Claiming fa:16:3e:13:ea:d4 10.100.0.25
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.557 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:ea:d4 10.100.0.25'], port_security=['fa:16:3e:13:ea:d4 10.100.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/28', 'neutron:device_id': 'd70a747f-a75e-4341-89db-5953efdbbbd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8efeaa72-f872-4ae7-abf0-187d9b448a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f9e51548-d675-4462-aaf0-72519e827667', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2fdae84-979e-43d0-b38a-d190b914304f, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=f47045d5-0c1f-4e24-be1d-a8f054763926) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.558 138374 INFO neutron.agent.ovn.metadata.agent [-] Port f47045d5-0c1f-4e24-be1d-a8f054763926 in datapath 8efeaa72-f872-4ae7-abf0-187d9b448a81 bound to our chassis#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.559 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8efeaa72-f872-4ae7-abf0-187d9b448a81#033[00m
Oct  2 08:53:26 np0005466030 systemd-udevd[286433]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.577 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[350dd922-df28-4e5d-a172-877806fff22e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.578 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8efeaa72-f1 in ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:53:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.581 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8efeaa72-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.581 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[57b1d45c-d614-49af-901d-85a1c0acde28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:53:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:26.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.584 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c275d54b-064b-4915-86cc-af73a34abe7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:26 np0005466030 NetworkManager[44960]: <info>  [1759409606.5908] device (tapf47045d5-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:53:26 np0005466030 NetworkManager[44960]: <info>  [1759409606.5920] device (tapf47045d5-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.607 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[9abef24a-c7b0-4c96-b005-214f4e395ef0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:26 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:26Z|00581|binding|INFO|Setting lport f47045d5-0c1f-4e24-be1d-a8f054763926 ovn-installed in OVS
Oct  2 08:53:26 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:26Z|00582|binding|INFO|Setting lport f47045d5-0c1f-4e24-be1d-a8f054763926 up in Southbound
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.628 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e4a4f43d-bd4e-4ddb-a4a3-f54bfbe3784e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.663 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[63162d6c-7789-4571-8288-fe96beccacd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.668 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fc56a1d1-18f0-4a9c-a433-1068ab653443]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:26 np0005466030 NetworkManager[44960]: <info>  [1759409606.6701] manager: (tap8efeaa72-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/272)
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.694 2 DEBUG nova.virt.libvirt.driver [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.696 2 DEBUG nova.virt.libvirt.driver [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.696 2 DEBUG nova.virt.libvirt.driver [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:61:28:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.696 2 DEBUG nova.virt.libvirt.driver [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:13:ea:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.714 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe73e83-5878-46cb-b4c5-9b523b95a928]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.718 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[34b47925-efce-44e8-86d2-88bc5c901ad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:26 np0005466030 NetworkManager[44960]: <info>  [1759409606.7515] device (tap8efeaa72-f0): carrier: link connected
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.754 2 DEBUG nova.virt.libvirt.guest [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:53:26 np0005466030 nova_compute[230518]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:  <nova:name>tempest-TestNetworkBasicOps-server-1834334084</nova:name>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:  <nova:creationTime>2025-10-02 12:53:26</nova:creationTime>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:  <nova:flavor name="m1.nano">
Oct  2 08:53:26 np0005466030 nova_compute[230518]:    <nova:memory>128</nova:memory>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:    <nova:disk>1</nova:disk>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:    <nova:swap>0</nova:swap>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:  </nova:flavor>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:  <nova:owner>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:    <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:    <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:  </nova:owner>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:  <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:  <nova:ports>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:    <nova:port uuid="9e761925-3065-4b15-ab37-4ce18061fcf6">
Oct  2 08:53:26 np0005466030 nova_compute[230518]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:    </nova:port>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:    <nova:port uuid="f47045d5-0c1f-4e24-be1d-a8f054763926">
Oct  2 08:53:26 np0005466030 nova_compute[230518]:      <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:    </nova:port>
Oct  2 08:53:26 np0005466030 nova_compute[230518]:  </nova:ports>
Oct  2 08:53:26 np0005466030 nova_compute[230518]: </nova:instance>
Oct  2 08:53:26 np0005466030 nova_compute[230518]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.761 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5c11bf2c-949a-43fe-947e-3f22a39b29c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.788 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1172fd9f-aa32-4530-9bbd-4ae5fa3433a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8efeaa72-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:bb:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747030, 'reachable_time': 43432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286460, 'error': None, 'target': 'ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.807 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2642041f-7748-4bf4-911b-3cf96249ec3c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe88:bb1d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747030, 'tstamp': 747030}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286461, 'error': None, 'target': 'ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.812 2 DEBUG oslo_concurrency.lockutils [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "interface-d70a747f-a75e-4341-89db-5953efdbbbd9-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.378s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.835 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[40f6615a-8728-4fb0-bee4-3f20b4925a82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8efeaa72-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:bb:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747030, 'reachable_time': 43432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286462, 'error': None, 'target': 'ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.864 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[703449ce-3fa6-4de6-b32f-a04a44cc7d66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.927 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7e76522f-2407-4aac-91d2-aa54da98dfd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.929 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8efeaa72-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.929 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.929 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8efeaa72-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.957 2 DEBUG nova.compute.manager [req-3fa1f495-04a7-4b68-9aae-cf0dc79fb9b7 req-a6c9c404-d00d-49c3-905f-6e6ab53e5e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-vif-plugged-f47045d5-0c1f-4e24-be1d-a8f054763926 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.958 2 DEBUG oslo_concurrency.lockutils [req-3fa1f495-04a7-4b68-9aae-cf0dc79fb9b7 req-a6c9c404-d00d-49c3-905f-6e6ab53e5e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.958 2 DEBUG oslo_concurrency.lockutils [req-3fa1f495-04a7-4b68-9aae-cf0dc79fb9b7 req-a6c9c404-d00d-49c3-905f-6e6ab53e5e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.958 2 DEBUG oslo_concurrency.lockutils [req-3fa1f495-04a7-4b68-9aae-cf0dc79fb9b7 req-a6c9c404-d00d-49c3-905f-6e6ab53e5e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.959 2 DEBUG nova.compute.manager [req-3fa1f495-04a7-4b68-9aae-cf0dc79fb9b7 req-a6c9c404-d00d-49c3-905f-6e6ab53e5e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] No waiting events found dispatching network-vif-plugged-f47045d5-0c1f-4e24-be1d-a8f054763926 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.959 2 WARNING nova.compute.manager [req-3fa1f495-04a7-4b68-9aae-cf0dc79fb9b7 req-a6c9c404-d00d-49c3-905f-6e6ab53e5e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received unexpected event network-vif-plugged-f47045d5-0c1f-4e24-be1d-a8f054763926 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:26 np0005466030 NetworkManager[44960]: <info>  [1759409606.9663] manager: (tap8efeaa72-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Oct  2 08:53:26 np0005466030 kernel: tap8efeaa72-f0: entered promiscuous mode
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.970 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8efeaa72-f0, col_values=(('external_ids', {'iface-id': '3e18aee7-a834-4412-85b0-a759d74fd965'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:26 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:26Z|00583|binding|INFO|Releasing lport 3e18aee7-a834-4412-85b0-a759d74fd965 from this chassis (sb_readonly=0)
Oct  2 08:53:26 np0005466030 nova_compute[230518]: 2025-10-02 12:53:26.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.996 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8efeaa72-f872-4ae7-abf0-187d9b448a81.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8efeaa72-f872-4ae7-abf0-187d9b448a81.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.997 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[43c6a78a-d1b4-4078-bd57-2736d5018ef6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.998 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-8efeaa72-f872-4ae7-abf0-187d9b448a81
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/8efeaa72-f872-4ae7-abf0-187d9b448a81.pid.haproxy
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 8efeaa72-f872-4ae7-abf0-187d9b448a81
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:53:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.999 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81', 'env', 'PROCESS_TAG=haproxy-8efeaa72-f872-4ae7-abf0-187d9b448a81', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8efeaa72-f872-4ae7-abf0-187d9b448a81.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:53:27 np0005466030 podman[286494]: 2025-10-02 12:53:27.346984365 +0000 UTC m=+0.025093147 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:53:27 np0005466030 nova_compute[230518]: 2025-10-02 12:53:27.636 2 DEBUG nova.network.neutron [req-c1f4b934-c379-45e8-91ec-8e4a8c9368a2 req-5df1b7bf-d15b-49f7-843b-ea15a35bced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updated VIF entry in instance network info cache for port d3265627-45dd-403c-990b-451562559afe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:53:27 np0005466030 nova_compute[230518]: 2025-10-02 12:53:27.637 2 DEBUG nova.network.neutron [req-c1f4b934-c379-45e8-91ec-8e4a8c9368a2 req-5df1b7bf-d15b-49f7-843b-ea15a35bced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updating instance_info_cache with network_info: [{"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:53:27 np0005466030 nova_compute[230518]: 2025-10-02 12:53:27.658 2 DEBUG oslo_concurrency.lockutils [req-c1f4b934-c379-45e8-91ec-8e4a8c9368a2 req-5df1b7bf-d15b-49f7-843b-ea15a35bced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:53:28 np0005466030 podman[286494]: 2025-10-02 12:53:28.115215017 +0000 UTC m=+0.793323719 container create fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.162 2 DEBUG nova.network.neutron [req-84a78d88-963c-4144-b617-7aee7169a2d0 req-29aa4e3b-a5b8-4665-86e7-f769aba9e179 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updated VIF entry in instance network info cache for port f47045d5-0c1f-4e24-be1d-a8f054763926. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.162 2 DEBUG nova.network.neutron [req-84a78d88-963c-4144-b617-7aee7169a2d0 req-29aa4e3b-a5b8-4665-86e7-f769aba9e179 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.192 2 DEBUG oslo_concurrency.lockutils [req-84a78d88-963c-4144-b617-7aee7169a2d0 req-29aa4e3b-a5b8-4665-86e7-f769aba9e179 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:53:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:28Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:13:ea:d4 10.100.0.25
Oct  2 08:53:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:28Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:13:ea:d4 10.100.0.25
Oct  2 08:53:28 np0005466030 systemd[1]: Started libpod-conmon-fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280.scope.
Oct  2 08:53:28 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:53:28 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ba3fd6480f261d514725bedcd9e73ed5ac3464d3e9ec307380fb269a44db53/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:53:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:28.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:28 np0005466030 podman[286494]: 2025-10-02 12:53:28.463929117 +0000 UTC m=+1.142037849 container init fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.466 2 DEBUG oslo_concurrency.lockutils [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "interface-d70a747f-a75e-4341-89db-5953efdbbbd9-f47045d5-0c1f-4e24-be1d-a8f054763926" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.468 2 DEBUG oslo_concurrency.lockutils [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "interface-d70a747f-a75e-4341-89db-5953efdbbbd9-f47045d5-0c1f-4e24-be1d-a8f054763926" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:28 np0005466030 podman[286494]: 2025-10-02 12:53:28.470139042 +0000 UTC m=+1.148247764 container start fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.489 2 DEBUG nova.objects.instance [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'flavor' on Instance uuid d70a747f-a75e-4341-89db-5953efdbbbd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:28 np0005466030 neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81[286510]: [NOTICE]   (286514) : New worker (286516) forked
Oct  2 08:53:28 np0005466030 neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81[286510]: [NOTICE]   (286514) : Loading success.
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.509 2 DEBUG nova.virt.libvirt.vif [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1834334084',display_name='tempest-TestNetworkBasicOps-server-1834334084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1834334084',id=143,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBWzyEyTLwn/OnMkL7XEVYTgkdobvFvcDEiRuV0NOSu0/Vc6+w7CYW/OJQq8xHJ7yByGK0zJNMNYx8BDnEAMNmh8dLyyLFr5uFvHFoK31s13NXGnnrP3EXSfoIgrfk2ieg==',key_name='tempest-TestNetworkBasicOps-1099517484',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:52:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-oim7lxfm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:52:42Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d70a747f-a75e-4341-89db-5953efdbbbd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.510 2 DEBUG nova.network.os_vif_util [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.511 2 DEBUG nova.network.os_vif_util [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.517 2 DEBUG nova.virt.libvirt.guest [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.520 2 DEBUG nova.virt.libvirt.guest [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.522 2 DEBUG nova.virt.libvirt.driver [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Attempting to detach device tapf47045d5-0c from instance d70a747f-a75e-4341-89db-5953efdbbbd9 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.522 2 DEBUG nova.virt.libvirt.guest [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] detach device xml: <interface type="ethernet">
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <mac address="fa:16:3e:13:ea:d4"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <model type="virtio"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <mtu size="1442"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <target dev="tapf47045d5-0c"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]: </interface>
Oct  2 08:53:28 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 08:53:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:28.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.711 2 DEBUG nova.virt.libvirt.guest [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.714 2 DEBUG nova.virt.libvirt.guest [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface>not found in domain: <domain type='kvm' id='68'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <name>instance-0000008f</name>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <uuid>d70a747f-a75e-4341-89db-5953efdbbbd9</uuid>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:name>tempest-TestNetworkBasicOps-server-1834334084</nova:name>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:creationTime>2025-10-02 12:53:26</nova:creationTime>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:flavor name="m1.nano">
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:memory>128</nova:memory>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:disk>1</nova:disk>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:swap>0</nova:swap>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </nova:flavor>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:owner>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </nova:owner>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:ports>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:port uuid="9e761925-3065-4b15-ab37-4ce18061fcf6">
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </nova:port>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:port uuid="f47045d5-0c1f-4e24-be1d-a8f054763926">
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </nova:port>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </nova:ports>
Oct  2 08:53:28 np0005466030 nova_compute[230518]: </nova:instance>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <memory unit='KiB'>131072</memory>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <resource>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <partition>/machine</partition>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </resource>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <sysinfo type='smbios'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <entry name='serial'>d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <entry name='uuid'>d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <boot dev='hd'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <smbios mode='sysinfo'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <vmcoreinfo state='on'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <feature policy='require' name='x2apic'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <feature policy='require' name='vme'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <clock offset='utc'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <timer name='hpet' present='no'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <on_reboot>restart</on_reboot>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <on_crash>destroy</on_crash>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <disk type='network' device='disk'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <auth username='openstack'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:        <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <source protocol='rbd' name='vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk' index='2'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target dev='vda' bus='virtio'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='virtio-disk0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <disk type='network' device='cdrom'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <auth username='openstack'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:        <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <source protocol='rbd' name='vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config' index='1'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target dev='sda' bus='sata'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <readonly/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='sata0-0-0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pcie.0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='1' port='0x10'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.1'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='2' port='0x11'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.2'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='3' port='0x12'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.3'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='4' port='0x13'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.4'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='5' port='0x14'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.5'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='6' port='0x15'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.6'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='7' port='0x16'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.7'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='8' port='0x17'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.8'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='9' port='0x18'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.9'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='10' port='0x19'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.10'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='11' port='0x1a'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.11'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='12' port='0x1b'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.12'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='13' port='0x1c'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.13'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='14' port='0x1d'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.14'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='15' port='0x1e'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.15'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='16' port='0x1f'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.16'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='17' port='0x20'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.17'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='18' port='0x21'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.18'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='19' port='0x22'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.19'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='20' port='0x23'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.20'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='21' port='0x24'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.21'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='22' port='0x25'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.22'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='23' port='0x26'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.23'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='24' port='0x27'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.24'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='25' port='0x28'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.25'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-pci-bridge'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.26'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='usb'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='sata' index='0'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='ide'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <interface type='ethernet'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <mac address='fa:16:3e:61:28:1d'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target dev='tap9e761925-30'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model type='virtio'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <mtu size='1442'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='net0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <interface type='ethernet'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <mac address='fa:16:3e:13:ea:d4'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target dev='tapf47045d5-0c'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model type='virtio'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <mtu size='1442'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='net1'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <serial type='pty'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <source path='/dev/pts/1'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <log file='/var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log' append='off'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target type='isa-serial' port='0'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:        <model name='isa-serial'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      </target>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='serial0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <console type='pty' tty='/dev/pts/1'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <source path='/dev/pts/1'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <log file='/var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log' append='off'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target type='serial' port='0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='serial0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </console>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <input type='tablet' bus='usb'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='input0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </input>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <input type='mouse' bus='ps2'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='input1'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </input>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <input type='keyboard' bus='ps2'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='input2'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </input>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <listen type='address' address='::0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </graphics>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <audio id='1' type='none'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='video0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <watchdog model='itco' action='reset'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='watchdog0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </watchdog>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <memballoon model='virtio'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <stats period='10'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='balloon0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <rng model='virtio'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='rng0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <label>system_u:system_r:svirt_t:s0:c65,c245</label>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c65,c245</imagelabel>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </seclabel>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <label>+107:+107</label>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </seclabel>
Oct  2 08:53:28 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:53:28 np0005466030 nova_compute[230518]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.715 2 INFO nova.virt.libvirt.driver [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully detached device tapf47045d5-0c from instance d70a747f-a75e-4341-89db-5953efdbbbd9 from the persistent domain config.
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.715 2 DEBUG nova.virt.libvirt.driver [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] (1/8): Attempting to detach device tapf47045d5-0c with device alias net1 from instance d70a747f-a75e-4341-89db-5953efdbbbd9 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.715 2 DEBUG nova.virt.libvirt.guest [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] detach device xml: <interface type="ethernet">
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <mac address="fa:16:3e:13:ea:d4"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <model type="virtio"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <mtu size="1442"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <target dev="tapf47045d5-0c"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]: </interface>
Oct  2 08:53:28 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct  2 08:53:28 np0005466030 kernel: tapf47045d5-0c (unregistering): left promiscuous mode
Oct  2 08:53:28 np0005466030 NetworkManager[44960]: <info>  [1759409608.8196] device (tapf47045d5-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:53:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:28Z|00584|binding|INFO|Releasing lport f47045d5-0c1f-4e24-be1d-a8f054763926 from this chassis (sb_readonly=0)
Oct  2 08:53:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:28Z|00585|binding|INFO|Setting lport f47045d5-0c1f-4e24-be1d-a8f054763926 down in Southbound
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:53:28 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:28Z|00586|binding|INFO|Removing iface tapf47045d5-0c ovn-installed in OVS
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.837 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759409608.8365045, d70a747f-a75e-4341-89db-5953efdbbbd9 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.839 2 DEBUG nova.virt.libvirt.driver [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Start waiting for the detach event from libvirt for device tapf47045d5-0c with device alias net1 for instance d70a747f-a75e-4341-89db-5953efdbbbd9 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.840 2 DEBUG nova.virt.libvirt.guest [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  2 08:53:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:28.842 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:ea:d4 10.100.0.25'], port_security=['fa:16:3e:13:ea:d4 10.100.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/28', 'neutron:device_id': 'd70a747f-a75e-4341-89db-5953efdbbbd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8efeaa72-f872-4ae7-abf0-187d9b448a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f9e51548-d675-4462-aaf0-72519e827667', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2fdae84-979e-43d0-b38a-d190b914304f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=f47045d5-0c1f-4e24-be1d-a8f054763926) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.844 2 DEBUG nova.virt.libvirt.guest [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface>not found in domain: <domain type='kvm' id='68'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <name>instance-0000008f</name>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <uuid>d70a747f-a75e-4341-89db-5953efdbbbd9</uuid>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:name>tempest-TestNetworkBasicOps-server-1834334084</nova:name>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:creationTime>2025-10-02 12:53:26</nova:creationTime>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:flavor name="m1.nano">
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:memory>128</nova:memory>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:disk>1</nova:disk>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:swap>0</nova:swap>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </nova:flavor>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:owner>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </nova:owner>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:ports>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:port uuid="9e761925-3065-4b15-ab37-4ce18061fcf6">
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </nova:port>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:port uuid="f47045d5-0c1f-4e24-be1d-a8f054763926">
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </nova:port>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </nova:ports>
Oct  2 08:53:28 np0005466030 nova_compute[230518]: </nova:instance>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <memory unit='KiB'>131072</memory>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <resource>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <partition>/machine</partition>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </resource>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <sysinfo type='smbios'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <entry name='serial'>d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <entry name='uuid'>d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <boot dev='hd'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <smbios mode='sysinfo'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <vmcoreinfo state='on'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <feature policy='require' name='x2apic'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <feature policy='require' name='vme'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <clock offset='utc'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <timer name='hpet' present='no'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <on_reboot>restart</on_reboot>
Oct  2 08:53:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:28.845 138374 INFO neutron.agent.ovn.metadata.agent [-] Port f47045d5-0c1f-4e24-be1d-a8f054763926 in datapath 8efeaa72-f872-4ae7-abf0-187d9b448a81 unbound from our chassis
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <on_crash>destroy</on_crash>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <disk type='network' device='disk'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <auth username='openstack'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:        <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <source protocol='rbd' name='vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk' index='2'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target dev='vda' bus='virtio'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='virtio-disk0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <disk type='network' device='cdrom'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <auth username='openstack'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:        <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <source protocol='rbd' name='vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config' index='1'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target dev='sda' bus='sata'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <readonly/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='sata0-0-0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pcie.0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='1' port='0x10'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.1'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='2' port='0x11'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.2'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='3' port='0x12'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.3'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='4' port='0x13'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.4'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='5' port='0x14'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.5'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='6' port='0x15'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.6'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='7' port='0x16'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.7'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='8' port='0x17'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.8'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='9' port='0x18'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.9'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='10' port='0x19'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.10'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='11' port='0x1a'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.11'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='12' port='0x1b'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.12'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='13' port='0x1c'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.13'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='14' port='0x1d'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.14'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='15' port='0x1e'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.15'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='16' port='0x1f'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.16'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='17' port='0x20'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.17'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='18' port='0x21'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.18'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='19' port='0x22'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.19'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='20' port='0x23'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.20'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='21' port='0x24'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.21'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='22' port='0x25'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.22'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='23' port='0x26'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.23'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='24' port='0x27'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.24'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target chassis='25' port='0x28'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.25'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model name='pcie-pci-bridge'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='pci.26'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='usb'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <controller type='sata' index='0'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='ide'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <interface type='ethernet'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <mac address='fa:16:3e:61:28:1d'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target dev='tap9e761925-30'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model type='virtio'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <mtu size='1442'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='net0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <serial type='pty'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <source path='/dev/pts/1'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <log file='/var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log' append='off'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target type='isa-serial' port='0'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:        <model name='isa-serial'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      </target>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='serial0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <console type='pty' tty='/dev/pts/1'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <source path='/dev/pts/1'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <log file='/var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log' append='off'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <target type='serial' port='0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='serial0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </console>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <input type='tablet' bus='usb'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='input0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </input>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <input type='mouse' bus='ps2'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='input1'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </input>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <input type='keyboard' bus='ps2'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='input2'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </input>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <listen type='address' address='::0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </graphics>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <audio id='1' type='none'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='video0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <watchdog model='itco' action='reset'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='watchdog0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </watchdog>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <memballoon model='virtio'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <stats period='10'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='balloon0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <rng model='virtio'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <alias name='rng0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <label>system_u:system_r:svirt_t:s0:c65,c245</label>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c65,c245</imagelabel>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </seclabel>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <label>+107:+107</label>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </seclabel>
Oct  2 08:53:28 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:53:28 np0005466030 nova_compute[230518]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.844 2 INFO nova.virt.libvirt.driver [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully detached device tapf47045d5-0c from instance d70a747f-a75e-4341-89db-5953efdbbbd9 from the live domain config.#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.845 2 DEBUG nova.virt.libvirt.vif [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1834334084',display_name='tempest-TestNetworkBasicOps-server-1834334084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1834334084',id=143,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBWzyEyTLwn/OnMkL7XEVYTgkdobvFvcDEiRuV0NOSu0/Vc6+w7CYW/OJQq8xHJ7yByGK0zJNMNYx8BDnEAMNmh8dLyyLFr5uFvHFoK31s13NXGnnrP3EXSfoIgrfk2ieg==',key_name='tempest-TestNetworkBasicOps-1099517484',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:52:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-oim7lxfm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:52:42Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d70a747f-a75e-4341-89db-5953efdbbbd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.846 2 DEBUG nova.network.os_vif_util [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.846 2 DEBUG nova.network.os_vif_util [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.847 2 DEBUG os_vif [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:53:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:28.848 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8efeaa72-f872-4ae7-abf0-187d9b448a81, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:28.849 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4541d1df-1d89-4da8-881c-5fb3cd50aadd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.850 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf47045d5-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:28.850 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81 namespace which is not needed anymore#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.857 2 INFO os_vif [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c')#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.858 2 DEBUG nova.virt.libvirt.guest [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:name>tempest-TestNetworkBasicOps-server-1834334084</nova:name>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:creationTime>2025-10-02 12:53:28</nova:creationTime>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:flavor name="m1.nano">
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:memory>128</nova:memory>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:disk>1</nova:disk>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:swap>0</nova:swap>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </nova:flavor>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:owner>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </nova:owner>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  <nova:ports>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    <nova:port uuid="9e761925-3065-4b15-ab37-4ce18061fcf6">
Oct  2 08:53:28 np0005466030 nova_compute[230518]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:    </nova:port>
Oct  2 08:53:28 np0005466030 nova_compute[230518]:  </nova:ports>
Oct  2 08:53:28 np0005466030 nova_compute[230518]: </nova:instance>
Oct  2 08:53:28 np0005466030 nova_compute[230518]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:53:28 np0005466030 nova_compute[230518]: 2025-10-02 12:53:28.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.101 2 DEBUG nova.compute.manager [req-bd1fcc4f-fcf7-482e-a1e6-ce6c789c2e7d req-2f9317c2-f305-4a64-bfef-070e411844e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-vif-plugged-f47045d5-0c1f-4e24-be1d-a8f054763926 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.102 2 DEBUG oslo_concurrency.lockutils [req-bd1fcc4f-fcf7-482e-a1e6-ce6c789c2e7d req-2f9317c2-f305-4a64-bfef-070e411844e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.102 2 DEBUG oslo_concurrency.lockutils [req-bd1fcc4f-fcf7-482e-a1e6-ce6c789c2e7d req-2f9317c2-f305-4a64-bfef-070e411844e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.102 2 DEBUG oslo_concurrency.lockutils [req-bd1fcc4f-fcf7-482e-a1e6-ce6c789c2e7d req-2f9317c2-f305-4a64-bfef-070e411844e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.102 2 DEBUG nova.compute.manager [req-bd1fcc4f-fcf7-482e-a1e6-ce6c789c2e7d req-2f9317c2-f305-4a64-bfef-070e411844e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] No waiting events found dispatching network-vif-plugged-f47045d5-0c1f-4e24-be1d-a8f054763926 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.102 2 WARNING nova.compute.manager [req-bd1fcc4f-fcf7-482e-a1e6-ce6c789c2e7d req-2f9317c2-f305-4a64-bfef-070e411844e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received unexpected event network-vif-plugged-f47045d5-0c1f-4e24-be1d-a8f054763926 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:53:29 np0005466030 neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81[286510]: [NOTICE]   (286514) : haproxy version is 2.8.14-c23fe91
Oct  2 08:53:29 np0005466030 neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81[286510]: [NOTICE]   (286514) : path to executable is /usr/sbin/haproxy
Oct  2 08:53:29 np0005466030 neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81[286510]: [WARNING]  (286514) : Exiting Master process...
Oct  2 08:53:29 np0005466030 neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81[286510]: [ALERT]    (286514) : Current worker (286516) exited with code 143 (Terminated)
Oct  2 08:53:29 np0005466030 neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81[286510]: [WARNING]  (286514) : All workers exited. Exiting... (0)
Oct  2 08:53:29 np0005466030 systemd[1]: libpod-fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280.scope: Deactivated successfully.
Oct  2 08:53:29 np0005466030 podman[286546]: 2025-10-02 12:53:29.282029724 +0000 UTC m=+0.330420358 container died fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:53:29 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280-userdata-shm.mount: Deactivated successfully.
Oct  2 08:53:29 np0005466030 systemd[1]: var-lib-containers-storage-overlay-04ba3fd6480f261d514725bedcd9e73ed5ac3464d3e9ec307380fb269a44db53-merged.mount: Deactivated successfully.
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.721 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Image rbd:vms/a1440a2f-0663-451f-bef5-bbece30acc40_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.723 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.723 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Ensure instance console log exists: /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.724 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.724 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.725 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.729 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Start _get_guest_xml network_info=[{"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:52:45Z,direct_url=<?>,disk_format='raw',id=47596e8e-a667-4ff8-bd1f-3f35c36243ae,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1789493944-shelved',owner='dbd0afdfb05849f9abfe4cd4454f6a13',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:53:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.734 2 WARNING nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.740 2 DEBUG nova.virt.libvirt.host [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.741 2 DEBUG nova.virt.libvirt.host [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.745 2 DEBUG nova.virt.libvirt.host [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.746 2 DEBUG nova.virt.libvirt.host [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.748 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.748 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:52:45Z,direct_url=<?>,disk_format='raw',id=47596e8e-a667-4ff8-bd1f-3f35c36243ae,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1789493944-shelved',owner='dbd0afdfb05849f9abfe4cd4454f6a13',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:53:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.749 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.749 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.750 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.750 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.751 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.751 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.752 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.753 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.754 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.755 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.755 2 DEBUG nova.objects.instance [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:29 np0005466030 nova_compute[230518]: 2025-10-02 12:53:29.774 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:29 np0005466030 podman[286546]: 2025-10-02 12:53:29.884065042 +0000 UTC m=+0.932455666 container cleanup fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:53:29 np0005466030 systemd[1]: libpod-conmon-fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280.scope: Deactivated successfully.
Oct  2 08:53:30 np0005466030 podman[286580]: 2025-10-02 12:53:30.041736288 +0000 UTC m=+0.123891138 container remove fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:53:30 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:30.047 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f154149c-f16e-4ab9-b610-bd870c81fe0d]: (4, ('Thu Oct  2 12:53:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81 (fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280)\nfcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280\nThu Oct  2 12:53:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81 (fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280)\nfcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:30 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:30.050 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a6ba88-1ac9-4fbb-aa01-8922a3a16bac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:30 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:30.052 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8efeaa72-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:30 np0005466030 kernel: tap8efeaa72-f0: left promiscuous mode
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:30 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:30.095 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1c34f6c5-2f53-42cf-a48d-8938f8fc8c08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:30 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:30.120 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a125d6f4-d6b9-46d0-bec2-26e26794a81b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:30 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:30.123 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[73dfad6c-84c5-4fbd-85e2-3108c92c2fa9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:30 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:30.144 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6e31f84d-d615-4b3f-a77e-8672b9a8f36a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747021, 'reachable_time': 41194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286611, 'error': None, 'target': 'ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:30 np0005466030 systemd[1]: run-netns-ovnmeta\x2d8efeaa72\x2df872\x2d4ae7\x2dabf0\x2d187d9b448a81.mount: Deactivated successfully.
Oct  2 08:53:30 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:30.148 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:53:30 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:30.148 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc3fe26-0a36-4653-819d-8d70f3236f78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.153 2 DEBUG oslo_concurrency.lockutils [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.154 2 DEBUG oslo_concurrency.lockutils [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.155 2 DEBUG nova.network.neutron [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.200 2 DEBUG nova.compute.manager [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-vif-deleted-f47045d5-0c1f-4e24-be1d-a8f054763926 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.200 2 INFO nova.compute.manager [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Neutron deleted interface f47045d5-0c1f-4e24-be1d-a8f054763926; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.201 2 DEBUG nova.network.neutron [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.225 2 DEBUG nova.objects.instance [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lazy-loading 'system_metadata' on Instance uuid d70a747f-a75e-4341-89db-5953efdbbbd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:53:30 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1327837543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.254 2 DEBUG nova.objects.instance [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lazy-loading 'flavor' on Instance uuid d70a747f-a75e-4341-89db-5953efdbbbd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.256 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.291 2 DEBUG nova.storage.rbd_utils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.298 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.350 2 DEBUG nova.virt.libvirt.vif [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1834334084',display_name='tempest-TestNetworkBasicOps-server-1834334084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1834334084',id=143,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBWzyEyTLwn/OnMkL7XEVYTgkdobvFvcDEiRuV0NOSu0/Vc6+w7CYW/OJQq8xHJ7yByGK0zJNMNYx8BDnEAMNmh8dLyyLFr5uFvHFoK31s13NXGnnrP3EXSfoIgrfk2ieg==',key_name='tempest-TestNetworkBasicOps-1099517484',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:52:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-oim7lxfm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:52:42Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d70a747f-a75e-4341-89db-5953efdbbbd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.351 2 DEBUG nova.network.os_vif_util [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converting VIF {"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.352 2 DEBUG nova.network.os_vif_util [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.357 2 DEBUG nova.virt.libvirt.guest [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.362 2 DEBUG nova.virt.libvirt.guest [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface>not found in domain: <domain type='kvm' id='68'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <name>instance-0000008f</name>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <uuid>d70a747f-a75e-4341-89db-5953efdbbbd9</uuid>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:name>tempest-TestNetworkBasicOps-server-1834334084</nova:name>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:creationTime>2025-10-02 12:53:28</nova:creationTime>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:flavor name="m1.nano">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:memory>128</nova:memory>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:disk>1</nova:disk>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:swap>0</nova:swap>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </nova:flavor>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:owner>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </nova:owner>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:ports>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:port uuid="9e761925-3065-4b15-ab37-4ce18061fcf6">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </nova:port>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </nova:ports>
Oct  2 08:53:30 np0005466030 nova_compute[230518]: </nova:instance>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <memory unit='KiB'>131072</memory>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <resource>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <partition>/machine</partition>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </resource>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <sysinfo type='smbios'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <entry name='serial'>d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <entry name='uuid'>d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <boot dev='hd'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <smbios mode='sysinfo'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <vmcoreinfo state='on'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <feature policy='require' name='x2apic'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <feature policy='require' name='vme'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <clock offset='utc'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <timer name='hpet' present='no'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <on_reboot>restart</on_reboot>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <on_crash>destroy</on_crash>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <disk type='network' device='disk'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <auth username='openstack'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <source protocol='rbd' name='vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk' index='2'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target dev='vda' bus='virtio'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='virtio-disk0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <disk type='network' device='cdrom'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <auth username='openstack'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <source protocol='rbd' name='vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config' index='1'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target dev='sda' bus='sata'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <readonly/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='sata0-0-0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pcie.0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='1' port='0x10'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.1'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='2' port='0x11'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.2'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='3' port='0x12'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.3'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='4' port='0x13'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.4'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='5' port='0x14'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.5'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='6' port='0x15'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.6'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='7' port='0x16'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.7'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='8' port='0x17'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.8'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='9' port='0x18'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.9'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='10' port='0x19'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.10'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='11' port='0x1a'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.11'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='12' port='0x1b'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.12'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='13' port='0x1c'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.13'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='14' port='0x1d'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.14'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='15' port='0x1e'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.15'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='16' port='0x1f'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.16'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='17' port='0x20'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.17'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='18' port='0x21'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.18'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='19' port='0x22'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.19'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='20' port='0x23'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.20'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='21' port='0x24'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.21'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='22' port='0x25'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.22'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='23' port='0x26'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.23'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='24' port='0x27'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.24'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='25' port='0x28'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.25'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-pci-bridge'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.26'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='usb'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='sata' index='0'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='ide'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <interface type='ethernet'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <mac address='fa:16:3e:61:28:1d'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target dev='tap9e761925-30'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model type='virtio'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <mtu size='1442'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='net0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <serial type='pty'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <source path='/dev/pts/1'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <log file='/var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log' append='off'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target type='isa-serial' port='0'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <model name='isa-serial'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      </target>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='serial0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <console type='pty' tty='/dev/pts/1'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <source path='/dev/pts/1'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <log file='/var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log' append='off'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target type='serial' port='0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='serial0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </console>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <input type='tablet' bus='usb'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='input0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </input>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <input type='mouse' bus='ps2'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='input1'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </input>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <input type='keyboard' bus='ps2'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='input2'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </input>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <listen type='address' address='::0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </graphics>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <audio id='1' type='none'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='video0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <watchdog model='itco' action='reset'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='watchdog0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </watchdog>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <memballoon model='virtio'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <stats period='10'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='balloon0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <rng model='virtio'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='rng0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <label>system_u:system_r:svirt_t:s0:c65,c245</label>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c65,c245</imagelabel>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </seclabel>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <label>+107:+107</label>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </seclabel>
Oct  2 08:53:30 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:53:30 np0005466030 nova_compute[230518]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.363 2 DEBUG nova.virt.libvirt.guest [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.371 2 DEBUG nova.virt.libvirt.guest [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface>not found in domain: <domain type='kvm' id='68'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <name>instance-0000008f</name>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <uuid>d70a747f-a75e-4341-89db-5953efdbbbd9</uuid>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:name>tempest-TestNetworkBasicOps-server-1834334084</nova:name>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:creationTime>2025-10-02 12:53:28</nova:creationTime>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:flavor name="m1.nano">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:memory>128</nova:memory>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:disk>1</nova:disk>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:swap>0</nova:swap>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </nova:flavor>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:owner>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </nova:owner>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:ports>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:port uuid="9e761925-3065-4b15-ab37-4ce18061fcf6">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </nova:port>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </nova:ports>
Oct  2 08:53:30 np0005466030 nova_compute[230518]: </nova:instance>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <memory unit='KiB'>131072</memory>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <vcpu placement='static'>1</vcpu>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <resource>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <partition>/machine</partition>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </resource>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <sysinfo type='smbios'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <entry name='manufacturer'>RDO</entry>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <entry name='product'>OpenStack Compute</entry>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <entry name='serial'>d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <entry name='uuid'>d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <entry name='family'>Virtual Machine</entry>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <boot dev='hd'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <smbios mode='sysinfo'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <vmcoreinfo state='on'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <cpu mode='custom' match='exact' check='full'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <model fallback='forbid'>Nehalem</model>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <feature policy='require' name='x2apic'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <feature policy='require' name='hypervisor'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <feature policy='require' name='vme'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <clock offset='utc'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <timer name='pit' tickpolicy='delay'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <timer name='hpet' present='no'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <on_poweroff>destroy</on_poweroff>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <on_reboot>restart</on_reboot>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <on_crash>destroy</on_crash>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <disk type='network' device='disk'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <auth username='openstack'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <source protocol='rbd' name='vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk' index='2'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target dev='vda' bus='virtio'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='virtio-disk0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <disk type='network' device='cdrom'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <driver name='qemu' type='raw' cache='none'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <auth username='openstack'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <source protocol='rbd' name='vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config' index='1'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <host name='192.168.122.100' port='6789'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <host name='192.168.122.102' port='6789'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <host name='192.168.122.101' port='6789'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target dev='sda' bus='sata'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <readonly/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='sata0-0-0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='0' model='pcie-root'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pcie.0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='1' port='0x10'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.1'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='2' port='0x11'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.2'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='3' port='0x12'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.3'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='4' port='0x13'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.4'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='5' port='0x14'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.5'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='6' port='0x15'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.6'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='7' port='0x16'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.7'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='8' port='0x17'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.8'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='9' port='0x18'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.9'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='10' port='0x19'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.10'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='11' port='0x1a'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.11'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='12' port='0x1b'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.12'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='13' port='0x1c'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.13'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='14' port='0x1d'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.14'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='15' port='0x1e'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.15'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='16' port='0x1f'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.16'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='17' port='0x20'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.17'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='18' port='0x21'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.18'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='19' port='0x22'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.19'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='20' port='0x23'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.20'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='21' port='0x24'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.21'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='22' port='0x25'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.22'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='23' port='0x26'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.23'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='24' port='0x27'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.24'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-root-port'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target chassis='25' port='0x28'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.25'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model name='pcie-pci-bridge'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='pci.26'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='usb'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type='sata' index='0'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='ide'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </controller>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <interface type='ethernet'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <mac address='fa:16:3e:61:28:1d'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target dev='tap9e761925-30'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model type='virtio'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <driver name='vhost' rx_queue_size='512'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <mtu size='1442'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='net0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <serial type='pty'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <source path='/dev/pts/1'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <log file='/var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log' append='off'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target type='isa-serial' port='0'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <model name='isa-serial'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      </target>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='serial0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <console type='pty' tty='/dev/pts/1'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <source path='/dev/pts/1'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <log file='/var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log' append='off'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target type='serial' port='0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='serial0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </console>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <input type='tablet' bus='usb'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='input0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='usb' bus='0' port='1'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </input>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <input type='mouse' bus='ps2'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='input1'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </input>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <input type='keyboard' bus='ps2'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='input2'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </input>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <listen type='address' address='::0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </graphics>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <audio id='1' type='none'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model type='virtio' heads='1' primary='yes'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='video0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <watchdog model='itco' action='reset'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='watchdog0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </watchdog>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <memballoon model='virtio'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <stats period='10'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='balloon0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <rng model='virtio'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <backend model='random'>/dev/urandom</backend>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <alias name='rng0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <label>system_u:system_r:svirt_t:s0:c65,c245</label>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c65,c245</imagelabel>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </seclabel>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <label>+107:+107</label>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <imagelabel>+107:+107</imagelabel>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </seclabel>
Oct  2 08:53:30 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:53:30 np0005466030 nova_compute[230518]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.372 2 WARNING nova.virt.libvirt.driver [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Detaching interface fa:16:3e:13:ea:d4 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapf47045d5-0c' not found.#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.372 2 DEBUG nova.virt.libvirt.vif [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1834334084',display_name='tempest-TestNetworkBasicOps-server-1834334084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1834334084',id=143,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBWzyEyTLwn/OnMkL7XEVYTgkdobvFvcDEiRuV0NOSu0/Vc6+w7CYW/OJQq8xHJ7yByGK0zJNMNYx8BDnEAMNmh8dLyyLFr5uFvHFoK31s13NXGnnrP3EXSfoIgrfk2ieg==',key_name='tempest-TestNetworkBasicOps-1099517484',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:52:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-oim7lxfm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:52:42Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d70a747f-a75e-4341-89db-5953efdbbbd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.373 2 DEBUG nova.network.os_vif_util [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converting VIF {"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.373 2 DEBUG nova.network.os_vif_util [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.374 2 DEBUG os_vif [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf47045d5-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.378 2 INFO os_vif [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c')#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.379 2 DEBUG nova.virt.libvirt.guest [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:name>tempest-TestNetworkBasicOps-server-1834334084</nova:name>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:creationTime>2025-10-02 12:53:30</nova:creationTime>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:flavor name="m1.nano">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:memory>128</nova:memory>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:disk>1</nova:disk>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:swap>0</nova:swap>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:vcpus>1</nova:vcpus>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </nova:flavor>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:owner>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </nova:owner>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <nova:ports>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:port uuid="9e761925-3065-4b15-ab37-4ce18061fcf6">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </nova:port>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </nova:ports>
Oct  2 08:53:30 np0005466030 nova_compute[230518]: </nova:instance>
Oct  2 08:53:30 np0005466030 nova_compute[230518]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  2 08:53:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:53:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:30.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:53:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:30.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:53:30 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/297649764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.873 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.877 2 DEBUG nova.virt.libvirt.vif [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:50:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1789493944',display_name='tempest-ServerActionsTestOtherB-server-1789493944',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1789493944',id=138,image_ref='47596e8e-a667-4ff8-bd1f-3f35c36243ae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:51:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-0tg60q9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',imag
e_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member',shelved_at='2025-10-02T12:53:03.363111',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='47596e8e-a667-4ff8-bd1f-3f35c36243ae'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:53:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=a1440a2f-0663-451f-bef5-bbece30acc40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.877 2 DEBUG nova.network.os_vif_util [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.878 2 DEBUG nova.network.os_vif_util [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.880 2 DEBUG nova.objects.instance [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'pci_devices' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.896 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <uuid>a1440a2f-0663-451f-bef5-bbece30acc40</uuid>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <name>instance-0000008a</name>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerActionsTestOtherB-server-1789493944</nova:name>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:53:29</nova:creationTime>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <nova:user uuid="b5104e5372994cd19b720862cf1ca2ce">tempest-ServerActionsTestOtherB-858400398-project-member</nova:user>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <nova:project uuid="dbd0afdfb05849f9abfe4cd4454f6a13">tempest-ServerActionsTestOtherB-858400398</nova:project>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="47596e8e-a667-4ff8-bd1f-3f35c36243ae"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <nova:port uuid="d3265627-45dd-403c-990b-451562559afe">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <entry name="serial">a1440a2f-0663-451f-bef5-bbece30acc40</entry>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <entry name="uuid">a1440a2f-0663-451f-bef5-bbece30acc40</entry>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/a1440a2f-0663-451f-bef5-bbece30acc40_disk">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/a1440a2f-0663-451f-bef5-bbece30acc40_disk.config">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:a5:ff:5d"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <target dev="tapd3265627-45"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/console.log" append="off"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <input type="keyboard" bus="usb"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:53:30 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:53:30 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:53:30 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:53:30 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.898 2 DEBUG nova.compute.manager [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Preparing to wait for external event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.898 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.898 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.899 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.899 2 DEBUG nova.virt.libvirt.vif [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:50:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1789493944',display_name='tempest-ServerActionsTestOtherB-server-1789493944',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1789493944',id=138,image_ref='47596e8e-a667-4ff8-bd1f-3f35c36243ae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:51:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-0tg60q9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member',shelved_at='2025-10-02T12:53:03.363111',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='47596e8e-a667-4ff8-bd1f-3f35c36243ae'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:53:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=a1440a2f-0663-451f-bef5-bbece30acc40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.900 2 DEBUG nova.network.os_vif_util [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.900 2 DEBUG nova.network.os_vif_util [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.901 2 DEBUG os_vif [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.905 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3265627-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.905 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd3265627-45, col_values=(('external_ids', {'iface-id': 'd3265627-45dd-403c-990b-451562559afe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:ff:5d', 'vm-uuid': 'a1440a2f-0663-451f-bef5-bbece30acc40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:30 np0005466030 NetworkManager[44960]: <info>  [1759409610.9081] manager: (tapd3265627-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.915 2 INFO os_vif [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45')#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.976 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.976 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.976 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No VIF found with MAC fa:16:3e:a5:ff:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:53:30 np0005466030 nova_compute[230518]: 2025-10-02 12:53:30.977 2 INFO nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Using config drive#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.007 2 DEBUG nova.storage.rbd_utils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.028 2 DEBUG nova.objects.instance [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'ec2_ids' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.081 2 DEBUG nova.objects.instance [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'keypairs' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.255 2 DEBUG nova.compute.manager [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-vif-unplugged-f47045d5-0c1f-4e24-be1d-a8f054763926 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.255 2 DEBUG oslo_concurrency.lockutils [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.255 2 DEBUG oslo_concurrency.lockutils [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.256 2 DEBUG oslo_concurrency.lockutils [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.256 2 DEBUG nova.compute.manager [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] No waiting events found dispatching network-vif-unplugged-f47045d5-0c1f-4e24-be1d-a8f054763926 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.256 2 WARNING nova.compute.manager [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received unexpected event network-vif-unplugged-f47045d5-0c1f-4e24-be1d-a8f054763926 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.256 2 DEBUG nova.compute.manager [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-vif-plugged-f47045d5-0c1f-4e24-be1d-a8f054763926 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.256 2 DEBUG oslo_concurrency.lockutils [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.256 2 DEBUG oslo_concurrency.lockutils [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.257 2 DEBUG oslo_concurrency.lockutils [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.257 2 DEBUG nova.compute.manager [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] No waiting events found dispatching network-vif-plugged-f47045d5-0c1f-4e24-be1d-a8f054763926 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.257 2 WARNING nova.compute.manager [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received unexpected event network-vif-plugged-f47045d5-0c1f-4e24-be1d-a8f054763926 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.616 2 INFO nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Creating config drive at /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/disk.config#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.625 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6c5l7aqu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.669 2 INFO nova.network.neutron [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Port f47045d5-0c1f-4e24-be1d-a8f054763926 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.670 2 DEBUG nova.network.neutron [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.698 2 DEBUG oslo_concurrency.lockutils [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.732 2 DEBUG oslo_concurrency.lockutils [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "interface-d70a747f-a75e-4341-89db-5953efdbbbd9-f47045d5-0c1f-4e24-be1d-a8f054763926" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.779 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6c5l7aqu" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.815 2 DEBUG nova.storage.rbd_utils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:53:31 np0005466030 nova_compute[230518]: 2025-10-02 12:53:31.819 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/disk.config a1440a2f-0663-451f-bef5-bbece30acc40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:32 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:31Z|00587|binding|INFO|Releasing lport 4b02cca2-258b-4a05-9628-3add3aef7360 from this chassis (sb_readonly=0)
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:32.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:53:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:32.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.594 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/disk.config a1440a2f-0663-451f-bef5-bbece30acc40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.775s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.594 2 INFO nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Deleting local config drive /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/disk.config because it was imported into RBD.#033[00m
Oct  2 08:53:32 np0005466030 kernel: tapd3265627-45: entered promiscuous mode
Oct  2 08:53:32 np0005466030 NetworkManager[44960]: <info>  [1759409612.6570] manager: (tapd3265627-45): new Tun device (/org/freedesktop/NetworkManager/Devices/275)
Oct  2 08:53:32 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:32Z|00588|binding|INFO|Claiming lport d3265627-45dd-403c-990b-451562559afe for this chassis.
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:32 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:32Z|00589|binding|INFO|d3265627-45dd-403c-990b-451562559afe: Claiming fa:16:3e:a5:ff:5d 10.100.0.6
Oct  2 08:53:32 np0005466030 systemd-udevd[286612]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.667 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:ff:5d 10.100.0.6'], port_security=['fa:16:3e:a5:ff:5d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a1440a2f-0663-451f-bef5-bbece30acc40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '7', 'neutron:security_group_ids': '78172745-da53-4827-9b36-8764c18b9057', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=d3265627-45dd-403c-990b-451562559afe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.668 138374 INFO neutron.agent.ovn.metadata.agent [-] Port d3265627-45dd-403c-990b-451562559afe in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 bound to our chassis#033[00m
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.671 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4#033[00m
Oct  2 08:53:32 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:32Z|00590|binding|INFO|Setting lport d3265627-45dd-403c-990b-451562559afe ovn-installed in OVS
Oct  2 08:53:32 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:32Z|00591|binding|INFO|Setting lport d3265627-45dd-403c-990b-451562559afe up in Southbound
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:32 np0005466030 NetworkManager[44960]: <info>  [1759409612.6775] device (tapd3265627-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:32 np0005466030 NetworkManager[44960]: <info>  [1759409612.6803] device (tapd3265627-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.684 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[946cd7fb-8e0d-4cb6-b405-284533bf3414]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.685 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9266ebd7-31 in ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:53:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e331 e331: 3 total, 3 up, 3 in
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.688 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9266ebd7-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.689 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[12d7946d-ab9d-4979-b95f-cb69185b1eea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.690 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a5c5b6-c0ce-4ce0-91cc-81ae887704bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:32 np0005466030 systemd-machined[188247]: New machine qemu-69-instance-0000008a.
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.707 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[c7fbcc2d-a56a-4dc7-b1f4-b504b96615e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:32 np0005466030 systemd[1]: Started Virtual Machine qemu-69-instance-0000008a.
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.739 2 DEBUG nova.compute.manager [req-c4ee23f7-1082-40f9-9ea2-d2129315a127 req-87915950-63ce-4562-87dc-b3dc30fe2e12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-changed-9e761925-3065-4b15-ab37-4ce18061fcf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.739 2 DEBUG nova.compute.manager [req-c4ee23f7-1082-40f9-9ea2-d2129315a127 req-87915950-63ce-4562-87dc-b3dc30fe2e12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Refreshing instance network info cache due to event network-changed-9e761925-3065-4b15-ab37-4ce18061fcf6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.739 2 DEBUG oslo_concurrency.lockutils [req-c4ee23f7-1082-40f9-9ea2-d2129315a127 req-87915950-63ce-4562-87dc-b3dc30fe2e12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.740 2 DEBUG oslo_concurrency.lockutils [req-c4ee23f7-1082-40f9-9ea2-d2129315a127 req-87915950-63ce-4562-87dc-b3dc30fe2e12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.739 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[02b93739-b260-4ed0-93bd-97696a8865c1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.740 2 DEBUG nova.network.neutron [req-c4ee23f7-1082-40f9-9ea2-d2129315a127 req-87915950-63ce-4562-87dc-b3dc30fe2e12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Refreshing network info cache for port 9e761925-3065-4b15-ab37-4ce18061fcf6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.772 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[74420d0c-9bf2-41b7-98a0-3614e0550cd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.780 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f9400e1d-28c6-4693-9614-d4dc7761f376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:32 np0005466030 NetworkManager[44960]: <info>  [1759409612.7813] manager: (tap9266ebd7-30): new Veth device (/org/freedesktop/NetworkManager/Devices/276)
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.788 2 DEBUG oslo_concurrency.lockutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.789 2 DEBUG oslo_concurrency.lockutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.789 2 DEBUG oslo_concurrency.lockutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.789 2 DEBUG oslo_concurrency.lockutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.790 2 DEBUG oslo_concurrency.lockutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.791 2 INFO nova.compute.manager [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Terminating instance#033[00m
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.793 2 DEBUG nova.compute.manager [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.817 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c8985f3c-00c1-46be-af61-62501ac47c20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.820 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[1757430c-03d2-4585-94b3-7850031ade56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:32 np0005466030 NetworkManager[44960]: <info>  [1759409612.8425] device (tap9266ebd7-30): carrier: link connected
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.847 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[583f6bf8-a4b7-4f9b-954e-55d2cb85442f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.865 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6da26a3e-62fd-473b-b301-f2a39e952c12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747640, 'reachable_time': 43723, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286761, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.880 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6a1e4355-8ee5-4988-80af-cbcd0b9e2183]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:6593'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747640, 'tstamp': 747640}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286762, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.896 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fed12466-b789-4833-8b98-15c845376dce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747640, 'reachable_time': 43723, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286763, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.926 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5f4b585b-b38f-4652-b4ad-aa25e3554289]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.985 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3bf7ffbf-e5d5-41c7-84bd-43d26de59053]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.987 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.987 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.987 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9266ebd7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:32 np0005466030 kernel: tap9266ebd7-30: entered promiscuous mode
Oct  2 08:53:32 np0005466030 NetworkManager[44960]: <info>  [1759409612.9898] manager: (tap9266ebd7-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.992 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9266ebd7-30, col_values=(('external_ids', {'iface-id': '9fee59c9-e25a-4600-b33b-de655b7e8c27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:32 np0005466030 nova_compute[230518]: 2025-10-02 12:53:32.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:32 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:32Z|00592|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.009 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.010 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[67c1e37c-6b7b-4765-8a4d-3a54586af2e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.010 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-9266ebd7-321c-4fc7-a6c8-c1c304634bb4
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 9266ebd7-321c-4fc7-a6c8-c1c304634bb4
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.011 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'env', 'PROCESS_TAG=haproxy-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:53:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:33 np0005466030 kernel: tap9e761925-30 (unregistering): left promiscuous mode
Oct  2 08:53:33 np0005466030 NetworkManager[44960]: <info>  [1759409613.4076] device (tap9e761925-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:53:33 np0005466030 podman[286838]: 2025-10-02 12:53:33.40966485 +0000 UTC m=+0.052430125 container create 167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:53:33 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:33Z|00593|binding|INFO|Releasing lport 9e761925-3065-4b15-ab37-4ce18061fcf6 from this chassis (sb_readonly=0)
Oct  2 08:53:33 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:33Z|00594|binding|INFO|Setting lport 9e761925-3065-4b15-ab37-4ce18061fcf6 down in Southbound
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:33 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:33Z|00595|binding|INFO|Removing iface tap9e761925-30 ovn-installed in OVS
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.446 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:28:1d 10.100.0.4'], port_security=['fa:16:3e:61:28:1d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd70a747f-a75e-4341-89db-5953efdbbbd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8923666-d594-4b3c-acca-d8d2652ab2bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': '50493e8d-b9e4-415b-bc68-4eb501d460cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d9eaefc8-91b5-45ac-8f60-f49bcfa08eb3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=9e761925-3065-4b15-ab37-4ce18061fcf6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:33 np0005466030 systemd[1]: Started libpod-conmon-167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371.scope.
Oct  2 08:53:33 np0005466030 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Oct  2 08:53:33 np0005466030 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000008f.scope: Consumed 15.352s CPU time.
Oct  2 08:53:33 np0005466030 podman[286838]: 2025-10-02 12:53:33.381809257 +0000 UTC m=+0.024574562 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:53:33 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:53:33 np0005466030 systemd-machined[188247]: Machine qemu-68-instance-0000008f terminated.
Oct  2 08:53:33 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6218758efbae1e2167da1e023b3af2d79632b28cc528df2bdcb085fa84c90af/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:53:33 np0005466030 podman[286838]: 2025-10-02 12:53:33.499237621 +0000 UTC m=+0.142002916 container init 167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:53:33 np0005466030 podman[286838]: 2025-10-02 12:53:33.506672144 +0000 UTC m=+0.149437419 container start 167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 08:53:33 np0005466030 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[286857]: [NOTICE]   (286861) : New worker (286863) forked
Oct  2 08:53:33 np0005466030 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[286857]: [NOTICE]   (286861) : Loading success.
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.565 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 9e761925-3065-4b15-ab37-4ce18061fcf6 in datapath a8923666-d594-4b3c-acca-d8d2652ab2bc unbound from our chassis#033[00m
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.566 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8923666-d594-4b3c-acca-d8d2652ab2bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.567 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e804a156-fc26-4da0-b17a-539586d7c8a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.568 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc namespace which is not needed anymore#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.638 2 INFO nova.virt.libvirt.driver [-] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Instance destroyed successfully.#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.639 2 DEBUG nova.objects.instance [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid d70a747f-a75e-4341-89db-5953efdbbbd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.654 2 DEBUG nova.virt.libvirt.vif [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1834334084',display_name='tempest-TestNetworkBasicOps-server-1834334084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1834334084',id=143,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBWzyEyTLwn/OnMkL7XEVYTgkdobvFvcDEiRuV0NOSu0/Vc6+w7CYW/OJQq8xHJ7yByGK0zJNMNYx8BDnEAMNmh8dLyyLFr5uFvHFoK31s13NXGnnrP3EXSfoIgrfk2ieg==',key_name='tempest-TestNetworkBasicOps-1099517484',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:52:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-oim7lxfm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:52:42Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d70a747f-a75e-4341-89db-5953efdbbbd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.655 2 DEBUG nova.network.os_vif_util [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.656 2 DEBUG nova.network.os_vif_util [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:61:28:1d,bridge_name='br-int',has_traffic_filtering=True,id=9e761925-3065-4b15-ab37-4ce18061fcf6,network=Network(a8923666-d594-4b3c-acca-d8d2652ab2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e761925-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.657 2 DEBUG os_vif [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:28:1d,bridge_name='br-int',has_traffic_filtering=True,id=9e761925-3065-4b15-ab37-4ce18061fcf6,network=Network(a8923666-d594-4b3c-acca-d8d2652ab2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e761925-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.661 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e761925-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.667 2 INFO os_vif [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:28:1d,bridge_name='br-int',has_traffic_filtering=True,id=9e761925-3065-4b15-ab37-4ce18061fcf6,network=Network(a8923666-d594-4b3c-acca-d8d2652ab2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e761925-30')#033[00m
Oct  2 08:53:33 np0005466030 neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc[285779]: [NOTICE]   (285783) : haproxy version is 2.8.14-c23fe91
Oct  2 08:53:33 np0005466030 neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc[285779]: [NOTICE]   (285783) : path to executable is /usr/sbin/haproxy
Oct  2 08:53:33 np0005466030 neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc[285779]: [WARNING]  (285783) : Exiting Master process...
Oct  2 08:53:33 np0005466030 neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc[285779]: [ALERT]    (285783) : Current worker (285785) exited with code 143 (Terminated)
Oct  2 08:53:33 np0005466030 neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc[285779]: [WARNING]  (285783) : All workers exited. Exiting... (0)
Oct  2 08:53:33 np0005466030 systemd[1]: libpod-13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b.scope: Deactivated successfully.
Oct  2 08:53:33 np0005466030 podman[286910]: 2025-10-02 12:53:33.746651763 +0000 UTC m=+0.051105364 container died 13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:53:33 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b-userdata-shm.mount: Deactivated successfully.
Oct  2 08:53:33 np0005466030 systemd[1]: var-lib-containers-storage-overlay-1a37905e41d2d492f51b4caba938f776c5363e981c3b279006e2868ae414335e-merged.mount: Deactivated successfully.
Oct  2 08:53:33 np0005466030 podman[286910]: 2025-10-02 12:53:33.78736599 +0000 UTC m=+0.091819631 container cleanup 13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:53:33 np0005466030 systemd[1]: libpod-conmon-13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b.scope: Deactivated successfully.
Oct  2 08:53:33 np0005466030 podman[286949]: 2025-10-02 12:53:33.853385742 +0000 UTC m=+0.043279299 container remove 13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.861 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b147ff7c-08f8-4c58-a862-d6d8181a773b]: (4, ('Thu Oct  2 12:53:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc (13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b)\n13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b\nThu Oct  2 12:53:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc (13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b)\n13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.862 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[428ad20b-33f3-4dc6-a0c8-e96feb65c84b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.863 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8923666-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:33 np0005466030 kernel: tapa8923666-d0: left promiscuous mode
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.879 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409613.879577, a1440a2f-0663-451f-bef5-bbece30acc40 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.880 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] VM Started (Lifecycle Event)#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.884 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[954ac901-4dd4-4b28-b263-ac62d3f7b119]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.900 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.905 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409613.8796678, a1440a2f-0663-451f-bef5-bbece30acc40 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.906 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.917 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2b1624-0c89-47be-8220-71b3901720ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.919 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0b8f3ded-d754-4601-95e1-531451f91646]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.923 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.930 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.936 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3be85442-5522-4bd0-a7f7-808cc73e29c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742442, 'reachable_time': 31332, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286964, 'error': None, 'target': 'ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:33 np0005466030 systemd[1]: run-netns-ovnmeta\x2da8923666\x2dd594\x2d4b3c\x2dacca\x2dd8d2652ab2bc.mount: Deactivated successfully.
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.939 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:53:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.939 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c2335e-401b-4cd6-8750-86075acc8d4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:33 np0005466030 nova_compute[230518]: 2025-10-02 12:53:33.948 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:53:34 np0005466030 nova_compute[230518]: 2025-10-02 12:53:34.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:34.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:34.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:34 np0005466030 nova_compute[230518]: 2025-10-02 12:53:34.622 2 DEBUG nova.network.neutron [req-c4ee23f7-1082-40f9-9ea2-d2129315a127 req-87915950-63ce-4562-87dc-b3dc30fe2e12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updated VIF entry in instance network info cache for port 9e761925-3065-4b15-ab37-4ce18061fcf6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:53:34 np0005466030 nova_compute[230518]: 2025-10-02 12:53:34.623 2 DEBUG nova.network.neutron [req-c4ee23f7-1082-40f9-9ea2-d2129315a127 req-87915950-63ce-4562-87dc-b3dc30fe2e12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:53:34 np0005466030 nova_compute[230518]: 2025-10-02 12:53:34.647 2 DEBUG oslo_concurrency.lockutils [req-c4ee23f7-1082-40f9-9ea2-d2129315a127 req-87915950-63ce-4562-87dc-b3dc30fe2e12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:53:35 np0005466030 podman[286966]: 2025-10-02 12:53:35.805979281 +0000 UTC m=+0.060783978 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:53:35 np0005466030 podman[286967]: 2025-10-02 12:53:35.8310978 +0000 UTC m=+0.086094333 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct  2 08:53:36 np0005466030 nova_compute[230518]: 2025-10-02 12:53:36.225 2 INFO nova.virt.libvirt.driver [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Deleting instance files /var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9_del#033[00m
Oct  2 08:53:36 np0005466030 nova_compute[230518]: 2025-10-02 12:53:36.227 2 INFO nova.virt.libvirt.driver [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Deletion of /var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9_del complete#033[00m
Oct  2 08:53:36 np0005466030 nova_compute[230518]: 2025-10-02 12:53:36.329 2 INFO nova.compute.manager [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Took 3.54 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:53:36 np0005466030 nova_compute[230518]: 2025-10-02 12:53:36.329 2 DEBUG oslo.service.loopingcall [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:53:36 np0005466030 nova_compute[230518]: 2025-10-02 12:53:36.330 2 DEBUG nova.compute.manager [-] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:53:36 np0005466030 nova_compute[230518]: 2025-10-02 12:53:36.330 2 DEBUG nova.network.neutron [-] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:53:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:36.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:53:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:36.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:53:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:53:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:38.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:53:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:38.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:38 np0005466030 nova_compute[230518]: 2025-10-02 12:53:38.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:38 np0005466030 nova_compute[230518]: 2025-10-02 12:53:38.859 2 DEBUG nova.compute.manager [req-ddbfa8a5-96f1-47af-8255-767d157c5b9e req-2b240056-0807-4e04-8f1c-59c11638561a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:38 np0005466030 nova_compute[230518]: 2025-10-02 12:53:38.860 2 DEBUG oslo_concurrency.lockutils [req-ddbfa8a5-96f1-47af-8255-767d157c5b9e req-2b240056-0807-4e04-8f1c-59c11638561a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:38 np0005466030 nova_compute[230518]: 2025-10-02 12:53:38.860 2 DEBUG oslo_concurrency.lockutils [req-ddbfa8a5-96f1-47af-8255-767d157c5b9e req-2b240056-0807-4e04-8f1c-59c11638561a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:38 np0005466030 nova_compute[230518]: 2025-10-02 12:53:38.860 2 DEBUG oslo_concurrency.lockutils [req-ddbfa8a5-96f1-47af-8255-767d157c5b9e req-2b240056-0807-4e04-8f1c-59c11638561a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:38 np0005466030 nova_compute[230518]: 2025-10-02 12:53:38.861 2 DEBUG nova.compute.manager [req-ddbfa8a5-96f1-47af-8255-767d157c5b9e req-2b240056-0807-4e04-8f1c-59c11638561a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Processing event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:53:38 np0005466030 nova_compute[230518]: 2025-10-02 12:53:38.861 2 DEBUG nova.compute.manager [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:53:38 np0005466030 nova_compute[230518]: 2025-10-02 12:53:38.865 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409618.8654199, a1440a2f-0663-451f-bef5-bbece30acc40 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:53:38 np0005466030 nova_compute[230518]: 2025-10-02 12:53:38.866 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:53:38 np0005466030 nova_compute[230518]: 2025-10-02 12:53:38.868 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:53:38 np0005466030 nova_compute[230518]: 2025-10-02 12:53:38.871 2 INFO nova.virt.libvirt.driver [-] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Instance spawned successfully.#033[00m
Oct  2 08:53:38 np0005466030 nova_compute[230518]: 2025-10-02 12:53:38.889 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:53:38 np0005466030 nova_compute[230518]: 2025-10-02 12:53:38.892 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:53:38 np0005466030 nova_compute[230518]: 2025-10-02 12:53:38.943 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:53:39 np0005466030 nova_compute[230518]: 2025-10-02 12:53:39.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:39 np0005466030 nova_compute[230518]: 2025-10-02 12:53:39.063 2 DEBUG nova.network.neutron [-] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:53:39 np0005466030 nova_compute[230518]: 2025-10-02 12:53:39.081 2 INFO nova.compute.manager [-] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Took 2.75 seconds to deallocate network for instance.#033[00m
Oct  2 08:53:39 np0005466030 nova_compute[230518]: 2025-10-02 12:53:39.124 2 DEBUG oslo_concurrency.lockutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:39 np0005466030 nova_compute[230518]: 2025-10-02 12:53:39.125 2 DEBUG oslo_concurrency.lockutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:39 np0005466030 nova_compute[230518]: 2025-10-02 12:53:39.219 2 DEBUG oslo_concurrency.processutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:53:39 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2661338451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:53:39 np0005466030 nova_compute[230518]: 2025-10-02 12:53:39.697 2 DEBUG oslo_concurrency.processutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:39 np0005466030 nova_compute[230518]: 2025-10-02 12:53:39.706 2 DEBUG nova.compute.provider_tree [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:53:39 np0005466030 nova_compute[230518]: 2025-10-02 12:53:39.726 2 DEBUG nova.scheduler.client.report [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:53:39 np0005466030 nova_compute[230518]: 2025-10-02 12:53:39.766 2 DEBUG oslo_concurrency.lockutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:39 np0005466030 nova_compute[230518]: 2025-10-02 12:53:39.837 2 INFO nova.scheduler.client.report [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance d70a747f-a75e-4341-89db-5953efdbbbd9#033[00m
Oct  2 08:53:39 np0005466030 nova_compute[230518]: 2025-10-02 12:53:39.965 2 DEBUG oslo_concurrency.lockutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e332 e332: 3 total, 3 up, 3 in
Oct  2 08:53:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:53:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:40.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:53:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:40.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:40.640 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:53:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:40.641 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:53:40 np0005466030 nova_compute[230518]: 2025-10-02 12:53:40.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:40 np0005466030 nova_compute[230518]: 2025-10-02 12:53:40.813 2 DEBUG nova.compute.manager [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:53:40 np0005466030 nova_compute[230518]: 2025-10-02 12:53:40.976 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 21.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:41 np0005466030 nova_compute[230518]: 2025-10-02 12:53:41.017 2 DEBUG nova.compute.manager [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-vif-deleted-9e761925-3065-4b15-ab37-4ce18061fcf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:41 np0005466030 nova_compute[230518]: 2025-10-02 12:53:41.017 2 DEBUG nova.compute.manager [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:41 np0005466030 nova_compute[230518]: 2025-10-02 12:53:41.018 2 DEBUG oslo_concurrency.lockutils [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:41 np0005466030 nova_compute[230518]: 2025-10-02 12:53:41.018 2 DEBUG oslo_concurrency.lockutils [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:41 np0005466030 nova_compute[230518]: 2025-10-02 12:53:41.018 2 DEBUG oslo_concurrency.lockutils [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:41 np0005466030 nova_compute[230518]: 2025-10-02 12:53:41.019 2 DEBUG nova.compute.manager [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] No waiting events found dispatching network-vif-plugged-d3265627-45dd-403c-990b-451562559afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:53:41 np0005466030 nova_compute[230518]: 2025-10-02 12:53:41.019 2 WARNING nova.compute.manager [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received unexpected event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe for instance with vm_state active and task_state None.#033[00m
Oct  2 08:53:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:53:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:42.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:53:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:42.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:43 np0005466030 nova_compute[230518]: 2025-10-02 12:53:43.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:44 np0005466030 nova_compute[230518]: 2025-10-02 12:53:44.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:44.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e333 e333: 3 total, 3 up, 3 in
Oct  2 08:53:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:44.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:44 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:44Z|00596|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct  2 08:53:44 np0005466030 nova_compute[230518]: 2025-10-02 12:53:44.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:45 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:45Z|00597|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct  2 08:53:45 np0005466030 nova_compute[230518]: 2025-10-02 12:53:45.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #112. Immutable memtables: 0.
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.573593) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 112
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625573656, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1369, "num_deletes": 255, "total_data_size": 2909700, "memory_usage": 2962624, "flush_reason": "Manual Compaction"}
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #113: started
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625590516, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 113, "file_size": 1908022, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56383, "largest_seqno": 57747, "table_properties": {"data_size": 1901941, "index_size": 3348, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13366, "raw_average_key_size": 20, "raw_value_size": 1889629, "raw_average_value_size": 2902, "num_data_blocks": 147, "num_entries": 651, "num_filter_entries": 651, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409528, "oldest_key_time": 1759409528, "file_creation_time": 1759409625, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 16969 microseconds, and 4848 cpu microseconds.
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.590569) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #113: 1908022 bytes OK
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.590593) [db/memtable_list.cc:519] [default] Level-0 commit table #113 started
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.593421) [db/memtable_list.cc:722] [default] Level-0 commit table #113: memtable #1 done
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.593434) EVENT_LOG_v1 {"time_micros": 1759409625593430, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.593451) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 2903126, prev total WAL file size 2903126, number of live WAL files 2.
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000109.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.594215) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [113(1863KB)], [111(10MB)]
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625594291, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [113], "files_L6": [111], "score": -1, "input_data_size": 12718845, "oldest_snapshot_seqno": -1}
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #114: 8166 keys, 10743450 bytes, temperature: kUnknown
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625686154, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 114, "file_size": 10743450, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10690476, "index_size": 31486, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20421, "raw_key_size": 211940, "raw_average_key_size": 25, "raw_value_size": 10546677, "raw_average_value_size": 1291, "num_data_blocks": 1228, "num_entries": 8166, "num_filter_entries": 8166, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409625, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 114, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.686408) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10743450 bytes
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.688209) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.4 rd, 116.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.3 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(12.3) write-amplify(5.6) OK, records in: 8690, records dropped: 524 output_compression: NoCompression
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.688225) EVENT_LOG_v1 {"time_micros": 1759409625688217, "job": 70, "event": "compaction_finished", "compaction_time_micros": 91887, "compaction_time_cpu_micros": 24652, "output_level": 6, "num_output_files": 1, "total_output_size": 10743450, "num_input_records": 8690, "num_output_records": 8166, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625688619, "job": 70, "event": "table_file_deletion", "file_number": 113}
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000111.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625690176, "job": 70, "event": "table_file_deletion", "file_number": 111}
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.594104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.690251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.690257) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.690258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.690260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:53:45 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.690261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:53:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:46.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:53:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:46.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:53:46 np0005466030 nova_compute[230518]: 2025-10-02 12:53:46.820 2 DEBUG oslo_concurrency.lockutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:46 np0005466030 nova_compute[230518]: 2025-10-02 12:53:46.820 2 DEBUG oslo_concurrency.lockutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:46 np0005466030 nova_compute[230518]: 2025-10-02 12:53:46.820 2 DEBUG oslo_concurrency.lockutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:46 np0005466030 nova_compute[230518]: 2025-10-02 12:53:46.821 2 DEBUG oslo_concurrency.lockutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:46 np0005466030 nova_compute[230518]: 2025-10-02 12:53:46.821 2 DEBUG oslo_concurrency.lockutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:46 np0005466030 nova_compute[230518]: 2025-10-02 12:53:46.822 2 INFO nova.compute.manager [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Terminating instance#033[00m
Oct  2 08:53:46 np0005466030 nova_compute[230518]: 2025-10-02 12:53:46.823 2 DEBUG nova.compute.manager [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:53:46 np0005466030 kernel: tapd3265627-45 (unregistering): left promiscuous mode
Oct  2 08:53:46 np0005466030 NetworkManager[44960]: <info>  [1759409626.9027] device (tapd3265627-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:53:46 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:46Z|00598|binding|INFO|Releasing lport d3265627-45dd-403c-990b-451562559afe from this chassis (sb_readonly=0)
Oct  2 08:53:46 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:46Z|00599|binding|INFO|Setting lport d3265627-45dd-403c-990b-451562559afe down in Southbound
Oct  2 08:53:46 np0005466030 ovn_controller[129257]: 2025-10-02T12:53:46Z|00600|binding|INFO|Removing iface tapd3265627-45 ovn-installed in OVS
Oct  2 08:53:46 np0005466030 nova_compute[230518]: 2025-10-02 12:53:46.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:46 np0005466030 nova_compute[230518]: 2025-10-02 12:53:46.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:46 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:46.954 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:ff:5d 10.100.0.6'], port_security=['fa:16:3e:a5:ff:5d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a1440a2f-0663-451f-bef5-bbece30acc40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '9', 'neutron:security_group_ids': '78172745-da53-4827-9b36-8764c18b9057', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=d3265627-45dd-403c-990b-451562559afe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:53:46 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:46.955 138374 INFO neutron.agent.ovn.metadata.agent [-] Port d3265627-45dd-403c-990b-451562559afe in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 unbound from our chassis#033[00m
Oct  2 08:53:46 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:46.956 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:53:46 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:46.957 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f398423d-f9af-43b7-9fe3-cf78f2321395]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:46 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:46.958 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 namespace which is not needed anymore#033[00m
Oct  2 08:53:46 np0005466030 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Oct  2 08:53:46 np0005466030 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000008a.scope: Consumed 9.395s CPU time.
Oct  2 08:53:46 np0005466030 systemd-machined[188247]: Machine qemu-69-instance-0000008a terminated.
Oct  2 08:53:47 np0005466030 nova_compute[230518]: 2025-10-02 12:53:47.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:47 np0005466030 nova_compute[230518]: 2025-10-02 12:53:47.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:47 np0005466030 nova_compute[230518]: 2025-10-02 12:53:47.067 2 INFO nova.virt.libvirt.driver [-] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Instance destroyed successfully.#033[00m
Oct  2 08:53:47 np0005466030 nova_compute[230518]: 2025-10-02 12:53:47.068 2 DEBUG nova.objects.instance [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'resources' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:53:47 np0005466030 nova_compute[230518]: 2025-10-02 12:53:47.087 2 DEBUG nova.virt.libvirt.vif [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:50:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1789493944',display_name='tempest-ServerActionsTestOtherB-server-1789493944',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1789493944',id=138,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:53:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-0tg60q9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:53:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=a1440a2f-0663-451f-bef5-bbece30acc40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:53:47 np0005466030 nova_compute[230518]: 2025-10-02 12:53:47.089 2 DEBUG nova.network.os_vif_util [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:53:47 np0005466030 nova_compute[230518]: 2025-10-02 12:53:47.091 2 DEBUG nova.network.os_vif_util [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:53:47 np0005466030 nova_compute[230518]: 2025-10-02 12:53:47.091 2 DEBUG os_vif [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:53:47 np0005466030 nova_compute[230518]: 2025-10-02 12:53:47.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:47 np0005466030 nova_compute[230518]: 2025-10-02 12:53:47.094 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3265627-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:47 np0005466030 nova_compute[230518]: 2025-10-02 12:53:47.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:47 np0005466030 nova_compute[230518]: 2025-10-02 12:53:47.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:47 np0005466030 nova_compute[230518]: 2025-10-02 12:53:47.101 2 INFO os_vif [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45')#033[00m
Oct  2 08:53:47 np0005466030 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[286857]: [NOTICE]   (286861) : haproxy version is 2.8.14-c23fe91
Oct  2 08:53:47 np0005466030 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[286857]: [NOTICE]   (286861) : path to executable is /usr/sbin/haproxy
Oct  2 08:53:47 np0005466030 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[286857]: [WARNING]  (286861) : Exiting Master process...
Oct  2 08:53:47 np0005466030 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[286857]: [WARNING]  (286861) : Exiting Master process...
Oct  2 08:53:47 np0005466030 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[286857]: [ALERT]    (286861) : Current worker (286863) exited with code 143 (Terminated)
Oct  2 08:53:47 np0005466030 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[286857]: [WARNING]  (286861) : All workers exited. Exiting... (0)
Oct  2 08:53:47 np0005466030 systemd[1]: libpod-167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371.scope: Deactivated successfully.
Oct  2 08:53:47 np0005466030 podman[287054]: 2025-10-02 12:53:47.128175291 +0000 UTC m=+0.056002488 container died 167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:53:47 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371-userdata-shm.mount: Deactivated successfully.
Oct  2 08:53:47 np0005466030 systemd[1]: var-lib-containers-storage-overlay-f6218758efbae1e2167da1e023b3af2d79632b28cc528df2bdcb085fa84c90af-merged.mount: Deactivated successfully.
Oct  2 08:53:47 np0005466030 podman[287054]: 2025-10-02 12:53:47.168616311 +0000 UTC m=+0.096443508 container cleanup 167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:53:47 np0005466030 systemd[1]: libpod-conmon-167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371.scope: Deactivated successfully.
Oct  2 08:53:47 np0005466030 podman[287105]: 2025-10-02 12:53:47.249145046 +0000 UTC m=+0.054991346 container remove 167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 08:53:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:47.256 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c80150-01a2-48b3-84a4-244eb6cc04e3]: (4, ('Thu Oct  2 12:53:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 (167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371)\n167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371\nThu Oct  2 12:53:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 (167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371)\n167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:47.258 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[90f61d5b-cd28-4136-ab45-6414cc4137ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:47.259 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:47 np0005466030 nova_compute[230518]: 2025-10-02 12:53:47.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:47 np0005466030 kernel: tap9266ebd7-30: left promiscuous mode
Oct  2 08:53:47 np0005466030 nova_compute[230518]: 2025-10-02 12:53:47.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:47.281 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f7de92-7e82-463a-bdae-941081f718da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:47.306 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d597723c-3787-4271-ab46-05d1c6a001fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:47.309 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c7348140-c846-4fcd-89e0-4f05256d9011]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:47.328 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[81dd85c8-32a0-4911-a8d5-11cca9fbc7a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747632, 'reachable_time': 16840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287120, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:47 np0005466030 systemd[1]: run-netns-ovnmeta\x2d9266ebd7\x2d321c\x2d4fc7\x2da6c8\x2dc1c304634bb4.mount: Deactivated successfully.
Oct  2 08:53:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:47.332 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:53:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:47.332 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[6dbed848-0cef-40e5-9bd5-324767a04bce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:53:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e334 e334: 3 total, 3 up, 3 in
Oct  2 08:53:47 np0005466030 nova_compute[230518]: 2025-10-02 12:53:47.973 2 INFO nova.virt.libvirt.driver [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Deleting instance files /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40_del#033[00m
Oct  2 08:53:47 np0005466030 nova_compute[230518]: 2025-10-02 12:53:47.973 2 INFO nova.virt.libvirt.driver [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Deletion of /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40_del complete#033[00m
Oct  2 08:53:48 np0005466030 nova_compute[230518]: 2025-10-02 12:53:48.044 2 INFO nova.compute.manager [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Took 1.22 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:53:48 np0005466030 nova_compute[230518]: 2025-10-02 12:53:48.045 2 DEBUG oslo.service.loopingcall [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:53:48 np0005466030 nova_compute[230518]: 2025-10-02 12:53:48.045 2 DEBUG nova.compute.manager [-] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:53:48 np0005466030 nova_compute[230518]: 2025-10-02 12:53:48.046 2 DEBUG nova.network.neutron [-] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:53:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:48.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:48 np0005466030 nova_compute[230518]: 2025-10-02 12:53:48.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:53:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:48.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:53:48 np0005466030 nova_compute[230518]: 2025-10-02 12:53:48.637 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409613.6351857, d70a747f-a75e-4341-89db-5953efdbbbd9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:53:48 np0005466030 nova_compute[230518]: 2025-10-02 12:53:48.638 2 INFO nova.compute.manager [-] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:53:48 np0005466030 nova_compute[230518]: 2025-10-02 12:53:48.671 2 DEBUG nova.compute.manager [None req-ced838e6-4100-43e1-ae40-eb1afc1c7053 - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:53:49 np0005466030 nova_compute[230518]: 2025-10-02 12:53:49.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:49 np0005466030 nova_compute[230518]: 2025-10-02 12:53:49.257 2 DEBUG nova.compute.manager [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-vif-unplugged-d3265627-45dd-403c-990b-451562559afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:49 np0005466030 nova_compute[230518]: 2025-10-02 12:53:49.258 2 DEBUG oslo_concurrency.lockutils [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:49 np0005466030 nova_compute[230518]: 2025-10-02 12:53:49.258 2 DEBUG oslo_concurrency.lockutils [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:49 np0005466030 nova_compute[230518]: 2025-10-02 12:53:49.259 2 DEBUG oslo_concurrency.lockutils [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:49 np0005466030 nova_compute[230518]: 2025-10-02 12:53:49.259 2 DEBUG nova.compute.manager [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] No waiting events found dispatching network-vif-unplugged-d3265627-45dd-403c-990b-451562559afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:53:49 np0005466030 nova_compute[230518]: 2025-10-02 12:53:49.259 2 DEBUG nova.compute.manager [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-vif-unplugged-d3265627-45dd-403c-990b-451562559afe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:53:49 np0005466030 nova_compute[230518]: 2025-10-02 12:53:49.259 2 DEBUG nova.compute.manager [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:49 np0005466030 nova_compute[230518]: 2025-10-02 12:53:49.259 2 DEBUG oslo_concurrency.lockutils [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:49 np0005466030 nova_compute[230518]: 2025-10-02 12:53:49.260 2 DEBUG oslo_concurrency.lockutils [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:49 np0005466030 nova_compute[230518]: 2025-10-02 12:53:49.260 2 DEBUG oslo_concurrency.lockutils [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:49 np0005466030 nova_compute[230518]: 2025-10-02 12:53:49.260 2 DEBUG nova.compute.manager [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] No waiting events found dispatching network-vif-plugged-d3265627-45dd-403c-990b-451562559afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:53:49 np0005466030 nova_compute[230518]: 2025-10-02 12:53:49.260 2 WARNING nova.compute.manager [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received unexpected event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:53:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:53:49.644 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:53:49 np0005466030 nova_compute[230518]: 2025-10-02 12:53:49.760 2 DEBUG nova.network.neutron [-] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:53:49 np0005466030 nova_compute[230518]: 2025-10-02 12:53:49.787 2 INFO nova.compute.manager [-] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Took 1.74 seconds to deallocate network for instance.#033[00m
Oct  2 08:53:49 np0005466030 nova_compute[230518]: 2025-10-02 12:53:49.859 2 DEBUG oslo_concurrency.lockutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:53:49 np0005466030 nova_compute[230518]: 2025-10-02 12:53:49.860 2 DEBUG oslo_concurrency.lockutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:53:49 np0005466030 nova_compute[230518]: 2025-10-02 12:53:49.914 2 DEBUG oslo_concurrency.processutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:53:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:53:50 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3360298542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:53:50 np0005466030 nova_compute[230518]: 2025-10-02 12:53:50.398 2 DEBUG oslo_concurrency.processutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:53:50 np0005466030 nova_compute[230518]: 2025-10-02 12:53:50.405 2 DEBUG nova.compute.provider_tree [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:53:50 np0005466030 nova_compute[230518]: 2025-10-02 12:53:50.423 2 DEBUG nova.scheduler.client.report [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:53:50 np0005466030 nova_compute[230518]: 2025-10-02 12:53:50.458 2 DEBUG oslo_concurrency.lockutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:50.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:50 np0005466030 nova_compute[230518]: 2025-10-02 12:53:50.486 2 INFO nova.scheduler.client.report [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Deleted allocations for instance a1440a2f-0663-451f-bef5-bbece30acc40#033[00m
Oct  2 08:53:50 np0005466030 nova_compute[230518]: 2025-10-02 12:53:50.574 2 DEBUG oslo_concurrency.lockutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:53:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:50.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e335 e335: 3 total, 3 up, 3 in
Oct  2 08:53:51 np0005466030 nova_compute[230518]: 2025-10-02 12:53:51.376 2 DEBUG nova.compute.manager [req-85086bab-d4a7-4a7f-8674-32e09caf72e7 req-a0a3db08-5db2-498c-816d-273882f2cf9d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-vif-deleted-d3265627-45dd-403c-990b-451562559afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:53:52 np0005466030 nova_compute[230518]: 2025-10-02 12:53:52.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:52.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:52.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:52 np0005466030 nova_compute[230518]: 2025-10-02 12:53:52.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e336 e336: 3 total, 3 up, 3 in
Oct  2 08:53:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:54 np0005466030 nova_compute[230518]: 2025-10-02 12:53:54.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:54 np0005466030 nova_compute[230518]: 2025-10-02 12:53:54.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:53:54 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1761461335' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:53:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:53:54 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1761461335' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:53:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:54.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:54.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e337 e337: 3 total, 3 up, 3 in
Oct  2 08:53:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e338 e338: 3 total, 3 up, 3 in
Oct  2 08:53:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:56.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:53:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:56.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:53:56 np0005466030 podman[287145]: 2025-10-02 12:53:56.80343687 +0000 UTC m=+0.049311738 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  2 08:53:56 np0005466030 podman[287144]: 2025-10-02 12:53:56.834518925 +0000 UTC m=+0.083187981 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  2 08:53:57 np0005466030 nova_compute[230518]: 2025-10-02 12:53:57.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:57 np0005466030 nova_compute[230518]: 2025-10-02 12:53:57.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:53:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:53:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:58.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:53:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:53:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:58.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:53:59 np0005466030 nova_compute[230518]: 2025-10-02 12:53:59.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:00 np0005466030 nova_compute[230518]: 2025-10-02 12:54:00.490 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:00 np0005466030 nova_compute[230518]: 2025-10-02 12:54:00.490 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:54:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:00.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:54:00 np0005466030 nova_compute[230518]: 2025-10-02 12:54:00.517 2 DEBUG nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:54:00 np0005466030 nova_compute[230518]: 2025-10-02 12:54:00.620 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:00 np0005466030 nova_compute[230518]: 2025-10-02 12:54:00.621 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:00 np0005466030 nova_compute[230518]: 2025-10-02 12:54:00.629 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:54:00 np0005466030 nova_compute[230518]: 2025-10-02 12:54:00.629 2 INFO nova.compute.claims [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:54:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:54:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:00.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:54:00 np0005466030 nova_compute[230518]: 2025-10-02 12:54:00.702 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:00 np0005466030 nova_compute[230518]: 2025-10-02 12:54:00.702 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:00 np0005466030 nova_compute[230518]: 2025-10-02 12:54:00.726 2 DEBUG nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:54:00 np0005466030 nova_compute[230518]: 2025-10-02 12:54:00.785 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:00 np0005466030 nova_compute[230518]: 2025-10-02 12:54:00.940 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:54:01 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4268433024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.242 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.247 2 DEBUG nova.compute.provider_tree [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.274 2 DEBUG nova.scheduler.client.report [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.313 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.314 2 DEBUG nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.316 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.322 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.322 2 INFO nova.compute.claims [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.418 2 DEBUG nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.419 2 DEBUG nova.network.neutron [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.464 2 INFO nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.493 2 DEBUG nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.505 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.590 2 DEBUG nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.591 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.591 2 INFO nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Creating image(s)#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.615 2 DEBUG nova.storage.rbd_utils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.674 2 DEBUG nova.storage.rbd_utils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.698 2 DEBUG nova.storage.rbd_utils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.702 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.763 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.764 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.764 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.765 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.788 2 DEBUG nova.storage.rbd_utils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.792 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:54:01 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1477771253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.922 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.928 2 DEBUG nova.policy [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.934 2 DEBUG nova.compute.provider_tree [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.949 2 DEBUG nova.scheduler.client.report [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.987 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:01 np0005466030 nova_compute[230518]: 2025-10-02 12:54:01.988 2 DEBUG nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.064 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409627.063448, a1440a2f-0663-451f-bef5-bbece30acc40 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.065 2 INFO nova.compute.manager [-] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.078 2 DEBUG nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.078 2 DEBUG nova.network.neutron [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.086 2 DEBUG nova.compute.manager [None req-13cc619e-8d96-4ed4-91fb-1d85f1981ff2 - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.096 2 INFO nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.120 2 DEBUG nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.215 2 DEBUG nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.216 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.216 2 INFO nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Creating image(s)#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.244 2 DEBUG nova.storage.rbd_utils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image a1e0932b-16b6-46b9-8192-b89b91e91802_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.269 2 DEBUG nova.storage.rbd_utils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image a1e0932b-16b6-46b9-8192-b89b91e91802_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.294 2 DEBUG nova.storage.rbd_utils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image a1e0932b-16b6-46b9-8192-b89b91e91802_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.298 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.361 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.362 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.363 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.363 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.391 2 DEBUG nova.storage.rbd_utils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image a1e0932b-16b6-46b9-8192-b89b91e91802_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.394 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a1e0932b-16b6-46b9-8192-b89b91e91802_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:54:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:02.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:54:02 np0005466030 nova_compute[230518]: 2025-10-02 12:54:02.528 2 DEBUG nova.policy [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '47f465d8c8ac44c982f2a2e60ae9eb40', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:54:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:02.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
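The radosgw "beast" access-log lines interleaved through this section follow a fixed layout (client IP, user, timestamp, request line, status, bytes, latency). A small parser for that layout, with the field order inferred from these samples rather than from rgw documentation:

```python
import re

# Matches the beast frontend access-log lines seen in this log.
# The three "- - -" fields between bytes and latency are skipped.
BEAST_RE = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

def parse_beast(line):
    # Return a dict of the named fields, or None if the line is not
    # a beast access-log entry (e.g. the "====== req done" lines).
    m = BEAST_RE.search(line)
    return m.groupdict() if m else None
```

Applied to the anonymous `HEAD /` probes above (these look like load-balancer health checks), it extracts the sub-millisecond latencies the frontend reports.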
Oct  2 08:54:03 np0005466030 nova_compute[230518]: 2025-10-02 12:54:03.090 2 DEBUG nova.network.neutron [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Successfully created port: f67b8436-7ef3-4d35-814c-3d62c9a9fec0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:54:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 e339: 3 total, 3 up, 3 in
Oct  2 08:54:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:03 np0005466030 nova_compute[230518]: 2025-10-02 12:54:03.532 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.740s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:54:03 np0005466030 nova_compute[230518]: 2025-10-02 12:54:03.606 2 DEBUG nova.storage.rbd_utils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:54:03 np0005466030 nova_compute[230518]: 2025-10-02 12:54:03.736 2 DEBUG nova.network.neutron [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Successfully created port: 20204810-ff47-450e-80e5-23d03b435455 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:54:03 np0005466030 nova_compute[230518]: 2025-10-02 12:54:03.826 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a1e0932b-16b6-46b9-8192-b89b91e91802_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:54:03 np0005466030 nova_compute[230518]: 2025-10-02 12:54:03.902 2 DEBUG nova.objects.instance [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:54:03 np0005466030 nova_compute[230518]: 2025-10-02 12:54:03.941 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:54:03 np0005466030 nova_compute[230518]: 2025-10-02 12:54:03.941 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Ensure instance console log exists: /var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:54:03 np0005466030 nova_compute[230518]: 2025-10-02 12:54:03.942 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:54:03 np0005466030 nova_compute[230518]: 2025-10-02 12:54:03.942 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:54:03 np0005466030 nova_compute[230518]: 2025-10-02 12:54:03.942 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:54:03 np0005466030 nova_compute[230518]: 2025-10-02 12:54:03.946 2 DEBUG nova.storage.rbd_utils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] resizing rbd image a1e0932b-16b6-46b9-8192-b89b91e91802_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
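The resize target logged above, 1073741824 bytes, is the flavor's root disk size in GiB converted to bytes (the m1.nano flavor logged later in this section has root_gb=1): after importing the small qcow2-derived base image, nova grows the RBD image up to the flavor size. A one-line sketch of that conversion:

```python
def flavor_root_bytes(root_gb):
    # root_gb is the flavor's root disk size in GiB; RBD resize
    # takes the target size in bytes.
    return root_gb * 1024 ** 3
```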
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.133 2 DEBUG nova.network.neutron [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Successfully updated port: f67b8436-7ef3-4d35-814c-3d62c9a9fec0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.139 2 DEBUG nova.objects.instance [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'migration_context' on Instance uuid a1e0932b-16b6-46b9-8192-b89b91e91802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.150 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.150 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Ensure instance console log exists: /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.150 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.151 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.151 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.152 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.152 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.152 2 DEBUG nova.network.neutron [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.213 2 DEBUG nova.compute.manager [req-0d319b2b-ef45-44a7-95d1-28d3d62d62b4 req-abe9e0bc-2eb4-4ac6-a350-399f548e08aa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received event network-changed-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.213 2 DEBUG nova.compute.manager [req-0d319b2b-ef45-44a7-95d1-28d3d62d62b4 req-abe9e0bc-2eb4-4ac6-a350-399f548e08aa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Refreshing instance network info cache due to event network-changed-f67b8436-7ef3-4d35-814c-3d62c9a9fec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.214 2 DEBUG oslo_concurrency.lockutils [req-0d319b2b-ef45-44a7-95d1-28d3d62d62b4 req-abe9e0bc-2eb4-4ac6-a350-399f548e08aa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.384 2 DEBUG nova.network.neutron [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:54:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:04.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.628 2 DEBUG nova.network.neutron [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Successfully updated port: 20204810-ff47-450e-80e5-23d03b435455 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.646 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.647 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.647 2 DEBUG nova.network.neutron [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:54:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:04.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.723 2 DEBUG nova.compute.manager [req-a977c721-5647-4a39-8125-0a6c33f22d22 req-7c4e72f9-23b2-4e62-9f51-7012bd7278a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-changed-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.723 2 DEBUG nova.compute.manager [req-a977c721-5647-4a39-8125-0a6c33f22d22 req-7c4e72f9-23b2-4e62-9f51-7012bd7278a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing instance network info cache due to event network-changed-20204810-ff47-450e-80e5-23d03b435455. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:54:04 np0005466030 nova_compute[230518]: 2025-10-02 12:54:04.723 2 DEBUG oslo_concurrency.lockutils [req-a977c721-5647-4a39-8125-0a6c33f22d22 req-7c4e72f9-23b2-4e62-9f51-7012bd7278a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.211 2 DEBUG nova.network.neutron [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.631 2 DEBUG nova.network.neutron [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updating instance_info_cache with network_info: [{"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.672 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.673 2 DEBUG nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Instance network_info: |[{"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.674 2 DEBUG oslo_concurrency.lockutils [req-0d319b2b-ef45-44a7-95d1-28d3d62d62b4 req-abe9e0bc-2eb4-4ac6-a350-399f548e08aa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.674 2 DEBUG nova.network.neutron [req-0d319b2b-ef45-44a7-95d1-28d3d62d62b4 req-abe9e0bc-2eb4-4ac6-a350-399f548e08aa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Refreshing network info cache for port f67b8436-7ef3-4d35-814c-3d62c9a9fec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.679 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Start _get_guest_xml network_info=[{"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.684 2 WARNING nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.688 2 DEBUG nova.virt.libvirt.host [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.688 2 DEBUG nova.virt.libvirt.host [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.691 2 DEBUG nova.virt.libvirt.host [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.691 2 DEBUG nova.virt.libvirt.host [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.693 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.693 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.693 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.694 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.694 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.694 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.695 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.695 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.695 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.696 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.696 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.696 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
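The hardware.py debug lines above trace nova enumerating every (sockets, cores, threads) layout whose product equals the flavor's vCPU count, capped by the logged 65536 limits; for 1 vCPU the only result is (1,1,1). A simplified sketch of that enumeration (nova's actual `_get_possible_cpu_topologies` also handles preferences and sorting, which are omitted here):

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    # Brute-force every factorization of vcpus into
    # sockets * cores * threads within the per-axis limits.
    topos = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    topos.append((s, c, t))
    return topos
```

With vcpus=1 this yields the single topology the log reports ("Got 1 possible topologies").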
Oct  2 08:54:05 np0005466030 nova_compute[230518]: 2025-10-02 12:54:05.699 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:54:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:54:06 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3687837290' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.136 2 DEBUG nova.network.neutron [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.141 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.181 2 DEBUG nova.storage.rbd_utils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.186 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.224 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.225 2 DEBUG nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance network_info: |[{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.226 2 DEBUG oslo_concurrency.lockutils [req-a977c721-5647-4a39-8125-0a6c33f22d22 req-7c4e72f9-23b2-4e62-9f51-7012bd7278a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.227 2 DEBUG nova.network.neutron [req-a977c721-5647-4a39-8125-0a6c33f22d22 req-7c4e72f9-23b2-4e62-9f51-7012bd7278a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing network info cache for port 20204810-ff47-450e-80e5-23d03b435455 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.230 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Start _get_guest_xml network_info=[{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.235 2 WARNING nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.241 2 DEBUG nova.virt.libvirt.host [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.242 2 DEBUG nova.virt.libvirt.host [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.245 2 DEBUG nova.virt.libvirt.host [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.245 2 DEBUG nova.virt.libvirt.host [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.246 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.246 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.247 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.247 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.248 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.248 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.248 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.248 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.249 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.249 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.249 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.250 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.253 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:06.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:54:06 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1284789999' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.609 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.612 2 DEBUG nova.virt.libvirt.vif [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1578945059',display_name='tempest-TestNetworkBasicOps-server-1578945059',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1578945059',id=145,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIapgtoYYA99v/havIqzWJIpGUB3YFecKSBWtCRKGjdMREGYtjf88dqlPzDhmBAjJKkZ8FXg34wGrVA0nOzc0vpbWnm6iI8pilmg2YswkyP+t9hDFvcoygSa2fIr9gTuTw==',key_name='tempest-TestNetworkBasicOps-1376701642',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-o7vn37r9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:54:01Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=95fd2a5f-82d9-46eb-b218-cb0a9a4e2765,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.613 2 DEBUG nova.network.os_vif_util [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.615 2 DEBUG nova.network.os_vif_util [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=f67b8436-7ef3-4d35-814c-3d62c9a9fec0,network=Network(83bbba21-a002-4973-9f29-252bf270271b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67b8436-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.617 2 DEBUG nova.objects.instance [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.645 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:54:06 np0005466030 nova_compute[230518]:  <uuid>95fd2a5f-82d9-46eb-b218-cb0a9a4e2765</uuid>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:  <name>instance-00000091</name>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <nova:name>tempest-TestNetworkBasicOps-server-1578945059</nova:name>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:54:05</nova:creationTime>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:54:06 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:        <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:        <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:        <nova:port uuid="f67b8436-7ef3-4d35-814c-3d62c9a9fec0">
Oct  2 08:54:06 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <entry name="serial">95fd2a5f-82d9-46eb-b218-cb0a9a4e2765</entry>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <entry name="uuid">95fd2a5f-82d9-46eb-b218-cb0a9a4e2765</entry>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk">
Oct  2 08:54:06 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:54:06 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk.config">
Oct  2 08:54:06 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:54:06 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:49:9e:90"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <target dev="tapf67b8436-7e"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765/console.log" append="off"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:54:06 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:54:06 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:54:06 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:54:06 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.648 2 DEBUG nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Preparing to wait for external event network-vif-plugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.648 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.649 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.649 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.651 2 DEBUG nova.virt.libvirt.vif [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1578945059',display_name='tempest-TestNetworkBasicOps-server-1578945059',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1578945059',id=145,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIapgtoYYA99v/havIqzWJIpGUB3YFecKSBWtCRKGjdMREGYtjf88dqlPzDhmBAjJKkZ8FXg34wGrVA0nOzc0vpbWnm6iI8pilmg2YswkyP+t9hDFvcoygSa2fIr9gTuTw==',key_name='tempest-TestNetworkBasicOps-1376701642',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-o7vn37r9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:54:01Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=95fd2a5f-82d9-46eb-b218-cb0a9a4e2765,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.652 2 DEBUG nova.network.os_vif_util [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.653 2 DEBUG nova.network.os_vif_util [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=f67b8436-7ef3-4d35-814c-3d62c9a9fec0,network=Network(83bbba21-a002-4973-9f29-252bf270271b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67b8436-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.654 2 DEBUG os_vif [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=f67b8436-7ef3-4d35-814c-3d62c9a9fec0,network=Network(83bbba21-a002-4973-9f29-252bf270271b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67b8436-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.656 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.657 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:54:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:06.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.666 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf67b8436-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.667 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf67b8436-7e, col_values=(('external_ids', {'iface-id': 'f67b8436-7ef3-4d35-814c-3d62c9a9fec0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:9e:90', 'vm-uuid': '95fd2a5f-82d9-46eb-b218-cb0a9a4e2765'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:06 np0005466030 NetworkManager[44960]: <info>  [1759409646.6705] manager: (tapf67b8436-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.677 2 INFO os_vif [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=f67b8436-7ef3-4d35-814c-3d62c9a9fec0,network=Network(83bbba21-a002-4973-9f29-252bf270271b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67b8436-7e')#033[00m
Oct  2 08:54:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:54:06 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/651591536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.728 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.758 2 DEBUG nova.storage.rbd_utils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image a1e0932b-16b6-46b9-8192-b89b91e91802_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.764 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:06 np0005466030 podman[287650]: 2025-10-02 12:54:06.811973036 +0000 UTC m=+0.091478511 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:54:06 np0005466030 podman[287653]: 2025-10-02 12:54:06.812947596 +0000 UTC m=+0.078825074 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.852 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.853 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.853 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:49:9e:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.853 2 INFO nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Using config drive#033[00m
Oct  2 08:54:06 np0005466030 nova_compute[230518]: 2025-10-02 12:54:06.878 2 DEBUG nova.storage.rbd_utils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:54:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:54:07 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3023350745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.205 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.207 2 DEBUG nova.virt.libvirt.vif [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1723654799',display_name='tempest-TestNetworkAdvancedServerOps-server-1723654799',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1723654799',id=146,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX6YVDsN9ZX5bxWRi+hOcCI5VJa6Q2nedRNQF3bG+0Pznov2NvoOk008+cPv/dH5+9KDDN9Rpi1O2z1pYZSfJd9pzzfPLrJFsvhHAGAb1dgOP5UShntoHoUWnJ4mGisJQ==',key_name='tempest-TestNetworkAdvancedServerOps-520644426',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hzwwvz6x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:54:02Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=a1e0932b-16b6-46b9-8192-b89b91e91802,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.207 2 DEBUG nova.network.os_vif_util [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.208 2 DEBUG nova.network.os_vif_util [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.209 2 DEBUG nova.objects.instance [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid a1e0932b-16b6-46b9-8192-b89b91e91802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.227 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:54:07 np0005466030 nova_compute[230518]:  <uuid>a1e0932b-16b6-46b9-8192-b89b91e91802</uuid>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:  <name>instance-00000092</name>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1723654799</nova:name>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:54:06</nova:creationTime>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:54:07 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:        <nova:user uuid="47f465d8c8ac44c982f2a2e60ae9eb40">tempest-TestNetworkAdvancedServerOps-1770117619-project-member</nova:user>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:        <nova:project uuid="072925a6aec84a77a9c09ae0c83efdb3">tempest-TestNetworkAdvancedServerOps-1770117619</nova:project>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:        <nova:port uuid="20204810-ff47-450e-80e5-23d03b435455">
Oct  2 08:54:07 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <entry name="serial">a1e0932b-16b6-46b9-8192-b89b91e91802</entry>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <entry name="uuid">a1e0932b-16b6-46b9-8192-b89b91e91802</entry>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/a1e0932b-16b6-46b9-8192-b89b91e91802_disk">
Oct  2 08:54:07 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:54:07 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/a1e0932b-16b6-46b9-8192-b89b91e91802_disk.config">
Oct  2 08:54:07 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:54:07 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:5b:41:1c"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <target dev="tap20204810-ff"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/console.log" append="off"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:54:07 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:54:07 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:54:07 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:54:07 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.228 2 DEBUG nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Preparing to wait for external event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.228 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.228 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.228 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.229 2 DEBUG nova.virt.libvirt.vif [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1723654799',display_name='tempest-TestNetworkAdvancedServerOps-server-1723654799',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1723654799',id=146,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX6YVDsN9ZX5bxWRi+hOcCI5VJa6Q2nedRNQF3bG+0Pznov2NvoOk008+cPv/dH5+9KDDN9Rpi1O2z1pYZSfJd9pzzfPLrJFsvhHAGAb1dgOP5UShntoHoUWnJ4mGisJQ==',key_name='tempest-TestNetworkAdvancedServerOps-520644426',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hzwwvz6x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:54:02Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=a1e0932b-16b6-46b9-8192-b89b91e91802,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.229 2 DEBUG nova.network.os_vif_util [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.229 2 DEBUG nova.network.os_vif_util [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.230 2 DEBUG os_vif [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.231 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.231 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.234 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20204810-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.235 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap20204810-ff, col_values=(('external_ids', {'iface-id': '20204810-ff47-450e-80e5-23d03b435455', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:41:1c', 'vm-uuid': 'a1e0932b-16b6-46b9-8192-b89b91e91802'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:07 np0005466030 NetworkManager[44960]: <info>  [1759409647.2375] manager: (tap20204810-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/279)
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.244 2 INFO os_vif [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff')#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.329 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.330 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.330 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No VIF found with MAC fa:16:3e:5b:41:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.330 2 INFO nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Using config drive#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.355 2 DEBUG nova.storage.rbd_utils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image a1e0932b-16b6-46b9-8192-b89b91e91802_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.573 2 INFO nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Creating config drive at /var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765/disk.config#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.580 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg2ngchn4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.647 2 DEBUG nova.network.neutron [req-0d319b2b-ef45-44a7-95d1-28d3d62d62b4 req-abe9e0bc-2eb4-4ac6-a350-399f548e08aa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updated VIF entry in instance network info cache for port f67b8436-7ef3-4d35-814c-3d62c9a9fec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.648 2 DEBUG nova.network.neutron [req-0d319b2b-ef45-44a7-95d1-28d3d62d62b4 req-abe9e0bc-2eb4-4ac6-a350-399f548e08aa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updating instance_info_cache with network_info: [{"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.663 2 DEBUG oslo_concurrency.lockutils [req-0d319b2b-ef45-44a7-95d1-28d3d62d62b4 req-abe9e0bc-2eb4-4ac6-a350-399f548e08aa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.741 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg2ngchn4" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.774 2 DEBUG nova.storage.rbd_utils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.777 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765/disk.config 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.986 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765/disk.config 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:07 np0005466030 nova_compute[230518]: 2025-10-02 12:54:07.987 2 INFO nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Deleting local config drive /var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765/disk.config because it was imported into RBD.#033[00m
Oct  2 08:54:08 np0005466030 NetworkManager[44960]: <info>  [1759409648.0354] manager: (tapf67b8436-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/280)
Oct  2 08:54:08 np0005466030 kernel: tapf67b8436-7e: entered promiscuous mode
Oct  2 08:54:08 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:08Z|00601|binding|INFO|Claiming lport f67b8436-7ef3-4d35-814c-3d62c9a9fec0 for this chassis.
Oct  2 08:54:08 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:08Z|00602|binding|INFO|f67b8436-7ef3-4d35-814c-3d62c9a9fec0: Claiming fa:16:3e:49:9e:90 10.100.0.5
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.051 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:9e:90 10.100.0.5'], port_security=['fa:16:3e:49:9e:90 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '95fd2a5f-82d9-46eb-b218-cb0a9a4e2765', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83bbba21-a002-4973-9f29-252bf270271b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': '33cdd8ea-2edc-4577-8235-b9f26b2b357e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ddb9b596-76b5-41a0-897b-33f6aa75f8df, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=f67b8436-7ef3-4d35-814c-3d62c9a9fec0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.052 138374 INFO neutron.agent.ovn.metadata.agent [-] Port f67b8436-7ef3-4d35-814c-3d62c9a9fec0 in datapath 83bbba21-a002-4973-9f29-252bf270271b bound to our chassis#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.053 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 83bbba21-a002-4973-9f29-252bf270271b#033[00m
Oct  2 08:54:08 np0005466030 systemd-machined[188247]: New machine qemu-70-instance-00000091.
Oct  2 08:54:08 np0005466030 systemd-udevd[287827]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.068 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b59e0a-c67d-4fc8-a0ae-2135a893e950]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.069 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap83bbba21-a1 in ovnmeta-83bbba21-a002-4973-9f29-252bf270271b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.071 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap83bbba21-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.071 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb6a685-08cf-4f4d-896b-c49ea447f9da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.072 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3e3dcb29-0435-4a24-b2ba-d73bbb742186]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.083 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[b48cc5e2-b2b8-4f36-ba8e-5828c0a71d8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 systemd[1]: Started Virtual Machine qemu-70-instance-00000091.
Oct  2 08:54:08 np0005466030 NetworkManager[44960]: <info>  [1759409648.0850] device (tapf67b8436-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:54:08 np0005466030 NetworkManager[44960]: <info>  [1759409648.0860] device (tapf67b8436-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.102 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c60dbf00-77e0-405a-b44b-a01720cf53ab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:08 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:08Z|00603|binding|INFO|Setting lport f67b8436-7ef3-4d35-814c-3d62c9a9fec0 ovn-installed in OVS
Oct  2 08:54:08 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:08Z|00604|binding|INFO|Setting lport f67b8436-7ef3-4d35-814c-3d62c9a9fec0 up in Southbound
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.139 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f323b6ce-ee4e-4536-92b1-2d2a0eca75d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.143 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1721b1b7-f389-434c-b59f-761cc210087c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 NetworkManager[44960]: <info>  [1759409648.1444] manager: (tap83bbba21-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/281)
Oct  2 08:54:08 np0005466030 systemd-udevd[287830]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.175 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a58a2dd7-54e7-413f-aee5-749c773fa6d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.178 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[52f604d0-e5d8-4070-a5a3-6bc24aa8ee3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 NetworkManager[44960]: <info>  [1759409648.2008] device (tap83bbba21-a0): carrier: link connected
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.206 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[514daa98-2afe-4e3f-ac8f-37f485fb09fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.218 2 INFO nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Creating config drive at /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/disk.config#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.223 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[221c7680-dd27-46ee-8b01-e184c9e1fb34]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap83bbba21-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:c8:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751175, 'reachable_time': 23991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287863, 'error': None, 'target': 'ovnmeta-83bbba21-a002-4973-9f29-252bf270271b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.228 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzc5e4lmv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.244 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f16bf7e3-aff9-467b-8b0c-7af60009b097]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:c86e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751175, 'tstamp': 751175}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287864, 'error': None, 'target': 'ovnmeta-83bbba21-a002-4973-9f29-252bf270271b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.267 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[573846cf-e2c2-476d-8513-f326ec0ce206]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap83bbba21-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:c8:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751175, 'reachable_time': 23991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287866, 'error': None, 'target': 'ovnmeta-83bbba21-a002-4973-9f29-252bf270271b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.294 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[848e978c-6d88-477a-bcdc-0a6e40744b58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.362 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c059ec-ae7d-4b5d-9b3e-6854c9df1d61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.364 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83bbba21-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.364 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.364 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap83bbba21-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:08 np0005466030 NetworkManager[44960]: <info>  [1759409648.3667] manager: (tap83bbba21-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.369 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzc5e4lmv" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:08 np0005466030 kernel: tap83bbba21-a0: entered promiscuous mode
Oct  2 08:54:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.372 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap83bbba21-a0, col_values=(('external_ids', {'iface-id': '3c77c977-5d61-489e-bb52-04238449aff2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:08 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:08Z|00605|binding|INFO|Releasing lport 3c77c977-5d61-489e-bb52-04238449aff2 from this chassis (sb_readonly=0)
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.396 2 DEBUG nova.storage.rbd_utils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image a1e0932b-16b6-46b9-8192-b89b91e91802_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.402 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/83bbba21-a002-4973-9f29-252bf270271b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/83bbba21-a002-4973-9f29-252bf270271b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.404 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9e0d55eb-3de2-4d5f-8618-b792c1d57a58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.404 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-83bbba21-a002-4973-9f29-252bf270271b
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/83bbba21-a002-4973-9f29-252bf270271b.pid.haproxy
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 83bbba21-a002-4973-9f29-252bf270271b
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.404 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/disk.config a1e0932b-16b6-46b9-8192-b89b91e91802_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.407 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-83bbba21-a002-4973-9f29-252bf270271b', 'env', 'PROCESS_TAG=haproxy-83bbba21-a002-4973-9f29-252bf270271b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/83bbba21-a002-4973-9f29-252bf270271b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.443 2 DEBUG nova.compute.manager [req-480fdfbd-bb2a-4b82-b657-04c7e540086b req-ea6d5c4f-12e3-4c5b-aae1-22f90710aeea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received event network-vif-plugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.444 2 DEBUG oslo_concurrency.lockutils [req-480fdfbd-bb2a-4b82-b657-04c7e540086b req-ea6d5c4f-12e3-4c5b-aae1-22f90710aeea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.444 2 DEBUG oslo_concurrency.lockutils [req-480fdfbd-bb2a-4b82-b657-04c7e540086b req-ea6d5c4f-12e3-4c5b-aae1-22f90710aeea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.444 2 DEBUG oslo_concurrency.lockutils [req-480fdfbd-bb2a-4b82-b657-04c7e540086b req-ea6d5c4f-12e3-4c5b-aae1-22f90710aeea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.444 2 DEBUG nova.compute.manager [req-480fdfbd-bb2a-4b82-b657-04c7e540086b req-ea6d5c4f-12e3-4c5b-aae1-22f90710aeea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Processing event network-vif-plugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:54:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:08.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.516 2 DEBUG nova.network.neutron [req-a977c721-5647-4a39-8125-0a6c33f22d22 req-7c4e72f9-23b2-4e62-9f51-7012bd7278a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updated VIF entry in instance network info cache for port 20204810-ff47-450e-80e5-23d03b435455. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.516 2 DEBUG nova.network.neutron [req-a977c721-5647-4a39-8125-0a6c33f22d22 req-7c4e72f9-23b2-4e62-9f51-7012bd7278a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.536 2 DEBUG oslo_concurrency.lockutils [req-a977c721-5647-4a39-8125-0a6c33f22d22 req-7c4e72f9-23b2-4e62-9f51-7012bd7278a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:54:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:54:08 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2532031233' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:54:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:54:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:08.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.678 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/disk.config a1e0932b-16b6-46b9-8192-b89b91e91802_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.678 2 INFO nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Deleting local config drive /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/disk.config because it was imported into RBD.#033[00m
Oct  2 08:54:08 np0005466030 kernel: tap20204810-ff: entered promiscuous mode
Oct  2 08:54:08 np0005466030 NetworkManager[44960]: <info>  [1759409648.7362] manager: (tap20204810-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/283)
Oct  2 08:54:08 np0005466030 systemd-udevd[287850]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:54:08 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:08Z|00606|binding|INFO|Claiming lport 20204810-ff47-450e-80e5-23d03b435455 for this chassis.
Oct  2 08:54:08 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:08Z|00607|binding|INFO|20204810-ff47-450e-80e5-23d03b435455: Claiming fa:16:3e:5b:41:1c 10.100.0.7
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.750 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:41:1c 10.100.0.7'], port_security=['fa:16:3e:5b:41:1c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a1e0932b-16b6-46b9-8192-b89b91e91802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a39243cb-5286-4429-8879-7b4d535de128', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd1b59c5d-0681-456e-a8d1-b3629344f9b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf84aebf-21d8-4569-891a-417406561224, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=20204810-ff47-450e-80e5-23d03b435455) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:54:08 np0005466030 NetworkManager[44960]: <info>  [1759409648.7569] device (tap20204810-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:54:08 np0005466030 NetworkManager[44960]: <info>  [1759409648.7576] device (tap20204810-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:54:08 np0005466030 podman[287944]: 2025-10-02 12:54:08.782223859 +0000 UTC m=+0.059524759 container create dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:54:08 np0005466030 systemd-machined[188247]: New machine qemu-71-instance-00000092.
Oct  2 08:54:08 np0005466030 systemd[1]: Started Virtual Machine qemu-71-instance-00000092.
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:08 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:08Z|00608|binding|INFO|Setting lport 20204810-ff47-450e-80e5-23d03b435455 ovn-installed in OVS
Oct  2 08:54:08 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:08Z|00609|binding|INFO|Setting lport 20204810-ff47-450e-80e5-23d03b435455 up in Southbound
Oct  2 08:54:08 np0005466030 nova_compute[230518]: 2025-10-02 12:54:08.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:08 np0005466030 systemd[1]: Started libpod-conmon-dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f.scope.
Oct  2 08:54:08 np0005466030 podman[287944]: 2025-10-02 12:54:08.748064656 +0000 UTC m=+0.025365576 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:54:08 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:54:08 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2b6de8b1fd368c961dc50bb0264389c886212a783acf9ddb0316ed806440300/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:54:08 np0005466030 podman[287944]: 2025-10-02 12:54:08.863348084 +0000 UTC m=+0.140649004 container init dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:54:08 np0005466030 podman[287944]: 2025-10-02 12:54:08.868737892 +0000 UTC m=+0.146038792 container start dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  2 08:54:08 np0005466030 neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b[287969]: [NOTICE]   (287986) : New worker (287999) forked
Oct  2 08:54:08 np0005466030 neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b[287969]: [NOTICE]   (287986) : Loading success.
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.933 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 20204810-ff47-450e-80e5-23d03b435455 in datapath a39243cb-5286-4429-8879-7b4d535de128 unbound from our chassis#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.934 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a39243cb-5286-4429-8879-7b4d535de128#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.944 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3a07b690-f4b5-4f5c-8114-f0bc41126902]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.944 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa39243cb-51 in ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.946 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa39243cb-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.946 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d35c3092-50bf-4853-87ec-d2db86198c08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.947 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a4999c-0336-4701-a3d4-a4cec884be04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.961 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[82bd8975-793f-46a9-a0ac-a59d39551210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.985 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[591d4651-ed02-4a3e-b141-9e2f03320128]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.014 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b051a576-abff-4cbe-801f-9719f4a5fb4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.022 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c4c895-03b1-4108-9dd6-b72e07db2b92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:09 np0005466030 NetworkManager[44960]: <info>  [1759409649.0233] manager: (tapa39243cb-50): new Veth device (/org/freedesktop/NetworkManager/Devices/284)
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.054 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[224aadee-e422-4805-9519-e8e90c5124b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.057 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[278d9607-71d3-4e0b-a41e-4b65b6b03eb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.078 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.078 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:09 np0005466030 NetworkManager[44960]: <info>  [1759409649.0891] device (tapa39243cb-50): carrier: link connected
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.094 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5060ca89-19e5-46bb-a6ea-8154edf23a4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.118 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[47e81210-97c8-4ec5-a3bb-276f593c6934]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa39243cb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:6a:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751264, 'reachable_time': 41314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288043, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.133 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d971a80e-4f9f-4ba6-a192-53c0e555f999]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:6ac8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751264, 'tstamp': 751264}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288044, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.153 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d44b8436-822e-4dd7-96b5-4b9974d880e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa39243cb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:6a:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751264, 'reachable_time': 41314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288046, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.187 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[72cd5559-6529-45cc-8617-f2c06d0577d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.222 2 DEBUG nova.compute.manager [req-3cb99851-96e3-44ea-b271-d7ba64f161d6 req-c7f9f926-2aad-418a-8b4e-3cc6e17385f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.223 2 DEBUG oslo_concurrency.lockutils [req-3cb99851-96e3-44ea-b271-d7ba64f161d6 req-c7f9f926-2aad-418a-8b4e-3cc6e17385f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.223 2 DEBUG oslo_concurrency.lockutils [req-3cb99851-96e3-44ea-b271-d7ba64f161d6 req-c7f9f926-2aad-418a-8b4e-3cc6e17385f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.223 2 DEBUG oslo_concurrency.lockutils [req-3cb99851-96e3-44ea-b271-d7ba64f161d6 req-c7f9f926-2aad-418a-8b4e-3cc6e17385f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.224 2 DEBUG nova.compute.manager [req-3cb99851-96e3-44ea-b271-d7ba64f161d6 req-c7f9f926-2aad-418a-8b4e-3cc6e17385f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Processing event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.259 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6828ab1f-9748-4b4f-869c-ba0085f7f370]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.260 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa39243cb-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.261 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.261 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa39243cb-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:09 np0005466030 kernel: tapa39243cb-50: entered promiscuous mode
Oct  2 08:54:09 np0005466030 NetworkManager[44960]: <info>  [1759409649.2637] manager: (tapa39243cb-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/285)
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.265 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa39243cb-50, col_values=(('external_ids', {'iface-id': '75b1d4a5-2f3f-44a0-a22e-e6c15fda36d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:09 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:09Z|00610|binding|INFO|Releasing lport 75b1d4a5-2f3f-44a0-a22e-e6c15fda36d1 from this chassis (sb_readonly=0)
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.285 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a39243cb-5286-4429-8879-7b4d535de128.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a39243cb-5286-4429-8879-7b4d535de128.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.286 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d034edaf-0199-4c2e-9ac5-3542d8c584a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.287 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-a39243cb-5286-4429-8879-7b4d535de128
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/a39243cb-5286-4429-8879-7b4d535de128.pid.haproxy
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID a39243cb-5286-4429-8879-7b4d535de128
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:54:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.289 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'env', 'PROCESS_TAG=haproxy-a39243cb-5286-4429-8879-7b4d535de128', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a39243cb-5286-4429-8879-7b4d535de128.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.446 2 DEBUG nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.448 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409649.4453878, 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.449 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] VM Started (Lifecycle Event)#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.452 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.457 2 INFO nova.virt.libvirt.driver [-] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Instance spawned successfully.#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.457 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.475 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.480 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.485 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.486 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.487 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.487 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.488 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.489 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:54:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:54:09 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/498936511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.514 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.515 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409649.4471543, 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.515 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.520 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.541 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.548 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409649.4538093, 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.549 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.556 2 INFO nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Took 7.97 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.556 2 DEBUG nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.571 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.575 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.642 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.653 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.654 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:54:09 np0005466030 podman[288140]: 2025-10-02 12:54:09.654674249 +0000 UTC m=+0.052343002 container create b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.665 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.665 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.692 2 INFO nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Took 9.11 seconds to build instance.#033[00m
Oct  2 08:54:09 np0005466030 systemd[1]: Started libpod-conmon-b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699.scope.
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.714 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:09 np0005466030 podman[288140]: 2025-10-02 12:54:09.629051805 +0000 UTC m=+0.026720588 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:54:09 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:54:09 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134e48fef8ee40dfc4155c6ec896c104676820b240dcee32a40405e7898ee537/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.732 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409649.7319562, a1e0932b-16b6-46b9-8192-b89b91e91802 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.732 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] VM Started (Lifecycle Event)#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.734 2 DEBUG nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:54:09 np0005466030 podman[288140]: 2025-10-02 12:54:09.73787633 +0000 UTC m=+0.135545093 container init b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.739 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.742 2 INFO nova.virt.libvirt.driver [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance spawned successfully.#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.743 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:54:09 np0005466030 podman[288140]: 2025-10-02 12:54:09.746546992 +0000 UTC m=+0.144215745 container start b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.768 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.774 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:54:09 np0005466030 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288155]: [NOTICE]   (288159) : New worker (288161) forked
Oct  2 08:54:09 np0005466030 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288155]: [NOTICE]   (288159) : Loading success.
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.782 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.782 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.783 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.783 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.784 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.784 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.816 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.817 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409649.7344809, a1e0932b-16b6-46b9-8192-b89b91e91802 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.817 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.871 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.873 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409649.7373836, a1e0932b-16b6-46b9-8192-b89b91e91802 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.873 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.878 2 INFO nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Took 7.66 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.878 2 DEBUG nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.888 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.889 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4272MB free_disk=20.90142822265625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.889 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.890 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.891 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.895 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.921 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.970 2 INFO nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Took 9.04 seconds to build instance.#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.988 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.989 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance a1e0932b-16b6-46b9-8192-b89b91e91802 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.989 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.990 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:54:09 np0005466030 nova_compute[230518]: 2025-10-02 12:54:09.994 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:10 np0005466030 nova_compute[230518]: 2025-10-02 12:54:10.037 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:54:10 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2087824584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:54:10 np0005466030 nova_compute[230518]: 2025-10-02 12:54:10.484 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:10 np0005466030 nova_compute[230518]: 2025-10-02 12:54:10.490 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:54:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:10.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:10 np0005466030 nova_compute[230518]: 2025-10-02 12:54:10.507 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:54:10 np0005466030 nova_compute[230518]: 2025-10-02 12:54:10.552 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:54:10 np0005466030 nova_compute[230518]: 2025-10-02 12:54:10.553 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:10 np0005466030 nova_compute[230518]: 2025-10-02 12:54:10.595 2 DEBUG nova.compute.manager [req-ecb9aa0d-d604-4493-a2ed-ce7d7bbe3483 req-4b94c397-6b6d-42b6-9b6d-8d738b401154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received event network-vif-plugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:10 np0005466030 nova_compute[230518]: 2025-10-02 12:54:10.596 2 DEBUG oslo_concurrency.lockutils [req-ecb9aa0d-d604-4493-a2ed-ce7d7bbe3483 req-4b94c397-6b6d-42b6-9b6d-8d738b401154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:10 np0005466030 nova_compute[230518]: 2025-10-02 12:54:10.597 2 DEBUG oslo_concurrency.lockutils [req-ecb9aa0d-d604-4493-a2ed-ce7d7bbe3483 req-4b94c397-6b6d-42b6-9b6d-8d738b401154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:10 np0005466030 nova_compute[230518]: 2025-10-02 12:54:10.597 2 DEBUG oslo_concurrency.lockutils [req-ecb9aa0d-d604-4493-a2ed-ce7d7bbe3483 req-4b94c397-6b6d-42b6-9b6d-8d738b401154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:10 np0005466030 nova_compute[230518]: 2025-10-02 12:54:10.597 2 DEBUG nova.compute.manager [req-ecb9aa0d-d604-4493-a2ed-ce7d7bbe3483 req-4b94c397-6b6d-42b6-9b6d-8d738b401154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] No waiting events found dispatching network-vif-plugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:10 np0005466030 nova_compute[230518]: 2025-10-02 12:54:10.598 2 WARNING nova.compute.manager [req-ecb9aa0d-d604-4493-a2ed-ce7d7bbe3483 req-4b94c397-6b6d-42b6-9b6d-8d738b401154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received unexpected event network-vif-plugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:54:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:10.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:11 np0005466030 nova_compute[230518]: 2025-10-02 12:54:11.417 2 DEBUG nova.compute.manager [req-e6575dc1-a797-4056-aba6-aa12c461f9fc req-02016bfc-b194-489f-b4f4-70da90e1d592 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:11 np0005466030 nova_compute[230518]: 2025-10-02 12:54:11.418 2 DEBUG oslo_concurrency.lockutils [req-e6575dc1-a797-4056-aba6-aa12c461f9fc req-02016bfc-b194-489f-b4f4-70da90e1d592 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:11 np0005466030 nova_compute[230518]: 2025-10-02 12:54:11.419 2 DEBUG oslo_concurrency.lockutils [req-e6575dc1-a797-4056-aba6-aa12c461f9fc req-02016bfc-b194-489f-b4f4-70da90e1d592 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:11 np0005466030 nova_compute[230518]: 2025-10-02 12:54:11.419 2 DEBUG oslo_concurrency.lockutils [req-e6575dc1-a797-4056-aba6-aa12c461f9fc req-02016bfc-b194-489f-b4f4-70da90e1d592 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:11 np0005466030 nova_compute[230518]: 2025-10-02 12:54:11.420 2 DEBUG nova.compute.manager [req-e6575dc1-a797-4056-aba6-aa12c461f9fc req-02016bfc-b194-489f-b4f4-70da90e1d592 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:11 np0005466030 nova_compute[230518]: 2025-10-02 12:54:11.420 2 WARNING nova.compute.manager [req-e6575dc1-a797-4056-aba6-aa12c461f9fc req-02016bfc-b194-489f-b4f4-70da90e1d592 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:54:12 np0005466030 nova_compute[230518]: 2025-10-02 12:54:12.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:12.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:12 np0005466030 nova_compute[230518]: 2025-10-02 12:54:12.553 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:12 np0005466030 nova_compute[230518]: 2025-10-02 12:54:12.553 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:12 np0005466030 nova_compute[230518]: 2025-10-02 12:54:12.553 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:12 np0005466030 nova_compute[230518]: 2025-10-02 12:54:12.553 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:54:12 np0005466030 NetworkManager[44960]: <info>  [1759409652.5749] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Oct  2 08:54:12 np0005466030 NetworkManager[44960]: <info>  [1759409652.5759] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Oct  2 08:54:12 np0005466030 nova_compute[230518]: 2025-10-02 12:54:12.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:12 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:12Z|00611|binding|INFO|Releasing lport 3c77c977-5d61-489e-bb52-04238449aff2 from this chassis (sb_readonly=0)
Oct  2 08:54:12 np0005466030 nova_compute[230518]: 2025-10-02 12:54:12.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:12.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:12 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:12Z|00612|binding|INFO|Releasing lport 75b1d4a5-2f3f-44a0-a22e-e6c15fda36d1 from this chassis (sb_readonly=0)
Oct  2 08:54:12 np0005466030 nova_compute[230518]: 2025-10-02 12:54:12.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:12 np0005466030 nova_compute[230518]: 2025-10-02 12:54:12.958 2 DEBUG nova.compute.manager [req-f7f2550d-2171-47ea-bac9-c30c03980f97 req-4c8bea1b-f2b4-43a0-a2d0-80136a976152 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received event network-changed-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:12 np0005466030 nova_compute[230518]: 2025-10-02 12:54:12.958 2 DEBUG nova.compute.manager [req-f7f2550d-2171-47ea-bac9-c30c03980f97 req-4c8bea1b-f2b4-43a0-a2d0-80136a976152 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Refreshing instance network info cache due to event network-changed-f67b8436-7ef3-4d35-814c-3d62c9a9fec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:54:12 np0005466030 nova_compute[230518]: 2025-10-02 12:54:12.959 2 DEBUG oslo_concurrency.lockutils [req-f7f2550d-2171-47ea-bac9-c30c03980f97 req-4c8bea1b-f2b4-43a0-a2d0-80136a976152 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:54:12 np0005466030 nova_compute[230518]: 2025-10-02 12:54:12.959 2 DEBUG oslo_concurrency.lockutils [req-f7f2550d-2171-47ea-bac9-c30c03980f97 req-4c8bea1b-f2b4-43a0-a2d0-80136a976152 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:54:12 np0005466030 nova_compute[230518]: 2025-10-02 12:54:12.959 2 DEBUG nova.network.neutron [req-f7f2550d-2171-47ea-bac9-c30c03980f97 req-4c8bea1b-f2b4-43a0-a2d0-80136a976152 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Refreshing network info cache for port f67b8436-7ef3-4d35-814c-3d62c9a9fec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:54:13 np0005466030 nova_compute[230518]: 2025-10-02 12:54:13.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:14 np0005466030 nova_compute[230518]: 2025-10-02 12:54:14.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:14.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:54:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:14.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:54:16 np0005466030 nova_compute[230518]: 2025-10-02 12:54:16.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:16 np0005466030 nova_compute[230518]: 2025-10-02 12:54:16.336 2 DEBUG nova.compute.manager [req-aa98038b-7f5d-45ff-9a83-6818c3109107 req-8f6ccd22-4233-4154-ae79-cb32a4998540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-changed-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:16 np0005466030 nova_compute[230518]: 2025-10-02 12:54:16.337 2 DEBUG nova.compute.manager [req-aa98038b-7f5d-45ff-9a83-6818c3109107 req-8f6ccd22-4233-4154-ae79-cb32a4998540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing instance network info cache due to event network-changed-20204810-ff47-450e-80e5-23d03b435455. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:54:16 np0005466030 nova_compute[230518]: 2025-10-02 12:54:16.337 2 DEBUG oslo_concurrency.lockutils [req-aa98038b-7f5d-45ff-9a83-6818c3109107 req-8f6ccd22-4233-4154-ae79-cb32a4998540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:54:16 np0005466030 nova_compute[230518]: 2025-10-02 12:54:16.337 2 DEBUG oslo_concurrency.lockutils [req-aa98038b-7f5d-45ff-9a83-6818c3109107 req-8f6ccd22-4233-4154-ae79-cb32a4998540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:54:16 np0005466030 nova_compute[230518]: 2025-10-02 12:54:16.337 2 DEBUG nova.network.neutron [req-aa98038b-7f5d-45ff-9a83-6818c3109107 req-8f6ccd22-4233-4154-ae79-cb32a4998540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing network info cache for port 20204810-ff47-450e-80e5-23d03b435455 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:54:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:16.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:16.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:17 np0005466030 nova_compute[230518]: 2025-10-02 12:54:17.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:17 np0005466030 nova_compute[230518]: 2025-10-02 12:54:17.674 2 DEBUG nova.network.neutron [req-f7f2550d-2171-47ea-bac9-c30c03980f97 req-4c8bea1b-f2b4-43a0-a2d0-80136a976152 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updated VIF entry in instance network info cache for port f67b8436-7ef3-4d35-814c-3d62c9a9fec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:54:17 np0005466030 nova_compute[230518]: 2025-10-02 12:54:17.674 2 DEBUG nova.network.neutron [req-f7f2550d-2171-47ea-bac9-c30c03980f97 req-4c8bea1b-f2b4-43a0-a2d0-80136a976152 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updating instance_info_cache with network_info: [{"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:17 np0005466030 nova_compute[230518]: 2025-10-02 12:54:17.700 2 DEBUG oslo_concurrency.lockutils [req-f7f2550d-2171-47ea-bac9-c30c03980f97 req-4c8bea1b-f2b4-43a0-a2d0-80136a976152 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:54:18 np0005466030 nova_compute[230518]: 2025-10-02 12:54:18.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:18.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:18.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:18 np0005466030 nova_compute[230518]: 2025-10-02 12:54:18.726 2 DEBUG nova.network.neutron [req-aa98038b-7f5d-45ff-9a83-6818c3109107 req-8f6ccd22-4233-4154-ae79-cb32a4998540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updated VIF entry in instance network info cache for port 20204810-ff47-450e-80e5-23d03b435455. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:54:18 np0005466030 nova_compute[230518]: 2025-10-02 12:54:18.726 2 DEBUG nova.network.neutron [req-aa98038b-7f5d-45ff-9a83-6818c3109107 req-8f6ccd22-4233-4154-ae79-cb32a4998540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:18 np0005466030 nova_compute[230518]: 2025-10-02 12:54:18.756 2 DEBUG oslo_concurrency.lockutils [req-aa98038b-7f5d-45ff-9a83-6818c3109107 req-8f6ccd22-4233-4154-ae79-cb32a4998540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:54:19 np0005466030 nova_compute[230518]: 2025-10-02 12:54:19.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:20 np0005466030 nova_compute[230518]: 2025-10-02 12:54:20.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:20.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:20.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:21 np0005466030 nova_compute[230518]: 2025-10-02 12:54:21.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:21 np0005466030 nova_compute[230518]: 2025-10-02 12:54:21.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:54:21 np0005466030 nova_compute[230518]: 2025-10-02 12:54:21.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:54:21 np0005466030 nova_compute[230518]: 2025-10-02 12:54:21.189 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:54:21 np0005466030 nova_compute[230518]: 2025-10-02 12:54:21.190 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:54:21 np0005466030 nova_compute[230518]: 2025-10-02 12:54:21.190 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:54:21 np0005466030 nova_compute[230518]: 2025-10-02 12:54:21.190 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:54:22 np0005466030 nova_compute[230518]: 2025-10-02 12:54:22.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:22 np0005466030 nova_compute[230518]: 2025-10-02 12:54:22.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:22.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:22.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:22 np0005466030 nova_compute[230518]: 2025-10-02 12:54:22.971 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updating instance_info_cache with network_info: [{"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:22 np0005466030 nova_compute[230518]: 2025-10-02 12:54:22.993 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:54:22 np0005466030 nova_compute[230518]: 2025-10-02 12:54:22.993 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:54:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:23 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:54:23 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:54:23 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:54:24 np0005466030 nova_compute[230518]: 2025-10-02 12:54:24.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:24 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct  2 08:54:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:24.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:24.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:25 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:25Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:49:9e:90 10.100.0.5
Oct  2 08:54:25 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:25Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:9e:90 10.100.0.5
Oct  2 08:54:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:25.951 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:25.952 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:25 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:25Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5b:41:1c 10.100.0.7
Oct  2 08:54:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:25.953 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:25 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:25Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:41:1c 10.100.0.7
Oct  2 08:54:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:26.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:54:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:26.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:54:26 np0005466030 nova_compute[230518]: 2025-10-02 12:54:26.988 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:54:27 np0005466030 nova_compute[230518]: 2025-10-02 12:54:27.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:27 np0005466030 podman[288325]: 2025-10-02 12:54:27.80602397 +0000 UTC m=+0.048606426 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:54:27 np0005466030 podman[288324]: 2025-10-02 12:54:27.836462744 +0000 UTC m=+0.084276234 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct  2 08:54:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:28.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:54:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:28.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:54:29 np0005466030 nova_compute[230518]: 2025-10-02 12:54:29.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:30.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:30.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:31 np0005466030 nova_compute[230518]: 2025-10-02 12:54:31.657 2 INFO nova.compute.manager [None req-60d8673b-4135-4183-a6b7-61d918ee6dc3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Get console output#033[00m
Oct  2 08:54:31 np0005466030 nova_compute[230518]: 2025-10-02 12:54:31.664 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 08:54:32 np0005466030 nova_compute[230518]: 2025-10-02 12:54:32.121 2 INFO nova.compute.manager [None req-548e69c7-4b4a-4f8b-90e5-53d38bf68e87 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Get console output#033[00m
Oct  2 08:54:32 np0005466030 nova_compute[230518]: 2025-10-02 12:54:32.126 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 08:54:32 np0005466030 nova_compute[230518]: 2025-10-02 12:54:32.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:32 np0005466030 nova_compute[230518]: 2025-10-02 12:54:32.485 2 INFO nova.compute.manager [None req-dec6a418-f356-4db7-ab51-369eff8a0178 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Get console output#033[00m
Oct  2 08:54:32 np0005466030 nova_compute[230518]: 2025-10-02 12:54:32.491 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 08:54:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:32.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:32.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:32 np0005466030 nova_compute[230518]: 2025-10-02 12:54:32.964 2 DEBUG oslo_concurrency.lockutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:32 np0005466030 nova_compute[230518]: 2025-10-02 12:54:32.965 2 DEBUG oslo_concurrency.lockutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:32 np0005466030 nova_compute[230518]: 2025-10-02 12:54:32.965 2 DEBUG oslo_concurrency.lockutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:32 np0005466030 nova_compute[230518]: 2025-10-02 12:54:32.966 2 DEBUG oslo_concurrency.lockutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:32 np0005466030 nova_compute[230518]: 2025-10-02 12:54:32.966 2 DEBUG oslo_concurrency.lockutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:32 np0005466030 nova_compute[230518]: 2025-10-02 12:54:32.967 2 INFO nova.compute.manager [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Terminating instance#033[00m
Oct  2 08:54:32 np0005466030 nova_compute[230518]: 2025-10-02 12:54:32.968 2 DEBUG nova.compute.manager [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:54:33 np0005466030 kernel: tapf67b8436-7e (unregistering): left promiscuous mode
Oct  2 08:54:33 np0005466030 NetworkManager[44960]: <info>  [1759409673.0304] device (tapf67b8436-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.035 2 DEBUG nova.compute.manager [req-0b4a0d23-a5c0-4c59-9a5c-3cf3e5f4665e req-3806805c-7ded-4520-874c-6c27082504d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received event network-changed-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.035 2 DEBUG nova.compute.manager [req-0b4a0d23-a5c0-4c59-9a5c-3cf3e5f4665e req-3806805c-7ded-4520-874c-6c27082504d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Refreshing instance network info cache due to event network-changed-f67b8436-7ef3-4d35-814c-3d62c9a9fec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.035 2 DEBUG oslo_concurrency.lockutils [req-0b4a0d23-a5c0-4c59-9a5c-3cf3e5f4665e req-3806805c-7ded-4520-874c-6c27082504d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.036 2 DEBUG oslo_concurrency.lockutils [req-0b4a0d23-a5c0-4c59-9a5c-3cf3e5f4665e req-3806805c-7ded-4520-874c-6c27082504d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.036 2 DEBUG nova.network.neutron [req-0b4a0d23-a5c0-4c59-9a5c-3cf3e5f4665e req-3806805c-7ded-4520-874c-6c27082504d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Refreshing network info cache for port f67b8436-7ef3-4d35-814c-3d62c9a9fec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:33 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:33Z|00613|binding|INFO|Releasing lport f67b8436-7ef3-4d35-814c-3d62c9a9fec0 from this chassis (sb_readonly=0)
Oct  2 08:54:33 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:33Z|00614|binding|INFO|Setting lport f67b8436-7ef3-4d35-814c-3d62c9a9fec0 down in Southbound
Oct  2 08:54:33 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:33Z|00615|binding|INFO|Removing iface tapf67b8436-7e ovn-installed in OVS
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.051 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:9e:90 10.100.0.5'], port_security=['fa:16:3e:49:9e:90 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '95fd2a5f-82d9-46eb-b218-cb0a9a4e2765', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83bbba21-a002-4973-9f29-252bf270271b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': '33cdd8ea-2edc-4577-8235-b9f26b2b357e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ddb9b596-76b5-41a0-897b-33f6aa75f8df, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=f67b8436-7ef3-4d35-814c-3d62c9a9fec0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:54:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.053 138374 INFO neutron.agent.ovn.metadata.agent [-] Port f67b8436-7ef3-4d35-814c-3d62c9a9fec0 in datapath 83bbba21-a002-4973-9f29-252bf270271b unbound from our chassis#033[00m
Oct  2 08:54:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.054 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 83bbba21-a002-4973-9f29-252bf270271b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:54:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.055 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d34a2138-06e7-441d-9456-8e03d31784d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.056 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-83bbba21-a002-4973-9f29-252bf270271b namespace which is not needed anymore#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:33 np0005466030 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000091.scope: Deactivated successfully.
Oct  2 08:54:33 np0005466030 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000091.scope: Consumed 14.386s CPU time.
Oct  2 08:54:33 np0005466030 systemd-machined[188247]: Machine qemu-70-instance-00000091 terminated.
Oct  2 08:54:33 np0005466030 neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b[287969]: [NOTICE]   (287986) : haproxy version is 2.8.14-c23fe91
Oct  2 08:54:33 np0005466030 neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b[287969]: [NOTICE]   (287986) : path to executable is /usr/sbin/haproxy
Oct  2 08:54:33 np0005466030 neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b[287969]: [WARNING]  (287986) : Exiting Master process...
Oct  2 08:54:33 np0005466030 neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b[287969]: [ALERT]    (287986) : Current worker (287999) exited with code 143 (Terminated)
Oct  2 08:54:33 np0005466030 neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b[287969]: [WARNING]  (287986) : All workers exited. Exiting... (0)
Oct  2 08:54:33 np0005466030 systemd[1]: libpod-dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f.scope: Deactivated successfully.
Oct  2 08:54:33 np0005466030 podman[288440]: 2025-10-02 12:54:33.196453514 +0000 UTC m=+0.047214072 container died dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.212 2 INFO nova.virt.libvirt.driver [-] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Instance destroyed successfully.#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.213 2 DEBUG nova.objects.instance [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:54:33 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f-userdata-shm.mount: Deactivated successfully.
Oct  2 08:54:33 np0005466030 systemd[1]: var-lib-containers-storage-overlay-b2b6de8b1fd368c961dc50bb0264389c886212a783acf9ddb0316ed806440300-merged.mount: Deactivated successfully.
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.226 2 DEBUG nova.virt.libvirt.vif [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1578945059',display_name='tempest-TestNetworkBasicOps-server-1578945059',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1578945059',id=145,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIapgtoYYA99v/havIqzWJIpGUB3YFecKSBWtCRKGjdMREGYtjf88dqlPzDhmBAjJKkZ8FXg34wGrVA0nOzc0vpbWnm6iI8pilmg2YswkyP+t9hDFvcoygSa2fIr9gTuTw==',key_name='tempest-TestNetworkBasicOps-1376701642',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:54:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-o7vn37r9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:54:09Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=95fd2a5f-82d9-46eb-b218-cb0a9a4e2765,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.226 2 DEBUG nova.network.os_vif_util [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.227 2 DEBUG nova.network.os_vif_util [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=f67b8436-7ef3-4d35-814c-3d62c9a9fec0,network=Network(83bbba21-a002-4973-9f29-252bf270271b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67b8436-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.227 2 DEBUG os_vif [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=f67b8436-7ef3-4d35-814c-3d62c9a9fec0,network=Network(83bbba21-a002-4973-9f29-252bf270271b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67b8436-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.232 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf67b8436-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.237 2 INFO os_vif [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=f67b8436-7ef3-4d35-814c-3d62c9a9fec0,network=Network(83bbba21-a002-4973-9f29-252bf270271b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67b8436-7e')#033[00m
Oct  2 08:54:33 np0005466030 podman[288440]: 2025-10-02 12:54:33.241002851 +0000 UTC m=+0.091763409 container cleanup dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:54:33 np0005466030 systemd[1]: libpod-conmon-dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f.scope: Deactivated successfully.
Oct  2 08:54:33 np0005466030 podman[288496]: 2025-10-02 12:54:33.319000549 +0000 UTC m=+0.047956836 container remove dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:54:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.327 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[48d915f8-cbac-44da-85af-39c360029aee]: (4, ('Thu Oct  2 12:54:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b (dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f)\ndcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f\nThu Oct  2 12:54:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b (dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f)\ndcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.330 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0c9ac330-c7f0-4fc7-9d97-ae573a375a45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.331 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83bbba21-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:33 np0005466030 kernel: tap83bbba21-a0: left promiscuous mode
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.352 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[22393f4c-e31f-420d-bdc4-d21667605fec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.385 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f89790-d461-4a9a-a74a-ec14c85a00d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.387 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ddbbc5-0894-4d31-b3eb-23ac22aa6096]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.410 2 DEBUG nova.compute.manager [req-4e741438-5606-4032-8e9c-47237d81d16a req-036c2059-88e6-41fb-8352-51056f412844 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received event network-vif-unplugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.411 2 DEBUG oslo_concurrency.lockutils [req-4e741438-5606-4032-8e9c-47237d81d16a req-036c2059-88e6-41fb-8352-51056f412844 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.411 2 DEBUG oslo_concurrency.lockutils [req-4e741438-5606-4032-8e9c-47237d81d16a req-036c2059-88e6-41fb-8352-51056f412844 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.411 2 DEBUG oslo_concurrency.lockutils [req-4e741438-5606-4032-8e9c-47237d81d16a req-036c2059-88e6-41fb-8352-51056f412844 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.411 2 DEBUG nova.compute.manager [req-4e741438-5606-4032-8e9c-47237d81d16a req-036c2059-88e6-41fb-8352-51056f412844 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] No waiting events found dispatching network-vif-unplugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:33 np0005466030 nova_compute[230518]: 2025-10-02 12:54:33.411 2 DEBUG nova.compute.manager [req-4e741438-5606-4032-8e9c-47237d81d16a req-036c2059-88e6-41fb-8352-51056f412844 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received event network-vif-unplugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:54:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.411 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0b07b7-de7a-48e5-9a95-c0a834995b2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751169, 'reachable_time': 32152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288514, 'error': None, 'target': 'ovnmeta-83bbba21-a002-4973-9f29-252bf270271b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.416 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-83bbba21-a002-4973-9f29-252bf270271b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:54:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.416 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc86ec4-e8de-4e10-9089-960c0720f8d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:33 np0005466030 systemd[1]: run-netns-ovnmeta\x2d83bbba21\x2da002\x2d4973\x2d9f29\x2d252bf270271b.mount: Deactivated successfully.
Oct  2 08:54:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:54:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:54:34 np0005466030 nova_compute[230518]: 2025-10-02 12:54:34.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:34 np0005466030 nova_compute[230518]: 2025-10-02 12:54:34.454 2 INFO nova.virt.libvirt.driver [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Deleting instance files /var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_del#033[00m
Oct  2 08:54:34 np0005466030 nova_compute[230518]: 2025-10-02 12:54:34.455 2 INFO nova.virt.libvirt.driver [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Deletion of /var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_del complete#033[00m
Oct  2 08:54:34 np0005466030 nova_compute[230518]: 2025-10-02 12:54:34.511 2 INFO nova.compute.manager [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Took 1.54 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:54:34 np0005466030 nova_compute[230518]: 2025-10-02 12:54:34.512 2 DEBUG oslo.service.loopingcall [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:54:34 np0005466030 nova_compute[230518]: 2025-10-02 12:54:34.512 2 DEBUG nova.compute.manager [-] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:54:34 np0005466030 nova_compute[230518]: 2025-10-02 12:54:34.513 2 DEBUG nova.network.neutron [-] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:54:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:34.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:34.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:35 np0005466030 nova_compute[230518]: 2025-10-02 12:54:35.528 2 DEBUG nova.compute.manager [req-248ce492-3953-43e3-9fc8-fea991e2b887 req-c73ace15-28ba-429b-bc53-91cfb352af13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received event network-vif-plugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:35 np0005466030 nova_compute[230518]: 2025-10-02 12:54:35.529 2 DEBUG oslo_concurrency.lockutils [req-248ce492-3953-43e3-9fc8-fea991e2b887 req-c73ace15-28ba-429b-bc53-91cfb352af13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:35 np0005466030 nova_compute[230518]: 2025-10-02 12:54:35.529 2 DEBUG oslo_concurrency.lockutils [req-248ce492-3953-43e3-9fc8-fea991e2b887 req-c73ace15-28ba-429b-bc53-91cfb352af13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:35 np0005466030 nova_compute[230518]: 2025-10-02 12:54:35.529 2 DEBUG oslo_concurrency.lockutils [req-248ce492-3953-43e3-9fc8-fea991e2b887 req-c73ace15-28ba-429b-bc53-91cfb352af13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:35 np0005466030 nova_compute[230518]: 2025-10-02 12:54:35.530 2 DEBUG nova.compute.manager [req-248ce492-3953-43e3-9fc8-fea991e2b887 req-c73ace15-28ba-429b-bc53-91cfb352af13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] No waiting events found dispatching network-vif-plugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:35 np0005466030 nova_compute[230518]: 2025-10-02 12:54:35.530 2 WARNING nova.compute.manager [req-248ce492-3953-43e3-9fc8-fea991e2b887 req-c73ace15-28ba-429b-bc53-91cfb352af13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received unexpected event network-vif-plugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:54:35 np0005466030 nova_compute[230518]: 2025-10-02 12:54:35.589 2 INFO nova.compute.manager [None req-2a35d582-da9f-448a-86ed-e3f233f32d76 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Get console output#033[00m
Oct  2 08:54:35 np0005466030 nova_compute[230518]: 2025-10-02 12:54:35.594 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 08:54:36 np0005466030 nova_compute[230518]: 2025-10-02 12:54:36.251 2 DEBUG nova.network.neutron [req-0b4a0d23-a5c0-4c59-9a5c-3cf3e5f4665e req-3806805c-7ded-4520-874c-6c27082504d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updated VIF entry in instance network info cache for port f67b8436-7ef3-4d35-814c-3d62c9a9fec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:54:36 np0005466030 nova_compute[230518]: 2025-10-02 12:54:36.251 2 DEBUG nova.network.neutron [req-0b4a0d23-a5c0-4c59-9a5c-3cf3e5f4665e req-3806805c-7ded-4520-874c-6c27082504d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updating instance_info_cache with network_info: [{"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:36 np0005466030 nova_compute[230518]: 2025-10-02 12:54:36.279 2 DEBUG oslo_concurrency.lockutils [req-0b4a0d23-a5c0-4c59-9a5c-3cf3e5f4665e req-3806805c-7ded-4520-874c-6c27082504d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:54:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:36.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:36.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:37 np0005466030 nova_compute[230518]: 2025-10-02 12:54:37.208 2 DEBUG nova.network.neutron [-] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:37 np0005466030 nova_compute[230518]: 2025-10-02 12:54:37.226 2 INFO nova.compute.manager [-] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Took 2.71 seconds to deallocate network for instance.#033[00m
Oct  2 08:54:37 np0005466030 nova_compute[230518]: 2025-10-02 12:54:37.270 2 DEBUG oslo_concurrency.lockutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:37 np0005466030 nova_compute[230518]: 2025-10-02 12:54:37.270 2 DEBUG oslo_concurrency.lockutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:37 np0005466030 nova_compute[230518]: 2025-10-02 12:54:37.332 2 DEBUG oslo_concurrency.processutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:37 np0005466030 nova_compute[230518]: 2025-10-02 12:54:37.575 2 DEBUG nova.compute.manager [req-989ad7a8-1354-4551-8b8a-938561ba9b3e req-ab1f2182-3b18-496f-b9b4-b54c1864aaf8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received event network-vif-deleted-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:54:37 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/365730846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:54:37 np0005466030 nova_compute[230518]: 2025-10-02 12:54:37.767 2 DEBUG oslo_concurrency.processutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:37 np0005466030 nova_compute[230518]: 2025-10-02 12:54:37.772 2 DEBUG nova.compute.provider_tree [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:54:37 np0005466030 podman[288536]: 2025-10-02 12:54:37.798000448 +0000 UTC m=+0.053342285 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid)
Oct  2 08:54:37 np0005466030 podman[288537]: 2025-10-02 12:54:37.798128272 +0000 UTC m=+0.051042002 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:54:37 np0005466030 nova_compute[230518]: 2025-10-02 12:54:37.808 2 DEBUG nova.scheduler.client.report [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:54:37 np0005466030 nova_compute[230518]: 2025-10-02 12:54:37.836 2 DEBUG oslo_concurrency.lockutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:37 np0005466030 nova_compute[230518]: 2025-10-02 12:54:37.867 2 INFO nova.scheduler.client.report [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765#033[00m
Oct  2 08:54:37 np0005466030 nova_compute[230518]: 2025-10-02 12:54:37.935 2 DEBUG oslo_concurrency.lockutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:38 np0005466030 nova_compute[230518]: 2025-10-02 12:54:38.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:38.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:54:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:38.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:54:39 np0005466030 nova_compute[230518]: 2025-10-02 12:54:39.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:39 np0005466030 nova_compute[230518]: 2025-10-02 12:54:39.309 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:54:39 np0005466030 nova_compute[230518]: 2025-10-02 12:54:39.309 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:54:39 np0005466030 nova_compute[230518]: 2025-10-02 12:54:39.310 2 DEBUG nova.network.neutron [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:54:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:40.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:40.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:41 np0005466030 nova_compute[230518]: 2025-10-02 12:54:41.444 2 DEBUG nova.network.neutron [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:41 np0005466030 nova_compute[230518]: 2025-10-02 12:54:41.472 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:54:41 np0005466030 nova_compute[230518]: 2025-10-02 12:54:41.605 2 DEBUG nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Oct  2 08:54:41 np0005466030 nova_compute[230518]: 2025-10-02 12:54:41.605 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Creating file /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/ece1e349863b4f0aa857a271f88ed61c.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Oct  2 08:54:41 np0005466030 nova_compute[230518]: 2025-10-02 12:54:41.606 2 DEBUG oslo_concurrency.processutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/ece1e349863b4f0aa857a271f88ed61c.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:42 np0005466030 nova_compute[230518]: 2025-10-02 12:54:42.047 2 DEBUG oslo_concurrency.processutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/ece1e349863b4f0aa857a271f88ed61c.tmp" returned: 1 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:42 np0005466030 nova_compute[230518]: 2025-10-02 12:54:42.048 2 DEBUG oslo_concurrency.processutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/ece1e349863b4f0aa857a271f88ed61c.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Oct  2 08:54:42 np0005466030 nova_compute[230518]: 2025-10-02 12:54:42.048 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Creating directory /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Oct  2 08:54:42 np0005466030 nova_compute[230518]: 2025-10-02 12:54:42.049 2 DEBUG oslo_concurrency.processutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:54:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:42.120 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:54:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:42.121 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:54:42 np0005466030 nova_compute[230518]: 2025-10-02 12:54:42.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:42 np0005466030 nova_compute[230518]: 2025-10-02 12:54:42.254 2 DEBUG oslo_concurrency.processutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:54:42 np0005466030 nova_compute[230518]: 2025-10-02 12:54:42.258 2 DEBUG nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:54:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:42.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:42.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:43.123 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:43 np0005466030 nova_compute[230518]: 2025-10-02 12:54:43.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:43 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:43Z|00616|binding|INFO|Releasing lport 75b1d4a5-2f3f-44a0-a22e-e6c15fda36d1 from this chassis (sb_readonly=0)
Oct  2 08:54:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:43 np0005466030 nova_compute[230518]: 2025-10-02 12:54:43.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:44 np0005466030 nova_compute[230518]: 2025-10-02 12:54:44.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:44 np0005466030 kernel: tap20204810-ff (unregistering): left promiscuous mode
Oct  2 08:54:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:44.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:44 np0005466030 NetworkManager[44960]: <info>  [1759409684.5588] device (tap20204810-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:54:44 np0005466030 nova_compute[230518]: 2025-10-02 12:54:44.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:44 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:44Z|00617|binding|INFO|Releasing lport 20204810-ff47-450e-80e5-23d03b435455 from this chassis (sb_readonly=0)
Oct  2 08:54:44 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:44Z|00618|binding|INFO|Setting lport 20204810-ff47-450e-80e5-23d03b435455 down in Southbound
Oct  2 08:54:44 np0005466030 ovn_controller[129257]: 2025-10-02T12:54:44Z|00619|binding|INFO|Removing iface tap20204810-ff ovn-installed in OVS
Oct  2 08:54:44 np0005466030 nova_compute[230518]: 2025-10-02 12:54:44.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.618 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:41:1c 10.100.0.7'], port_security=['fa:16:3e:5b:41:1c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a1e0932b-16b6-46b9-8192-b89b91e91802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a39243cb-5286-4429-8879-7b4d535de128', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd1b59c5d-0681-456e-a8d1-b3629344f9b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf84aebf-21d8-4569-891a-417406561224, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=20204810-ff47-450e-80e5-23d03b435455) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:54:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.619 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 20204810-ff47-450e-80e5-23d03b435455 in datapath a39243cb-5286-4429-8879-7b4d535de128 unbound from our chassis#033[00m
Oct  2 08:54:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.620 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a39243cb-5286-4429-8879-7b4d535de128, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:54:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.621 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d2641b3e-1a26-4a45-aded-320885fdd230]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.621 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 namespace which is not needed anymore#033[00m
Oct  2 08:54:44 np0005466030 nova_compute[230518]: 2025-10-02 12:54:44.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:44 np0005466030 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000092.scope: Deactivated successfully.
Oct  2 08:54:44 np0005466030 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000092.scope: Consumed 13.947s CPU time.
Oct  2 08:54:44 np0005466030 systemd-machined[188247]: Machine qemu-71-instance-00000092 terminated.
Oct  2 08:54:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:44.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:44 np0005466030 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288155]: [NOTICE]   (288159) : haproxy version is 2.8.14-c23fe91
Oct  2 08:54:44 np0005466030 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288155]: [NOTICE]   (288159) : path to executable is /usr/sbin/haproxy
Oct  2 08:54:44 np0005466030 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288155]: [WARNING]  (288159) : Exiting Master process...
Oct  2 08:54:44 np0005466030 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288155]: [WARNING]  (288159) : Exiting Master process...
Oct  2 08:54:44 np0005466030 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288155]: [ALERT]    (288159) : Current worker (288161) exited with code 143 (Terminated)
Oct  2 08:54:44 np0005466030 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288155]: [WARNING]  (288159) : All workers exited. Exiting... (0)
Oct  2 08:54:44 np0005466030 systemd[1]: libpod-b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699.scope: Deactivated successfully.
Oct  2 08:54:44 np0005466030 podman[288605]: 2025-10-02 12:54:44.744465607 +0000 UTC m=+0.044928280 container died b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 08:54:44 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699-userdata-shm.mount: Deactivated successfully.
Oct  2 08:54:44 np0005466030 systemd[1]: var-lib-containers-storage-overlay-134e48fef8ee40dfc4155c6ec896c104676820b240dcee32a40405e7898ee537-merged.mount: Deactivated successfully.
Oct  2 08:54:44 np0005466030 podman[288605]: 2025-10-02 12:54:44.787399504 +0000 UTC m=+0.087862177 container cleanup b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:54:44 np0005466030 systemd[1]: libpod-conmon-b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699.scope: Deactivated successfully.
Oct  2 08:54:44 np0005466030 podman[288646]: 2025-10-02 12:54:44.848402608 +0000 UTC m=+0.039414068 container remove b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 08:54:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.856 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0c40e2-3cbc-4ba3-a793-3659f7484ac8]: (4, ('Thu Oct  2 12:54:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 (b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699)\nb86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699\nThu Oct  2 12:54:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 (b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699)\nb86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.857 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e80bc918-6f05-4328-8be0-f421ab26c9aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.858 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa39243cb-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:44 np0005466030 nova_compute[230518]: 2025-10-02 12:54:44.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:44 np0005466030 kernel: tapa39243cb-50: left promiscuous mode
Oct  2 08:54:44 np0005466030 nova_compute[230518]: 2025-10-02 12:54:44.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.882 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c219a2ca-9dec-491e-9006-8b6e2e40b225]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:44 np0005466030 nova_compute[230518]: 2025-10-02 12:54:44.887 2 DEBUG nova.compute.manager [req-9915cf29-8d84-4ae8-80b2-1141c5778b7a req-cff4d1f8-a6b3-4beb-8759-12f17bb3d836 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:44 np0005466030 nova_compute[230518]: 2025-10-02 12:54:44.888 2 DEBUG oslo_concurrency.lockutils [req-9915cf29-8d84-4ae8-80b2-1141c5778b7a req-cff4d1f8-a6b3-4beb-8759-12f17bb3d836 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:44 np0005466030 nova_compute[230518]: 2025-10-02 12:54:44.888 2 DEBUG oslo_concurrency.lockutils [req-9915cf29-8d84-4ae8-80b2-1141c5778b7a req-cff4d1f8-a6b3-4beb-8759-12f17bb3d836 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:44 np0005466030 nova_compute[230518]: 2025-10-02 12:54:44.888 2 DEBUG oslo_concurrency.lockutils [req-9915cf29-8d84-4ae8-80b2-1141c5778b7a req-cff4d1f8-a6b3-4beb-8759-12f17bb3d836 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:44 np0005466030 nova_compute[230518]: 2025-10-02 12:54:44.888 2 DEBUG nova.compute.manager [req-9915cf29-8d84-4ae8-80b2-1141c5778b7a req-cff4d1f8-a6b3-4beb-8759-12f17bb3d836 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:44 np0005466030 nova_compute[230518]: 2025-10-02 12:54:44.889 2 WARNING nova.compute.manager [req-9915cf29-8d84-4ae8-80b2-1141c5778b7a req-cff4d1f8-a6b3-4beb-8759-12f17bb3d836 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state active and task_state resize_migrating.#033[00m
Oct  2 08:54:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.919 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[57ec8569-e4fb-40f7-a3d0-802225415d5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.921 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[322fb02d-9d53-4f22-9036-8c7ab1ab4ff4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.936 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[759df8f8-cac0-407a-b9ba-0e155bcba0c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751256, 'reachable_time': 37246, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288667, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:44 np0005466030 systemd[1]: run-netns-ovnmeta\x2da39243cb\x2d5286\x2d4429\x2d8879\x2d7b4d535de128.mount: Deactivated successfully.
Oct  2 08:54:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.939 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:54:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.939 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[764c2134-31c3-4fb2-8155-51f9197e85c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:54:45 np0005466030 nova_compute[230518]: 2025-10-02 12:54:45.272 2 INFO nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 08:54:45 np0005466030 nova_compute[230518]: 2025-10-02 12:54:45.276 2 INFO nova.virt.libvirt.driver [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance destroyed successfully.#033[00m
Oct  2 08:54:45 np0005466030 nova_compute[230518]: 2025-10-02 12:54:45.277 2 DEBUG nova.virt.libvirt.vif [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1723654799',display_name='tempest-TestNetworkAdvancedServerOps-server-1723654799',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1723654799',id=146,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX6YVDsN9ZX5bxWRi+hOcCI5VJa6Q2nedRNQF3bG+0Pznov2NvoOk008+cPv/dH5+9KDDN9Rpi1O2z1pYZSfJd9pzzfPLrJFsvhHAGAb1dgOP5UShntoHoUWnJ4mGisJQ==',key_name='tempest-TestNetworkAdvancedServerOps-520644426',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:54:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hzwwvz6x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:54:38Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=a1e0932b-16b6-46b9-8192-b89b91e91802,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1767599231", "vif_mac": "fa:16:3e:5b:41:1c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:54:45 np0005466030 nova_compute[230518]: 2025-10-02 12:54:45.277 2 DEBUG nova.network.os_vif_util [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converting VIF {"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1767599231", "vif_mac": "fa:16:3e:5b:41:1c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:54:45 np0005466030 nova_compute[230518]: 2025-10-02 12:54:45.278 2 DEBUG nova.network.os_vif_util [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:54:45 np0005466030 nova_compute[230518]: 2025-10-02 12:54:45.278 2 DEBUG os_vif [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:54:45 np0005466030 nova_compute[230518]: 2025-10-02 12:54:45.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:45 np0005466030 nova_compute[230518]: 2025-10-02 12:54:45.280 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20204810-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:54:45 np0005466030 nova_compute[230518]: 2025-10-02 12:54:45.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:45 np0005466030 nova_compute[230518]: 2025-10-02 12:54:45.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:45 np0005466030 nova_compute[230518]: 2025-10-02 12:54:45.284 2 INFO os_vif [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff')#033[00m
Oct  2 08:54:45 np0005466030 nova_compute[230518]: 2025-10-02 12:54:45.288 2 DEBUG nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:54:45 np0005466030 nova_compute[230518]: 2025-10-02 12:54:45.288 2 DEBUG nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:54:45 np0005466030 nova_compute[230518]: 2025-10-02 12:54:45.508 2 DEBUG neutronclient.v2_0.client [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 20204810-ff47-450e-80e5-23d03b435455 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 08:54:45 np0005466030 nova_compute[230518]: 2025-10-02 12:54:45.644 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:45 np0005466030 nova_compute[230518]: 2025-10-02 12:54:45.644 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:45 np0005466030 nova_compute[230518]: 2025-10-02 12:54:45.644 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:46.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:46.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:46 np0005466030 nova_compute[230518]: 2025-10-02 12:54:46.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:47 np0005466030 nova_compute[230518]: 2025-10-02 12:54:47.032 2 DEBUG nova.compute.manager [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:47 np0005466030 nova_compute[230518]: 2025-10-02 12:54:47.032 2 DEBUG oslo_concurrency.lockutils [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:47 np0005466030 nova_compute[230518]: 2025-10-02 12:54:47.032 2 DEBUG oslo_concurrency.lockutils [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:47 np0005466030 nova_compute[230518]: 2025-10-02 12:54:47.033 2 DEBUG oslo_concurrency.lockutils [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:47 np0005466030 nova_compute[230518]: 2025-10-02 12:54:47.033 2 DEBUG nova.compute.manager [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:47 np0005466030 nova_compute[230518]: 2025-10-02 12:54:47.033 2 WARNING nova.compute.manager [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 08:54:47 np0005466030 nova_compute[230518]: 2025-10-02 12:54:47.033 2 DEBUG nova.compute.manager [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-changed-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:47 np0005466030 nova_compute[230518]: 2025-10-02 12:54:47.033 2 DEBUG nova.compute.manager [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing instance network info cache due to event network-changed-20204810-ff47-450e-80e5-23d03b435455. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:54:47 np0005466030 nova_compute[230518]: 2025-10-02 12:54:47.034 2 DEBUG oslo_concurrency.lockutils [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:54:47 np0005466030 nova_compute[230518]: 2025-10-02 12:54:47.034 2 DEBUG oslo_concurrency.lockutils [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:54:47 np0005466030 nova_compute[230518]: 2025-10-02 12:54:47.034 2 DEBUG nova.network.neutron [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing network info cache for port 20204810-ff47-450e-80e5-23d03b435455 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:54:48 np0005466030 nova_compute[230518]: 2025-10-02 12:54:48.210 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409673.2091131, 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:54:48 np0005466030 nova_compute[230518]: 2025-10-02 12:54:48.210 2 INFO nova.compute.manager [-] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:54:48 np0005466030 nova_compute[230518]: 2025-10-02 12:54:48.239 2 DEBUG nova.compute.manager [None req-2633ffeb-9291-49f2-9146-50aa78471ff8 - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:48.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:48 np0005466030 nova_compute[230518]: 2025-10-02 12:54:48.613 2 DEBUG nova.network.neutron [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updated VIF entry in instance network info cache for port 20204810-ff47-450e-80e5-23d03b435455. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:54:48 np0005466030 nova_compute[230518]: 2025-10-02 12:54:48.614 2 DEBUG nova.network.neutron [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:54:48 np0005466030 nova_compute[230518]: 2025-10-02 12:54:48.631 2 DEBUG oslo_concurrency.lockutils [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:54:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:48.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:49 np0005466030 nova_compute[230518]: 2025-10-02 12:54:49.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:49 np0005466030 nova_compute[230518]: 2025-10-02 12:54:49.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e340 e340: 3 total, 3 up, 3 in
Oct  2 08:54:49 np0005466030 nova_compute[230518]: 2025-10-02 12:54:49.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:50 np0005466030 nova_compute[230518]: 2025-10-02 12:54:50.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:50.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:50.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:51 np0005466030 nova_compute[230518]: 2025-10-02 12:54:51.624 2 DEBUG nova.compute.manager [req-3fa934aa-511d-4c6b-ac44-de379ff728d3 req-007a3f24-8638-4662-a6f2-c89350b7aaa3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:51 np0005466030 nova_compute[230518]: 2025-10-02 12:54:51.625 2 DEBUG oslo_concurrency.lockutils [req-3fa934aa-511d-4c6b-ac44-de379ff728d3 req-007a3f24-8638-4662-a6f2-c89350b7aaa3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:51 np0005466030 nova_compute[230518]: 2025-10-02 12:54:51.625 2 DEBUG oslo_concurrency.lockutils [req-3fa934aa-511d-4c6b-ac44-de379ff728d3 req-007a3f24-8638-4662-a6f2-c89350b7aaa3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:51 np0005466030 nova_compute[230518]: 2025-10-02 12:54:51.625 2 DEBUG oslo_concurrency.lockutils [req-3fa934aa-511d-4c6b-ac44-de379ff728d3 req-007a3f24-8638-4662-a6f2-c89350b7aaa3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:51 np0005466030 nova_compute[230518]: 2025-10-02 12:54:51.626 2 DEBUG nova.compute.manager [req-3fa934aa-511d-4c6b-ac44-de379ff728d3 req-007a3f24-8638-4662-a6f2-c89350b7aaa3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:51 np0005466030 nova_compute[230518]: 2025-10-02 12:54:51.626 2 WARNING nova.compute.manager [req-3fa934aa-511d-4c6b-ac44-de379ff728d3 req-007a3f24-8638-4662-a6f2-c89350b7aaa3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state active and task_state resize_finish.#033[00m
Oct  2 08:54:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:52.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:52.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:53 np0005466030 nova_compute[230518]: 2025-10-02 12:54:53.957 2 DEBUG nova.compute.manager [req-75fc5431-47c0-4044-96a1-c043f75fbfaf req-48c3f043-f923-41c8-95f7-869ccaf4f572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:53 np0005466030 nova_compute[230518]: 2025-10-02 12:54:53.957 2 DEBUG oslo_concurrency.lockutils [req-75fc5431-47c0-4044-96a1-c043f75fbfaf req-48c3f043-f923-41c8-95f7-869ccaf4f572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:53 np0005466030 nova_compute[230518]: 2025-10-02 12:54:53.958 2 DEBUG oslo_concurrency.lockutils [req-75fc5431-47c0-4044-96a1-c043f75fbfaf req-48c3f043-f923-41c8-95f7-869ccaf4f572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:53 np0005466030 nova_compute[230518]: 2025-10-02 12:54:53.958 2 DEBUG oslo_concurrency.lockutils [req-75fc5431-47c0-4044-96a1-c043f75fbfaf req-48c3f043-f923-41c8-95f7-869ccaf4f572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:53 np0005466030 nova_compute[230518]: 2025-10-02 12:54:53.958 2 DEBUG nova.compute.manager [req-75fc5431-47c0-4044-96a1-c043f75fbfaf req-48c3f043-f923-41c8-95f7-869ccaf4f572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:53 np0005466030 nova_compute[230518]: 2025-10-02 12:54:53.958 2 WARNING nova.compute.manager [req-75fc5431-47c0-4044-96a1-c043f75fbfaf req-48c3f043-f923-41c8-95f7-869ccaf4f572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state resized and task_state resize_reverting.#033[00m
Oct  2 08:54:54 np0005466030 nova_compute[230518]: 2025-10-02 12:54:54.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:54.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:54.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:55 np0005466030 nova_compute[230518]: 2025-10-02 12:54:55.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:56 np0005466030 nova_compute[230518]: 2025-10-02 12:54:56.340 2 DEBUG nova.compute.manager [req-6eb7bb35-d8e2-4fb1-8702-d4651af21f89 req-a0e4a89a-c40d-4470-b986-0d08cf99fa4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:56 np0005466030 nova_compute[230518]: 2025-10-02 12:54:56.341 2 DEBUG oslo_concurrency.lockutils [req-6eb7bb35-d8e2-4fb1-8702-d4651af21f89 req-a0e4a89a-c40d-4470-b986-0d08cf99fa4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:56 np0005466030 nova_compute[230518]: 2025-10-02 12:54:56.341 2 DEBUG oslo_concurrency.lockutils [req-6eb7bb35-d8e2-4fb1-8702-d4651af21f89 req-a0e4a89a-c40d-4470-b986-0d08cf99fa4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:56 np0005466030 nova_compute[230518]: 2025-10-02 12:54:56.342 2 DEBUG oslo_concurrency.lockutils [req-6eb7bb35-d8e2-4fb1-8702-d4651af21f89 req-a0e4a89a-c40d-4470-b986-0d08cf99fa4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:56 np0005466030 nova_compute[230518]: 2025-10-02 12:54:56.342 2 DEBUG nova.compute.manager [req-6eb7bb35-d8e2-4fb1-8702-d4651af21f89 req-a0e4a89a-c40d-4470-b986-0d08cf99fa4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:56 np0005466030 nova_compute[230518]: 2025-10-02 12:54:56.342 2 WARNING nova.compute.manager [req-6eb7bb35-d8e2-4fb1-8702-d4651af21f89 req-a0e4a89a-c40d-4470-b986-0d08cf99fa4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state resized and task_state resize_reverting.#033[00m
Oct  2 08:54:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:56.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:56.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:57 np0005466030 nova_compute[230518]: 2025-10-02 12:54:57.101 2 INFO nova.compute.manager [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Swapping old allocation on dict_keys(['730da6ce-9754-46f0-88e3-0019d056443f']) held by migration 967e2439-1b81-4fe0-baf4-48b7e3d12a87 for instance#033[00m
Oct  2 08:54:57 np0005466030 nova_compute[230518]: 2025-10-02 12:54:57.130 2 DEBUG nova.scheduler.client.report [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Overwriting current allocation {'allocations': {'f694d536-1dcd-4bb3-8516-534a40cdf6d7': {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}, 'generation': 72}}, 'project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'user_id': '47f465d8c8ac44c982f2a2e60ae9eb40', 'consumer_generation': 1} on consumer a1e0932b-16b6-46b9-8192-b89b91e91802 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018#033[00m
Oct  2 08:54:58 np0005466030 nova_compute[230518]: 2025-10-02 12:54:58.212 2 INFO nova.network.neutron [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating port 20204810-ff47-450e-80e5-23d03b435455 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Oct  2 08:54:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:54:58 np0005466030 nova_compute[230518]: 2025-10-02 12:54:58.451 2 DEBUG nova.compute.manager [req-e19d0b3f-0539-400b-bf53-edf6a3d00321 req-934d0337-b7e2-45d0-9522-2d49ece416e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:58 np0005466030 nova_compute[230518]: 2025-10-02 12:54:58.452 2 DEBUG oslo_concurrency.lockutils [req-e19d0b3f-0539-400b-bf53-edf6a3d00321 req-934d0337-b7e2-45d0-9522-2d49ece416e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:54:58 np0005466030 nova_compute[230518]: 2025-10-02 12:54:58.452 2 DEBUG oslo_concurrency.lockutils [req-e19d0b3f-0539-400b-bf53-edf6a3d00321 req-934d0337-b7e2-45d0-9522-2d49ece416e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:54:58 np0005466030 nova_compute[230518]: 2025-10-02 12:54:58.452 2 DEBUG oslo_concurrency.lockutils [req-e19d0b3f-0539-400b-bf53-edf6a3d00321 req-934d0337-b7e2-45d0-9522-2d49ece416e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:54:58 np0005466030 nova_compute[230518]: 2025-10-02 12:54:58.452 2 DEBUG nova.compute.manager [req-e19d0b3f-0539-400b-bf53-edf6a3d00321 req-934d0337-b7e2-45d0-9522-2d49ece416e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:54:58 np0005466030 nova_compute[230518]: 2025-10-02 12:54:58.453 2 WARNING nova.compute.manager [req-e19d0b3f-0539-400b-bf53-edf6a3d00321 req-934d0337-b7e2-45d0-9522-2d49ece416e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state resized and task_state resize_reverting.#033[00m
Oct  2 08:54:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:58.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:54:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:54:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:58.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:54:58 np0005466030 podman[288670]: 2025-10-02 12:54:58.813494951 +0000 UTC m=+0.059542689 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:54:58 np0005466030 podman[288669]: 2025-10-02 12:54:58.873210704 +0000 UTC m=+0.118120627 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible)
Oct  2 08:54:59 np0005466030 nova_compute[230518]: 2025-10-02 12:54:59.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:54:59 np0005466030 nova_compute[230518]: 2025-10-02 12:54:59.067 2 DEBUG oslo_concurrency.lockutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:54:59 np0005466030 nova_compute[230518]: 2025-10-02 12:54:59.068 2 DEBUG oslo_concurrency.lockutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:54:59 np0005466030 nova_compute[230518]: 2025-10-02 12:54:59.068 2 DEBUG nova.network.neutron [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:54:59 np0005466030 nova_compute[230518]: 2025-10-02 12:54:59.195 2 DEBUG nova.compute.manager [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-changed-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:54:59 np0005466030 nova_compute[230518]: 2025-10-02 12:54:59.196 2 DEBUG nova.compute.manager [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing instance network info cache due to event network-changed-20204810-ff47-450e-80e5-23d03b435455. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:54:59 np0005466030 nova_compute[230518]: 2025-10-02 12:54:59.196 2 DEBUG oslo_concurrency.lockutils [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:54:59 np0005466030 nova_compute[230518]: 2025-10-02 12:54:59.805 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409684.803514, a1e0932b-16b6-46b9-8192-b89b91e91802 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:54:59 np0005466030 nova_compute[230518]: 2025-10-02 12:54:59.805 2 INFO nova.compute.manager [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:54:59 np0005466030 nova_compute[230518]: 2025-10-02 12:54:59.833 2 DEBUG nova.compute.manager [None req-3a05067b-1e59-472d-bb50-920355bcd72a - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:54:59 np0005466030 nova_compute[230518]: 2025-10-02 12:54:59.835 2 DEBUG nova.compute.manager [None req-3a05067b-1e59-472d-bb50-920355bcd72a - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:54:59 np0005466030 nova_compute[230518]: 2025-10-02 12:54:59.864 2 INFO nova.compute.manager [None req-3a05067b-1e59-472d-bb50-920355bcd72a - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] During sync_power_state the instance has a pending task (resize_reverting). Skip.#033[00m
Oct  2 08:55:00 np0005466030 nova_compute[230518]: 2025-10-02 12:55:00.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:00.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:00.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:01 np0005466030 nova_compute[230518]: 2025-10-02 12:55:01.403 2 DEBUG nova.network.neutron [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:55:01 np0005466030 nova_compute[230518]: 2025-10-02 12:55:01.431 2 DEBUG oslo_concurrency.lockutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:55:01 np0005466030 nova_compute[230518]: 2025-10-02 12:55:01.432 2 DEBUG nova.virt.libvirt.driver [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843#033[00m
Oct  2 08:55:01 np0005466030 nova_compute[230518]: 2025-10-02 12:55:01.463 2 DEBUG oslo_concurrency.lockutils [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:55:01 np0005466030 nova_compute[230518]: 2025-10-02 12:55:01.464 2 DEBUG nova.network.neutron [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing network info cache for port 20204810-ff47-450e-80e5-23d03b435455 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:55:01 np0005466030 nova_compute[230518]: 2025-10-02 12:55:01.508 2 DEBUG nova.storage.rbd_utils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rolling back rbd image(a1e0932b-16b6-46b9-8192-b89b91e91802_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505#033[00m
Oct  2 08:55:01 np0005466030 nova_compute[230518]: 2025-10-02 12:55:01.658 2 DEBUG nova.storage.rbd_utils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] removing snapshot(nova-resize) on rbd image(a1e0932b-16b6-46b9-8192-b89b91e91802_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 08:55:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e341 e341: 3 total, 3 up, 3 in
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.157 2 DEBUG nova.virt.libvirt.driver [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Start _get_guest_xml network_info=[{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.160 2 WARNING nova.virt.libvirt.driver [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.166 2 DEBUG nova.virt.libvirt.host [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.167 2 DEBUG nova.virt.libvirt.host [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.178 2 DEBUG nova.virt.libvirt.host [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.179 2 DEBUG nova.virt.libvirt.host [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.180 2 DEBUG nova.virt.libvirt.driver [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.180 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.181 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.181 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.182 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.183 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.183 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.183 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.183 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.183 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.184 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.184 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.184 2 DEBUG nova.objects.instance [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a1e0932b-16b6-46b9-8192-b89b91e91802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.202 2 DEBUG oslo_concurrency.processutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:02.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:55:02 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3427044163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.616 2 DEBUG oslo_concurrency.processutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:02 np0005466030 nova_compute[230518]: 2025-10-02 12:55:02.657 2 DEBUG oslo_concurrency.processutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:02.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:55:03 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1586802724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.071 2 DEBUG oslo_concurrency.processutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.073 2 DEBUG nova.virt.libvirt.vif [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1723654799',display_name='tempest-TestNetworkAdvancedServerOps-server-1723654799',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1723654799',id=146,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX6YVDsN9ZX5bxWRi+hOcCI5VJa6Q2nedRNQF3bG+0Pznov2NvoOk008+cPv/dH5+9KDDN9Rpi1O2z1pYZSfJd9pzzfPLrJFsvhHAGAb1dgOP5UShntoHoUWnJ4mGisJQ==',key_name='tempest-TestNetworkAdvancedServerOps-520644426',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:54:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hzwwvz6x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:54:53Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=a1e0932b-16b6-46b9-8192-b89b91e91802,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.073 2 DEBUG nova.network.os_vif_util [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.074 2 DEBUG nova.network.os_vif_util [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.076 2 DEBUG nova.virt.libvirt.driver [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:55:03 np0005466030 nova_compute[230518]:  <uuid>a1e0932b-16b6-46b9-8192-b89b91e91802</uuid>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:  <name>instance-00000092</name>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1723654799</nova:name>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:55:02</nova:creationTime>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:55:03 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:        <nova:user uuid="47f465d8c8ac44c982f2a2e60ae9eb40">tempest-TestNetworkAdvancedServerOps-1770117619-project-member</nova:user>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:        <nova:project uuid="072925a6aec84a77a9c09ae0c83efdb3">tempest-TestNetworkAdvancedServerOps-1770117619</nova:project>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:        <nova:port uuid="20204810-ff47-450e-80e5-23d03b435455">
Oct  2 08:55:03 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <entry name="serial">a1e0932b-16b6-46b9-8192-b89b91e91802</entry>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <entry name="uuid">a1e0932b-16b6-46b9-8192-b89b91e91802</entry>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/a1e0932b-16b6-46b9-8192-b89b91e91802_disk">
Oct  2 08:55:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:55:03 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/a1e0932b-16b6-46b9-8192-b89b91e91802_disk.config">
Oct  2 08:55:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:55:03 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:5b:41:1c"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <target dev="tap20204810-ff"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/console.log" append="off"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <input type="keyboard" bus="usb"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:55:03 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:55:03 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:55:03 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:55:03 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.078 2 DEBUG nova.compute.manager [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Preparing to wait for external event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.078 2 DEBUG oslo_concurrency.lockutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.078 2 DEBUG oslo_concurrency.lockutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.078 2 DEBUG oslo_concurrency.lockutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.079 2 DEBUG nova.virt.libvirt.vif [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1723654799',display_name='tempest-TestNetworkAdvancedServerOps-server-1723654799',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1723654799',id=146,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX6YVDsN9ZX5bxWRi+hOcCI5VJa6Q2nedRNQF3bG+0Pznov2NvoOk008+cPv/dH5+9KDDN9Rpi1O2z1pYZSfJd9pzzfPLrJFsvhHAGAb1dgOP5UShntoHoUWnJ4mGisJQ==',key_name='tempest-TestNetworkAdvancedServerOps-520644426',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:54:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hzwwvz6x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:54:53Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=a1e0932b-16b6-46b9-8192-b89b91e91802,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.079 2 DEBUG nova.network.os_vif_util [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.080 2 DEBUG nova.network.os_vif_util [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.080 2 DEBUG os_vif [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.081 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.082 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.085 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20204810-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.085 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap20204810-ff, col_values=(('external_ids', {'iface-id': '20204810-ff47-450e-80e5-23d03b435455', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:41:1c', 'vm-uuid': 'a1e0932b-16b6-46b9-8192-b89b91e91802'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:03 np0005466030 NetworkManager[44960]: <info>  [1759409703.0881] manager: (tap20204810-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.092 2 INFO os_vif [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff')#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.148 2 DEBUG nova.network.neutron [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updated VIF entry in instance network info cache for port 20204810-ff47-450e-80e5-23d03b435455. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.148 2 DEBUG nova.network.neutron [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.168 2 DEBUG oslo_concurrency.lockutils [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:55:03 np0005466030 kernel: tap20204810-ff: entered promiscuous mode
Oct  2 08:55:03 np0005466030 NetworkManager[44960]: <info>  [1759409703.2986] manager: (tap20204810-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/289)
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:03 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:03Z|00620|binding|INFO|Claiming lport 20204810-ff47-450e-80e5-23d03b435455 for this chassis.
Oct  2 08:55:03 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:03Z|00621|binding|INFO|20204810-ff47-450e-80e5-23d03b435455: Claiming fa:16:3e:5b:41:1c 10.100.0.7
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:03 np0005466030 NetworkManager[44960]: <info>  [1759409703.3187] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/290)
Oct  2 08:55:03 np0005466030 NetworkManager[44960]: <info>  [1759409703.3192] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/291)
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.325 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:41:1c 10.100.0.7'], port_security=['fa:16:3e:5b:41:1c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a1e0932b-16b6-46b9-8192-b89b91e91802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a39243cb-5286-4429-8879-7b4d535de128', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'd1b59c5d-0681-456e-a8d1-b3629344f9b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf84aebf-21d8-4569-891a-417406561224, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=20204810-ff47-450e-80e5-23d03b435455) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.326 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 20204810-ff47-450e-80e5-23d03b435455 in datapath a39243cb-5286-4429-8879-7b4d535de128 bound to our chassis#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.328 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a39243cb-5286-4429-8879-7b4d535de128#033[00m
Oct  2 08:55:03 np0005466030 systemd-machined[188247]: New machine qemu-72-instance-00000092.
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.342 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2db1d66b-4c12-4625-a57d-011353d34289]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.343 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa39243cb-51 in ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.345 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa39243cb-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.345 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[91fb9d62-1280-45b1-a92e-ef6862e812d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.345 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[867b0925-5acd-4f6b-91a6-dab1382fc34d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.358 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[97507291-a7ab-4525-b51a-ff30c06d2f71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:03 np0005466030 systemd[1]: Started Virtual Machine qemu-72-instance-00000092.
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.385 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9f53d482-1906-4e17-9bc3-e5b920f5f693]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:03 np0005466030 systemd-udevd[288847]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:55:03 np0005466030 NetworkManager[44960]: <info>  [1759409703.4061] device (tap20204810-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:55:03 np0005466030 NetworkManager[44960]: <info>  [1759409703.4071] device (tap20204810-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.424 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[1cec32e9-6ed5-40ad-a010-238e7fb68505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:03 np0005466030 NetworkManager[44960]: <info>  [1759409703.4454] manager: (tapa39243cb-50): new Veth device (/org/freedesktop/NetworkManager/Devices/292)
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.445 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[895f5148-c7a5-446b-b4a2-5b3deffa0192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.483 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[bd20b8e3-49ac-4d69-aaad-3aa02530c734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.487 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[cddbe4d0-4411-4269-a709-46c92b835d77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:03 np0005466030 NetworkManager[44960]: <info>  [1759409703.5085] device (tapa39243cb-50): carrier: link connected
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.514 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a9838f-b7fc-41d2-8450-e852ed933fad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.529 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7300e330-50cb-437c-8e79-04c6decb9a41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa39243cb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:6a:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 190], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 756706, 'reachable_time': 28614, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288877, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.545 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e3e7c963-f142-4ae6-98ea-75e429779e0e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:6ac8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 756706, 'tstamp': 756706}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288878, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.560 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a740d79e-e9b7-47eb-ada6-794897405949]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa39243cb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:6a:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 190], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 756706, 'reachable_time': 28614, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288879, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:03 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:03Z|00622|binding|INFO|Setting lport 20204810-ff47-450e-80e5-23d03b435455 ovn-installed in OVS
Oct  2 08:55:03 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:03Z|00623|binding|INFO|Setting lport 20204810-ff47-450e-80e5-23d03b435455 up in Southbound
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.587 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7a52c270-ca2c-4f33-807e-3025201b8fbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.637 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f4c9b3-27f5-428d-92c0-1d53996b0290]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.638 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa39243cb-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.638 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.639 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa39243cb-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:03 np0005466030 kernel: tapa39243cb-50: entered promiscuous mode
Oct  2 08:55:03 np0005466030 NetworkManager[44960]: <info>  [1759409703.6411] manager: (tapa39243cb-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.643 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa39243cb-50, col_values=(('external_ids', {'iface-id': '75b1d4a5-2f3f-44a0-a22e-e6c15fda36d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:03 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:03Z|00624|binding|INFO|Releasing lport 75b1d4a5-2f3f-44a0-a22e-e6c15fda36d1 from this chassis (sb_readonly=0)
Oct  2 08:55:03 np0005466030 nova_compute[230518]: 2025-10-02 12:55:03.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.659 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a39243cb-5286-4429-8879-7b4d535de128.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a39243cb-5286-4429-8879-7b4d535de128.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.659 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8441d5e2-3c20-4a7b-9010-a7bee41025f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.660 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-a39243cb-5286-4429-8879-7b4d535de128
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/a39243cb-5286-4429-8879-7b4d535de128.pid.haproxy
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID a39243cb-5286-4429-8879-7b4d535de128
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:55:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.661 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'env', 'PROCESS_TAG=haproxy-a39243cb-5286-4429-8879-7b4d535de128', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a39243cb-5286-4429-8879-7b4d535de128.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:55:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:04 np0005466030 podman[288918]: 2025-10-02 12:55:03.980893507 +0000 UTC m=+0.023427257 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:55:04 np0005466030 podman[288918]: 2025-10-02 12:55:04.398355624 +0000 UTC m=+0.440889374 container create 0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 08:55:04 np0005466030 systemd[1]: Started libpod-conmon-0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383.scope.
Oct  2 08:55:04 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:55:04 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d9e319b30d1b4061deb78f71d6770d855f65959fcfb953fc9732e0288e8dbf3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.529 2 DEBUG nova.compute.manager [req-8f05eb5f-97f9-48de-aec0-d53d7ce1c480 req-7b35a5d9-ae41-4c8e-8bd2-9b3ade7fe7be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.529 2 DEBUG oslo_concurrency.lockutils [req-8f05eb5f-97f9-48de-aec0-d53d7ce1c480 req-7b35a5d9-ae41-4c8e-8bd2-9b3ade7fe7be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.530 2 DEBUG oslo_concurrency.lockutils [req-8f05eb5f-97f9-48de-aec0-d53d7ce1c480 req-7b35a5d9-ae41-4c8e-8bd2-9b3ade7fe7be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.530 2 DEBUG oslo_concurrency.lockutils [req-8f05eb5f-97f9-48de-aec0-d53d7ce1c480 req-7b35a5d9-ae41-4c8e-8bd2-9b3ade7fe7be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.530 2 DEBUG nova.compute.manager [req-8f05eb5f-97f9-48de-aec0-d53d7ce1c480 req-7b35a5d9-ae41-4c8e-8bd2-9b3ade7fe7be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Processing event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:55:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:55:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:04.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:55:04 np0005466030 podman[288918]: 2025-10-02 12:55:04.587711044 +0000 UTC m=+0.630244824 container init 0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:55:04 np0005466030 podman[288918]: 2025-10-02 12:55:04.59426983 +0000 UTC m=+0.636803580 container start 0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.619 2 DEBUG nova.compute.manager [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:55:04 np0005466030 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288969]: [NOTICE]   (288973) : New worker (288975) forked
Oct  2 08:55:04 np0005466030 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288969]: [NOTICE]   (288973) : Loading success.
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.621 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409704.619002, a1e0932b-16b6-46b9-8192-b89b91e91802 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.622 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] VM Started (Lifecycle Event)#033[00m
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.632 2 INFO nova.virt.libvirt.driver [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance running successfully.#033[00m
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.633 2 DEBUG nova.virt.libvirt.driver [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887#033[00m
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.646 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.649 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.675 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] During sync_power_state the instance has a pending task (resize_reverting). Skip.#033[00m
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.675 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409704.6199708, a1e0932b-16b6-46b9-8192-b89b91e91802 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.676 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.704 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.709 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409704.6275783, a1e0932b-16b6-46b9-8192-b89b91e91802 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.709 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.721 2 INFO nova.compute.manager [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance to original state: 'active'#033[00m
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.730 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.734 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:55:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:04.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:04 np0005466030 nova_compute[230518]: 2025-10-02 12:55:04.770 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] During sync_power_state the instance has a pending task (resize_reverting). Skip.#033[00m
Oct  2 08:55:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:55:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:06.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:55:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:06.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:06 np0005466030 nova_compute[230518]: 2025-10-02 12:55:06.907 2 DEBUG nova.compute.manager [req-5e231c75-f889-4b9f-974f-4afeb79d8ded req-5dbb104b-8846-4823-9f65-6a0292976bcf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:06 np0005466030 nova_compute[230518]: 2025-10-02 12:55:06.908 2 DEBUG oslo_concurrency.lockutils [req-5e231c75-f889-4b9f-974f-4afeb79d8ded req-5dbb104b-8846-4823-9f65-6a0292976bcf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:06 np0005466030 nova_compute[230518]: 2025-10-02 12:55:06.908 2 DEBUG oslo_concurrency.lockutils [req-5e231c75-f889-4b9f-974f-4afeb79d8ded req-5dbb104b-8846-4823-9f65-6a0292976bcf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:06 np0005466030 nova_compute[230518]: 2025-10-02 12:55:06.908 2 DEBUG oslo_concurrency.lockutils [req-5e231c75-f889-4b9f-974f-4afeb79d8ded req-5dbb104b-8846-4823-9f65-6a0292976bcf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:06 np0005466030 nova_compute[230518]: 2025-10-02 12:55:06.908 2 DEBUG nova.compute.manager [req-5e231c75-f889-4b9f-974f-4afeb79d8ded req-5dbb104b-8846-4823-9f65-6a0292976bcf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:55:06 np0005466030 nova_compute[230518]: 2025-10-02 12:55:06.909 2 WARNING nova.compute.manager [req-5e231c75-f889-4b9f-974f-4afeb79d8ded req-5dbb104b-8846-4823-9f65-6a0292976bcf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:55:07 np0005466030 nova_compute[230518]: 2025-10-02 12:55:07.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:08 np0005466030 nova_compute[230518]: 2025-10-02 12:55:08.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:08.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:08.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:08 np0005466030 podman[288985]: 2025-10-02 12:55:08.811436705 +0000 UTC m=+0.059374014 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:55:08 np0005466030 podman[288984]: 2025-10-02 12:55:08.834610911 +0000 UTC m=+0.085367848 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 08:55:09 np0005466030 nova_compute[230518]: 2025-10-02 12:55:09.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:09 np0005466030 nova_compute[230518]: 2025-10-02 12:55:09.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:09 np0005466030 nova_compute[230518]: 2025-10-02 12:55:09.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:09 np0005466030 nova_compute[230518]: 2025-10-02 12:55:09.108 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:09 np0005466030 nova_compute[230518]: 2025-10-02 12:55:09.109 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:09 np0005466030 nova_compute[230518]: 2025-10-02 12:55:09.109 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:09 np0005466030 nova_compute[230518]: 2025-10-02 12:55:09.109 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:55:09 np0005466030 nova_compute[230518]: 2025-10-02 12:55:09.109 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:55:09 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2393667972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:55:09 np0005466030 nova_compute[230518]: 2025-10-02 12:55:09.590 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:09 np0005466030 nova_compute[230518]: 2025-10-02 12:55:09.842 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:55:09 np0005466030 nova_compute[230518]: 2025-10-02 12:55:09.843 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:55:10 np0005466030 nova_compute[230518]: 2025-10-02 12:55:10.026 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:55:10 np0005466030 nova_compute[230518]: 2025-10-02 12:55:10.027 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4227MB free_disk=20.87628173828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:55:10 np0005466030 nova_compute[230518]: 2025-10-02 12:55:10.028 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:10 np0005466030 nova_compute[230518]: 2025-10-02 12:55:10.028 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:10 np0005466030 nova_compute[230518]: 2025-10-02 12:55:10.327 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance a1e0932b-16b6-46b9-8192-b89b91e91802 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:55:10 np0005466030 nova_compute[230518]: 2025-10-02 12:55:10.328 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:55:10 np0005466030 nova_compute[230518]: 2025-10-02 12:55:10.328 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:55:10 np0005466030 nova_compute[230518]: 2025-10-02 12:55:10.365 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:10.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:55:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:10.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:55:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:55:10 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2956126067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:55:10 np0005466030 nova_compute[230518]: 2025-10-02 12:55:10.804 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:10 np0005466030 nova_compute[230518]: 2025-10-02 12:55:10.810 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:55:10 np0005466030 nova_compute[230518]: 2025-10-02 12:55:10.893 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:55:11 np0005466030 nova_compute[230518]: 2025-10-02 12:55:11.009 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:55:11 np0005466030 nova_compute[230518]: 2025-10-02 12:55:11.010 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:11 np0005466030 nova_compute[230518]: 2025-10-02 12:55:11.920 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquiring lock "2b371444-62dc-4270-8164-64eac7dcead4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:11 np0005466030 nova_compute[230518]: 2025-10-02 12:55:11.921 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:11 np0005466030 nova_compute[230518]: 2025-10-02 12:55:11.963 2 DEBUG nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:55:12 np0005466030 nova_compute[230518]: 2025-10-02 12:55:12.067 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:12 np0005466030 nova_compute[230518]: 2025-10-02 12:55:12.067 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:12 np0005466030 nova_compute[230518]: 2025-10-02 12:55:12.082 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:55:12 np0005466030 nova_compute[230518]: 2025-10-02 12:55:12.082 2 INFO nova.compute.claims [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:55:12 np0005466030 nova_compute[230518]: 2025-10-02 12:55:12.258 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:55:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:12.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:55:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:55:12 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2576941608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:55:12 np0005466030 nova_compute[230518]: 2025-10-02 12:55:12.677 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:12 np0005466030 nova_compute[230518]: 2025-10-02 12:55:12.684 2 DEBUG nova.compute.provider_tree [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:55:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:12.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:12 np0005466030 nova_compute[230518]: 2025-10-02 12:55:12.767 2 DEBUG nova.scheduler.client.report [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:55:12 np0005466030 nova_compute[230518]: 2025-10-02 12:55:12.824 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:12 np0005466030 nova_compute[230518]: 2025-10-02 12:55:12.824 2 DEBUG nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:55:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 e342: 3 total, 3 up, 3 in
Oct  2 08:55:12 np0005466030 nova_compute[230518]: 2025-10-02 12:55:12.891 2 DEBUG nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:55:12 np0005466030 nova_compute[230518]: 2025-10-02 12:55:12.892 2 DEBUG nova.network.neutron [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:55:12 np0005466030 nova_compute[230518]: 2025-10-02 12:55:12.920 2 INFO nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:55:12 np0005466030 nova_compute[230518]: 2025-10-02 12:55:12.954 2 DEBUG nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:55:13 np0005466030 nova_compute[230518]: 2025-10-02 12:55:13.036 2 DEBUG nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:55:13 np0005466030 nova_compute[230518]: 2025-10-02 12:55:13.037 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:55:13 np0005466030 nova_compute[230518]: 2025-10-02 12:55:13.038 2 INFO nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Creating image(s)#033[00m
Oct  2 08:55:13 np0005466030 nova_compute[230518]: 2025-10-02 12:55:13.084 2 DEBUG nova.storage.rbd_utils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] rbd image 2b371444-62dc-4270-8164-64eac7dcead4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:13 np0005466030 nova_compute[230518]: 2025-10-02 12:55:13.110 2 DEBUG nova.storage.rbd_utils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] rbd image 2b371444-62dc-4270-8164-64eac7dcead4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:13 np0005466030 nova_compute[230518]: 2025-10-02 12:55:13.135 2 DEBUG nova.storage.rbd_utils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] rbd image 2b371444-62dc-4270-8164-64eac7dcead4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:13 np0005466030 nova_compute[230518]: 2025-10-02 12:55:13.139 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:13 np0005466030 nova_compute[230518]: 2025-10-02 12:55:13.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:13 np0005466030 nova_compute[230518]: 2025-10-02 12:55:13.220 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:13 np0005466030 nova_compute[230518]: 2025-10-02 12:55:13.221 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:13 np0005466030 nova_compute[230518]: 2025-10-02 12:55:13.222 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:13 np0005466030 nova_compute[230518]: 2025-10-02 12:55:13.222 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:13 np0005466030 nova_compute[230518]: 2025-10-02 12:55:13.247 2 DEBUG nova.storage.rbd_utils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] rbd image 2b371444-62dc-4270-8164-64eac7dcead4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:13 np0005466030 nova_compute[230518]: 2025-10-02 12:55:13.251 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2b371444-62dc-4270-8164-64eac7dcead4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:13 np0005466030 nova_compute[230518]: 2025-10-02 12:55:13.285 2 DEBUG nova.policy [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9ea4224783c14b01bd0ff8988a45a5f2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b48381b3787c4f3d9bb0c9050cf4c52c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:55:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:14 np0005466030 nova_compute[230518]: 2025-10-02 12:55:14.011 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:14 np0005466030 nova_compute[230518]: 2025-10-02 12:55:14.012 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:14 np0005466030 nova_compute[230518]: 2025-10-02 12:55:14.012 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:55:14 np0005466030 nova_compute[230518]: 2025-10-02 12:55:14.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:14 np0005466030 nova_compute[230518]: 2025-10-02 12:55:14.056 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:14 np0005466030 nova_compute[230518]: 2025-10-02 12:55:14.056 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:55:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:14.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:55:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:14.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:14 np0005466030 nova_compute[230518]: 2025-10-02 12:55:14.784 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2b371444-62dc-4270-8164-64eac7dcead4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:14 np0005466030 nova_compute[230518]: 2025-10-02 12:55:14.860 2 DEBUG nova.storage.rbd_utils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] resizing rbd image 2b371444-62dc-4270-8164-64eac7dcead4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:55:15 np0005466030 nova_compute[230518]: 2025-10-02 12:55:15.178 2 DEBUG nova.objects.instance [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lazy-loading 'migration_context' on Instance uuid 2b371444-62dc-4270-8164-64eac7dcead4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:15 np0005466030 nova_compute[230518]: 2025-10-02 12:55:15.192 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:55:15 np0005466030 nova_compute[230518]: 2025-10-02 12:55:15.193 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Ensure instance console log exists: /var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:55:15 np0005466030 nova_compute[230518]: 2025-10-02 12:55:15.194 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:15 np0005466030 nova_compute[230518]: 2025-10-02 12:55:15.194 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:15 np0005466030 nova_compute[230518]: 2025-10-02 12:55:15.194 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:16.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:16 np0005466030 nova_compute[230518]: 2025-10-02 12:55:16.634 2 DEBUG nova.network.neutron [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Successfully created port: 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:55:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:16.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:17 np0005466030 nova_compute[230518]: 2025-10-02 12:55:17.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:18 np0005466030 nova_compute[230518]: 2025-10-02 12:55:18.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:18 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:18Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:41:1c 10.100.0.7
Oct  2 08:55:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:18.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:18.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:18 np0005466030 nova_compute[230518]: 2025-10-02 12:55:18.861 2 DEBUG nova.network.neutron [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Successfully updated port: 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:55:18 np0005466030 nova_compute[230518]: 2025-10-02 12:55:18.887 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquiring lock "refresh_cache-2b371444-62dc-4270-8164-64eac7dcead4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:55:18 np0005466030 nova_compute[230518]: 2025-10-02 12:55:18.887 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquired lock "refresh_cache-2b371444-62dc-4270-8164-64eac7dcead4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:55:18 np0005466030 nova_compute[230518]: 2025-10-02 12:55:18.887 2 DEBUG nova.network.neutron [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:55:19 np0005466030 nova_compute[230518]: 2025-10-02 12:55:19.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:19 np0005466030 nova_compute[230518]: 2025-10-02 12:55:19.207 2 DEBUG nova.network.neutron [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.246 2 DEBUG nova.network.neutron [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Updating instance_info_cache with network_info: [{"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.261 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Releasing lock "refresh_cache-2b371444-62dc-4270-8164-64eac7dcead4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.261 2 DEBUG nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Instance network_info: |[{"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.264 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Start _get_guest_xml network_info=[{"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.267 2 WARNING nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.274 2 DEBUG nova.virt.libvirt.host [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.274 2 DEBUG nova.virt.libvirt.host [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.276 2 DEBUG nova.virt.libvirt.host [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.277 2 DEBUG nova.virt.libvirt.host [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.278 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.278 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.278 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.279 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.279 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.279 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.279 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.279 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.280 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.280 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.280 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.280 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.283 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:55:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:20.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:55:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:55:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:20.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:55:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:55:20 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1742637085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.823 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.857 2 DEBUG nova.storage.rbd_utils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] rbd image 2b371444-62dc-4270-8164-64eac7dcead4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.862 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.893 2 DEBUG nova.compute.manager [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Received event network-changed-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.894 2 DEBUG nova.compute.manager [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Refreshing instance network info cache due to event network-changed-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.894 2 DEBUG oslo_concurrency.lockutils [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2b371444-62dc-4270-8164-64eac7dcead4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.895 2 DEBUG oslo_concurrency.lockutils [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2b371444-62dc-4270-8164-64eac7dcead4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:55:20 np0005466030 nova_compute[230518]: 2025-10-02 12:55:20.895 2 DEBUG nova.network.neutron [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Refreshing network info cache for port 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:55:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:55:21 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3610127590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.292 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.293 2 DEBUG nova.virt.libvirt.vif [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-861404210',display_name='tempest-ServerMetadataTestJSON-server-861404210',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-861404210',id=150,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b48381b3787c4f3d9bb0c9050cf4c52c',ramdisk_id='',reservation_id='r-jg3xbq5e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-459689577',owner_user_name='tempest-ServerMetadataTestJSON-
459689577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:55:12Z,user_data=None,user_id='9ea4224783c14b01bd0ff8988a45a5f2',uuid=2b371444-62dc-4270-8164-64eac7dcead4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.294 2 DEBUG nova.network.os_vif_util [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Converting VIF {"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.295 2 DEBUG nova.network.os_vif_util [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:de:52,bridge_name='br-int',has_traffic_filtering=True,id=8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4,network=Network(f266165c-cf86-4062-8010-5a7ecdec1578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d31e365-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.296 2 DEBUG nova.objects.instance [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b371444-62dc-4270-8164-64eac7dcead4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.310 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:55:21 np0005466030 nova_compute[230518]:  <uuid>2b371444-62dc-4270-8164-64eac7dcead4</uuid>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:  <name>instance-00000096</name>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerMetadataTestJSON-server-861404210</nova:name>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:55:20</nova:creationTime>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:55:21 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:        <nova:user uuid="9ea4224783c14b01bd0ff8988a45a5f2">tempest-ServerMetadataTestJSON-459689577-project-member</nova:user>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:        <nova:project uuid="b48381b3787c4f3d9bb0c9050cf4c52c">tempest-ServerMetadataTestJSON-459689577</nova:project>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:        <nova:port uuid="8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4">
Oct  2 08:55:21 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <entry name="serial">2b371444-62dc-4270-8164-64eac7dcead4</entry>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <entry name="uuid">2b371444-62dc-4270-8164-64eac7dcead4</entry>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/2b371444-62dc-4270-8164-64eac7dcead4_disk">
Oct  2 08:55:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:55:21 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/2b371444-62dc-4270-8164-64eac7dcead4_disk.config">
Oct  2 08:55:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:55:21 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:68:de:52"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <target dev="tap8d31e365-a7"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4/console.log" append="off"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:55:21 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:55:21 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:55:21 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:55:21 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.311 2 DEBUG nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Preparing to wait for external event network-vif-plugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.312 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquiring lock "2b371444-62dc-4270-8164-64eac7dcead4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.312 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.312 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.313 2 DEBUG nova.virt.libvirt.vif [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-861404210',display_name='tempest-ServerMetadataTestJSON-server-861404210',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-861404210',id=150,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b48381b3787c4f3d9bb0c9050cf4c52c',ramdisk_id='',reservation_id='r-jg3xbq5e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-459689577',owner_user_name='tempest-ServerMetadat
aTestJSON-459689577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:55:12Z,user_data=None,user_id='9ea4224783c14b01bd0ff8988a45a5f2',uuid=2b371444-62dc-4270-8164-64eac7dcead4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.313 2 DEBUG nova.network.os_vif_util [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Converting VIF {"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.314 2 DEBUG nova.network.os_vif_util [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:de:52,bridge_name='br-int',has_traffic_filtering=True,id=8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4,network=Network(f266165c-cf86-4062-8010-5a7ecdec1578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d31e365-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.314 2 DEBUG os_vif [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:de:52,bridge_name='br-int',has_traffic_filtering=True,id=8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4,network=Network(f266165c-cf86-4062-8010-5a7ecdec1578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d31e365-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.316 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.316 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.320 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d31e365-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.321 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d31e365-a7, col_values=(('external_ids', {'iface-id': '8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:de:52', 'vm-uuid': '2b371444-62dc-4270-8164-64eac7dcead4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:21 np0005466030 NetworkManager[44960]: <info>  [1759409721.3244] manager: (tap8d31e365-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.335 2 INFO os_vif [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:de:52,bridge_name='br-int',has_traffic_filtering=True,id=8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4,network=Network(f266165c-cf86-4062-8010-5a7ecdec1578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d31e365-a7')#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.554 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.554 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.555 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] No VIF found with MAC fa:16:3e:68:de:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.555 2 INFO nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Using config drive#033[00m
Oct  2 08:55:21 np0005466030 nova_compute[230518]: 2025-10-02 12:55:21.883 2 DEBUG nova.storage.rbd_utils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] rbd image 2b371444-62dc-4270-8164-64eac7dcead4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 08:55:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:22.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 08:55:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:22.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:23 np0005466030 nova_compute[230518]: 2025-10-02 12:55:23.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:55:23 np0005466030 nova_compute[230518]: 2025-10-02 12:55:23.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:55:23 np0005466030 nova_compute[230518]: 2025-10-02 12:55:23.231 2 INFO nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Creating config drive at /var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4/disk.config#033[00m
Oct  2 08:55:23 np0005466030 nova_compute[230518]: 2025-10-02 12:55:23.236 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps_gklfp1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:23 np0005466030 nova_compute[230518]: 2025-10-02 12:55:23.264 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:55:23 np0005466030 nova_compute[230518]: 2025-10-02 12:55:23.264 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:55:23 np0005466030 nova_compute[230518]: 2025-10-02 12:55:23.265 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:55:23 np0005466030 nova_compute[230518]: 2025-10-02 12:55:23.372 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps_gklfp1" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:23 np0005466030 nova_compute[230518]: 2025-10-02 12:55:23.404 2 DEBUG nova.storage.rbd_utils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] rbd image 2b371444-62dc-4270-8164-64eac7dcead4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:23 np0005466030 nova_compute[230518]: 2025-10-02 12:55:23.409 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4/disk.config 2b371444-62dc-4270-8164-64eac7dcead4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:23 np0005466030 nova_compute[230518]: 2025-10-02 12:55:23.771 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4/disk.config 2b371444-62dc-4270-8164-64eac7dcead4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:23 np0005466030 nova_compute[230518]: 2025-10-02 12:55:23.772 2 INFO nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Deleting local config drive /var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4/disk.config because it was imported into RBD.#033[00m
Oct  2 08:55:23 np0005466030 kernel: tap8d31e365-a7: entered promiscuous mode
Oct  2 08:55:23 np0005466030 NetworkManager[44960]: <info>  [1759409723.8437] manager: (tap8d31e365-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Oct  2 08:55:23 np0005466030 nova_compute[230518]: 2025-10-02 12:55:23.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:23 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:23Z|00625|binding|INFO|Claiming lport 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 for this chassis.
Oct  2 08:55:23 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:23Z|00626|binding|INFO|8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4: Claiming fa:16:3e:68:de:52 10.100.0.7
Oct  2 08:55:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.853 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:de:52 10.100.0.7'], port_security=['fa:16:3e:68:de:52 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2b371444-62dc-4270-8164-64eac7dcead4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f266165c-cf86-4062-8010-5a7ecdec1578', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b48381b3787c4f3d9bb0c9050cf4c52c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2c33b6f9-8dd0-4521-bcdc-04a49781adff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a527214-80f6-4bea-aba9-1aa4b53a782b, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:55:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.854 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 in datapath f266165c-cf86-4062-8010-5a7ecdec1578 bound to our chassis#033[00m
Oct  2 08:55:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.856 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f266165c-cf86-4062-8010-5a7ecdec1578#033[00m
Oct  2 08:55:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:23 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:23Z|00627|binding|INFO|Setting lport 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 ovn-installed in OVS
Oct  2 08:55:23 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:23Z|00628|binding|INFO|Setting lport 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 up in Southbound
Oct  2 08:55:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.868 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ffeea0f1-92da-4486-80e8-14a333831382]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.869 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf266165c-c1 in ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:55:23 np0005466030 nova_compute[230518]: 2025-10-02 12:55:23.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.871 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf266165c-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:55:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.872 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[04402ec6-bf35-408b-8299-d055304657e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:23 np0005466030 nova_compute[230518]: 2025-10-02 12:55:23.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.872 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[245a0de5-7541-44ad-8bc2-d9f768578caf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:23 np0005466030 systemd-udevd[289393]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:55:23 np0005466030 systemd-machined[188247]: New machine qemu-73-instance-00000096.
Oct  2 08:55:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.892 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[9179410c-8a9b-4fc8-9c64-1100afea9038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:23 np0005466030 NetworkManager[44960]: <info>  [1759409723.8947] device (tap8d31e365-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:55:23 np0005466030 NetworkManager[44960]: <info>  [1759409723.8958] device (tap8d31e365-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:55:23 np0005466030 systemd[1]: Started Virtual Machine qemu-73-instance-00000096.
Oct  2 08:55:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.913 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1c807042-2d1c-4730-bafd-ac60316fd6b7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.945 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f72328-333d-4c74-8b4f-06d0677369d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:23 np0005466030 NetworkManager[44960]: <info>  [1759409723.9530] manager: (tapf266165c-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/296)
Oct  2 08:55:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.953 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc8bc94-2f68-468a-a0d3-4c8c63178f52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.986 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[3de9419b-0f9e-43eb-afca-2c06979d8897]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.990 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c95cb835-ff7e-464c-be8d-819ee6036fe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:24 np0005466030 NetworkManager[44960]: <info>  [1759409724.0162] device (tapf266165c-c0): carrier: link connected
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.021 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2839345a-8b71-4f09-89d5-0a614790c521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.039 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9263897e-4f35-4467-8582-769c5ac63cbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf266165c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:52:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758757, 'reachable_time': 33756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289426, 'error': None, 'target': 'ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:24 np0005466030 nova_compute[230518]: 2025-10-02 12:55:24.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.063 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[92375c7b-7bb3-4534-b73f-4cf2f3f955e4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:5225'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758757, 'tstamp': 758757}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289427, 'error': None, 'target': 'ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.084 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e2943fef-2ec9-40d6-9054-9a1e6b2ee5b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf266165c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:52:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758757, 'reachable_time': 33756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289428, 'error': None, 'target': 'ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.123 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dd97d4b3-31d5-49d8-82f0-1bb706631e67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:24 np0005466030 nova_compute[230518]: 2025-10-02 12:55:24.131 2 INFO nova.compute.manager [None req-6815fa79-2f16-426f-86ad-c4f394a9715a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Get console output#033[00m
Oct  2 08:55:24 np0005466030 nova_compute[230518]: 2025-10-02 12:55:24.137 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.201 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b86c96bb-ff37-4eea-a325-df3142ea38c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.203 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf266165c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.203 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.204 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf266165c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:24 np0005466030 nova_compute[230518]: 2025-10-02 12:55:24.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:24 np0005466030 NetworkManager[44960]: <info>  [1759409724.2061] manager: (tapf266165c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Oct  2 08:55:24 np0005466030 kernel: tapf266165c-c0: entered promiscuous mode
Oct  2 08:55:24 np0005466030 nova_compute[230518]: 2025-10-02 12:55:24.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.209 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf266165c-c0, col_values=(('external_ids', {'iface-id': 'efec5d16-2b0c-4379-bb94-6e89b3bd1faf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:24 np0005466030 nova_compute[230518]: 2025-10-02 12:55:24.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:24 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:24Z|00629|binding|INFO|Releasing lport efec5d16-2b0c-4379-bb94-6e89b3bd1faf from this chassis (sb_readonly=0)
Oct  2 08:55:24 np0005466030 nova_compute[230518]: 2025-10-02 12:55:24.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.216 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f266165c-cf86-4062-8010-5a7ecdec1578.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f266165c-cf86-4062-8010-5a7ecdec1578.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.218 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c72eab6c-7bfe-4992-a737-c4c10baf97ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.219 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-f266165c-cf86-4062-8010-5a7ecdec1578
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/f266165c-cf86-4062-8010-5a7ecdec1578.pid.haproxy
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID f266165c-cf86-4062-8010-5a7ecdec1578
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:55:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.219 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578', 'env', 'PROCESS_TAG=haproxy-f266165c-cf86-4062-8010-5a7ecdec1578', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f266165c-cf86-4062-8010-5a7ecdec1578.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:55:24 np0005466030 nova_compute[230518]: 2025-10-02 12:55:24.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:24.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:24 np0005466030 podman[289474]: 2025-10-02 12:55:24.586953287 +0000 UTC m=+0.024811169 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:55:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:55:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:24.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:55:25 np0005466030 podman[289474]: 2025-10-02 12:55:25.249848745 +0000 UTC m=+0.687706627 container create c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.260 2 DEBUG nova.network.neutron [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Updated VIF entry in instance network info cache for port 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.260 2 DEBUG nova.network.neutron [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Updating instance_info_cache with network_info: [{"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.278 2 DEBUG oslo_concurrency.lockutils [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2b371444-62dc-4270-8164-64eac7dcead4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:55:25 np0005466030 systemd[1]: Started libpod-conmon-c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c.scope.
Oct  2 08:55:25 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.321 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409725.320647, 2b371444-62dc-4270-8164-64eac7dcead4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:55:25 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abcf7a3e9396fcd78c2400963121561ebe3f42a3433196532cd89ee9d7c19a83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.322 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] VM Started (Lifecycle Event)#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.351 2 DEBUG nova.compute.manager [req-36633fc3-9f06-4ae1-a658-f18c8b8177e0 req-183fbb39-1ef0-499e-9632-b7c1c97cf02a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Received event network-vif-plugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.351 2 DEBUG oslo_concurrency.lockutils [req-36633fc3-9f06-4ae1-a658-f18c8b8177e0 req-183fbb39-1ef0-499e-9632-b7c1c97cf02a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2b371444-62dc-4270-8164-64eac7dcead4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.352 2 DEBUG oslo_concurrency.lockutils [req-36633fc3-9f06-4ae1-a658-f18c8b8177e0 req-183fbb39-1ef0-499e-9632-b7c1c97cf02a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.352 2 DEBUG oslo_concurrency.lockutils [req-36633fc3-9f06-4ae1-a658-f18c8b8177e0 req-183fbb39-1ef0-499e-9632-b7c1c97cf02a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.352 2 DEBUG nova.compute.manager [req-36633fc3-9f06-4ae1-a658-f18c8b8177e0 req-183fbb39-1ef0-499e-9632-b7c1c97cf02a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Processing event network-vif-plugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.353 2 DEBUG nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.355 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.357 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.360 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.362 2 INFO nova.virt.libvirt.driver [-] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Instance spawned successfully.#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.362 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.389 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.389 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.390 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.390 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.390 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.391 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:55:25 np0005466030 podman[289474]: 2025-10-02 12:55:25.392684985 +0000 UTC m=+0.830542897 container init c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.396 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.396 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409725.320851, 2b371444-62dc-4270-8164-64eac7dcead4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.397 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:55:25 np0005466030 podman[289474]: 2025-10-02 12:55:25.398483237 +0000 UTC m=+0.836341119 container start c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:55:25 np0005466030 neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578[289518]: [NOTICE]   (289522) : New worker (289524) forked
Oct  2 08:55:25 np0005466030 neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578[289518]: [NOTICE]   (289522) : Loading success.
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.424 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.428 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409725.3565223, 2b371444-62dc-4270-8164-64eac7dcead4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.429 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.451 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.457 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.460 2 INFO nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Took 12.42 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.460 2 DEBUG nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.490 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.597 2 INFO nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Took 13.54 seconds to build instance.#033[00m
Oct  2 08:55:25 np0005466030 nova_compute[230518]: 2025-10-02 12:55:25.613 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:25.952 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:25.952 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:25.953 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:26 np0005466030 nova_compute[230518]: 2025-10-02 12:55:26.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:26.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:55:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:26.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:55:26 np0005466030 nova_compute[230518]: 2025-10-02 12:55:26.822 2 DEBUG oslo_concurrency.lockutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:26 np0005466030 nova_compute[230518]: 2025-10-02 12:55:26.823 2 DEBUG oslo_concurrency.lockutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:26 np0005466030 nova_compute[230518]: 2025-10-02 12:55:26.823 2 DEBUG oslo_concurrency.lockutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:26 np0005466030 nova_compute[230518]: 2025-10-02 12:55:26.824 2 DEBUG oslo_concurrency.lockutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:26 np0005466030 nova_compute[230518]: 2025-10-02 12:55:26.824 2 DEBUG oslo_concurrency.lockutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:26 np0005466030 nova_compute[230518]: 2025-10-02 12:55:26.825 2 INFO nova.compute.manager [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Terminating instance#033[00m
Oct  2 08:55:26 np0005466030 nova_compute[230518]: 2025-10-02 12:55:26.826 2 DEBUG nova.compute.manager [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:55:26 np0005466030 kernel: tap20204810-ff (unregistering): left promiscuous mode
Oct  2 08:55:26 np0005466030 NetworkManager[44960]: <info>  [1759409726.9596] device (tap20204810-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:55:26 np0005466030 nova_compute[230518]: 2025-10-02 12:55:26.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:26 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:26Z|00630|binding|INFO|Releasing lport 20204810-ff47-450e-80e5-23d03b435455 from this chassis (sb_readonly=0)
Oct  2 08:55:26 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:26Z|00631|binding|INFO|Setting lport 20204810-ff47-450e-80e5-23d03b435455 down in Southbound
Oct  2 08:55:26 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:26Z|00632|binding|INFO|Removing iface tap20204810-ff ovn-installed in OVS
Oct  2 08:55:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:26.976 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:41:1c 10.100.0.7'], port_security=['fa:16:3e:5b:41:1c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a1e0932b-16b6-46b9-8192-b89b91e91802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a39243cb-5286-4429-8879-7b4d535de128', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'd1b59c5d-0681-456e-a8d1-b3629344f9b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf84aebf-21d8-4569-891a-417406561224, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=20204810-ff47-450e-80e5-23d03b435455) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:55:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:26.977 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 20204810-ff47-450e-80e5-23d03b435455 in datapath a39243cb-5286-4429-8879-7b4d535de128 unbound from our chassis#033[00m
Oct  2 08:55:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:26.979 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a39243cb-5286-4429-8879-7b4d535de128, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:55:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:26.980 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f682c31b-524b-42f7-8502-f50751377aea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:26.981 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 namespace which is not needed anymore#033[00m
Oct  2 08:55:26 np0005466030 nova_compute[230518]: 2025-10-02 12:55:26.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:27 np0005466030 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000092.scope: Deactivated successfully.
Oct  2 08:55:27 np0005466030 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000092.scope: Consumed 13.883s CPU time.
Oct  2 08:55:27 np0005466030 systemd-machined[188247]: Machine qemu-72-instance-00000092 terminated.
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.063 2 INFO nova.virt.libvirt.driver [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance destroyed successfully.#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.063 2 DEBUG nova.objects.instance [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'resources' on Instance uuid a1e0932b-16b6-46b9-8192-b89b91e91802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.085 2 DEBUG nova.compute.manager [req-33e0e519-9746-4409-bdd8-714d4357ac3d req-668e1574-ac6a-41e8-b424-ba95d0d586c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-changed-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.086 2 DEBUG nova.compute.manager [req-33e0e519-9746-4409-bdd8-714d4357ac3d req-668e1574-ac6a-41e8-b424-ba95d0d586c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing instance network info cache due to event network-changed-20204810-ff47-450e-80e5-23d03b435455. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.087 2 DEBUG oslo_concurrency.lockutils [req-33e0e519-9746-4409-bdd8-714d4357ac3d req-668e1574-ac6a-41e8-b424-ba95d0d586c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.098 2 DEBUG nova.virt.libvirt.vif [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1723654799',display_name='tempest-TestNetworkAdvancedServerOps-server-1723654799',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1723654799',id=146,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX6YVDsN9ZX5bxWRi+hOcCI5VJa6Q2nedRNQF3bG+0Pznov2NvoOk008+cPv/dH5+9KDDN9Rpi1O2z1pYZSfJd9pzzfPLrJFsvhHAGAb1dgOP5UShntoHoUWnJ4mGisJQ==',key_name='tempest-TestNetworkAdvancedServerOps-520644426',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:55:04Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hzwwvz6x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:55:04Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=a1e0932b-16b6-46b9-8192-b89b91e91802,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.099 2 DEBUG nova.network.os_vif_util [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.100 2 DEBUG nova.network.os_vif_util [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.100 2 DEBUG os_vif [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.105 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20204810-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.112 2 INFO os_vif [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff')#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.135 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.158 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.159 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.160 2 DEBUG oslo_concurrency.lockutils [req-33e0e519-9746-4409-bdd8-714d4357ac3d req-668e1574-ac6a-41e8-b424-ba95d0d586c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.160 2 DEBUG nova.network.neutron [req-33e0e519-9746-4409-bdd8-714d4357ac3d req-668e1574-ac6a-41e8-b424-ba95d0d586c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing network info cache for port 20204810-ff47-450e-80e5-23d03b435455 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:55:27 np0005466030 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288969]: [NOTICE]   (288973) : haproxy version is 2.8.14-c23fe91
Oct  2 08:55:27 np0005466030 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288969]: [NOTICE]   (288973) : path to executable is /usr/sbin/haproxy
Oct  2 08:55:27 np0005466030 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288969]: [WARNING]  (288973) : Exiting Master process...
Oct  2 08:55:27 np0005466030 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288969]: [ALERT]    (288973) : Current worker (288975) exited with code 143 (Terminated)
Oct  2 08:55:27 np0005466030 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288969]: [WARNING]  (288973) : All workers exited. Exiting... (0)
Oct  2 08:55:27 np0005466030 systemd[1]: libpod-0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383.scope: Deactivated successfully.
Oct  2 08:55:27 np0005466030 podman[289567]: 2025-10-02 12:55:27.432225093 +0000 UTC m=+0.328412565 container died 0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.515 2 DEBUG nova.compute.manager [req-b198a043-459a-4ca5-99a1-977b5c6bb113 req-b949a5d9-e29a-4834-a06b-08613da65091 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Received event network-vif-plugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.517 2 DEBUG oslo_concurrency.lockutils [req-b198a043-459a-4ca5-99a1-977b5c6bb113 req-b949a5d9-e29a-4834-a06b-08613da65091 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2b371444-62dc-4270-8164-64eac7dcead4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.517 2 DEBUG oslo_concurrency.lockutils [req-b198a043-459a-4ca5-99a1-977b5c6bb113 req-b949a5d9-e29a-4834-a06b-08613da65091 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.518 2 DEBUG oslo_concurrency.lockutils [req-b198a043-459a-4ca5-99a1-977b5c6bb113 req-b949a5d9-e29a-4834-a06b-08613da65091 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.519 2 DEBUG nova.compute.manager [req-b198a043-459a-4ca5-99a1-977b5c6bb113 req-b949a5d9-e29a-4834-a06b-08613da65091 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] No waiting events found dispatching network-vif-plugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:55:27 np0005466030 nova_compute[230518]: 2025-10-02 12:55:27.519 2 WARNING nova.compute.manager [req-b198a043-459a-4ca5-99a1-977b5c6bb113 req-b949a5d9-e29a-4834-a06b-08613da65091 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Received unexpected event network-vif-plugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:55:28 np0005466030 systemd[1]: var-lib-containers-storage-overlay-1d9e319b30d1b4061deb78f71d6770d855f65959fcfb953fc9732e0288e8dbf3-merged.mount: Deactivated successfully.
Oct  2 08:55:28 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383-userdata-shm.mount: Deactivated successfully.
Oct  2 08:55:28 np0005466030 podman[289567]: 2025-10-02 12:55:28.233577333 +0000 UTC m=+1.129764805 container cleanup 0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:55:28 np0005466030 systemd[1]: libpod-conmon-0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383.scope: Deactivated successfully.
Oct  2 08:55:28 np0005466030 podman[289614]: 2025-10-02 12:55:28.617044214 +0000 UTC m=+0.358238171 container remove 0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 08:55:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:28.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:28.624 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[529b3fb4-dfff-452c-8deb-23174e01490d]: (4, ('Thu Oct  2 12:55:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 (0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383)\n0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383\nThu Oct  2 12:55:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 (0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383)\n0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:28.625 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[48a3883b-3447-4d45-8350-7dda49c1693b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:28.626 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa39243cb-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:28 np0005466030 nova_compute[230518]: 2025-10-02 12:55:28.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:28 np0005466030 kernel: tapa39243cb-50: left promiscuous mode
Oct  2 08:55:28 np0005466030 nova_compute[230518]: 2025-10-02 12:55:28.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:28.651 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[145316b7-bddd-4a2a-9c04-afde2d638dc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:28.679 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[825f1f3e-2740-4c4a-bf9a-796545f8ac9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:28.681 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ea6ed287-3a32-405f-8c17-8e8242f28ee8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:28.697 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5958d1a0-4d6c-4e00-b766-adbcd381688c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 756697, 'reachable_time': 33597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289630, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:28.699 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:55:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:28.699 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[95401074-bab8-45b0-9f6a-210e09468347]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:28 np0005466030 systemd[1]: run-netns-ovnmeta\x2da39243cb\x2d5286\x2d4429\x2d8879\x2d7b4d535de128.mount: Deactivated successfully.
Oct  2 08:55:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:28.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.231 2 DEBUG nova.compute.manager [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.231 2 DEBUG oslo_concurrency.lockutils [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.232 2 DEBUG oslo_concurrency.lockutils [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.232 2 DEBUG oslo_concurrency.lockutils [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.233 2 DEBUG nova.compute.manager [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.233 2 DEBUG nova.compute.manager [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.233 2 DEBUG nova.compute.manager [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.234 2 DEBUG oslo_concurrency.lockutils [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.234 2 DEBUG oslo_concurrency.lockutils [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.234 2 DEBUG oslo_concurrency.lockutils [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.235 2 DEBUG nova.compute.manager [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.235 2 WARNING nova.compute.manager [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.525 2 DEBUG nova.network.neutron [req-33e0e519-9746-4409-bdd8-714d4357ac3d req-668e1574-ac6a-41e8-b424-ba95d0d586c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updated VIF entry in instance network info cache for port 20204810-ff47-450e-80e5-23d03b435455. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.526 2 DEBUG nova.network.neutron [req-33e0e519-9746-4409-bdd8-714d4357ac3d req-668e1574-ac6a-41e8-b424-ba95d0d586c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.674 2 DEBUG oslo_concurrency.lockutils [req-33e0e519-9746-4409-bdd8-714d4357ac3d req-668e1574-ac6a-41e8-b424-ba95d0d586c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.692 2 DEBUG oslo_concurrency.lockutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquiring lock "2b371444-62dc-4270-8164-64eac7dcead4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.693 2 DEBUG oslo_concurrency.lockutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.693 2 DEBUG oslo_concurrency.lockutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquiring lock "2b371444-62dc-4270-8164-64eac7dcead4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.694 2 DEBUG oslo_concurrency.lockutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.694 2 DEBUG oslo_concurrency.lockutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.695 2 INFO nova.compute.manager [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Terminating instance#033[00m
Oct  2 08:55:29 np0005466030 nova_compute[230518]: 2025-10-02 12:55:29.696 2 DEBUG nova.compute.manager [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:55:29 np0005466030 podman[289632]: 2025-10-02 12:55:29.795271318 +0000 UTC m=+0.050058842 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 08:55:29 np0005466030 podman[289631]: 2025-10-02 12:55:29.847052422 +0000 UTC m=+0.103020733 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Oct  2 08:55:30 np0005466030 kernel: tap8d31e365-a7 (unregistering): left promiscuous mode
Oct  2 08:55:30 np0005466030 NetworkManager[44960]: <info>  [1759409730.1725] device (tap8d31e365-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:55:30 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:30Z|00633|binding|INFO|Releasing lport 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 from this chassis (sb_readonly=0)
Oct  2 08:55:30 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:30Z|00634|binding|INFO|Setting lport 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 down in Southbound
Oct  2 08:55:30 np0005466030 nova_compute[230518]: 2025-10-02 12:55:30.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:30 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:30Z|00635|binding|INFO|Removing iface tap8d31e365-a7 ovn-installed in OVS
Oct  2 08:55:30 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:30.188 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:de:52 10.100.0.7'], port_security=['fa:16:3e:68:de:52 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2b371444-62dc-4270-8164-64eac7dcead4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f266165c-cf86-4062-8010-5a7ecdec1578', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b48381b3787c4f3d9bb0c9050cf4c52c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2c33b6f9-8dd0-4521-bcdc-04a49781adff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a527214-80f6-4bea-aba9-1aa4b53a782b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:55:30 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:30.190 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 in datapath f266165c-cf86-4062-8010-5a7ecdec1578 unbound from our chassis#033[00m
Oct  2 08:55:30 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:30.191 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f266165c-cf86-4062-8010-5a7ecdec1578, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:55:30 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:30.192 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[396562a5-7aa0-4ea9-8c09-22f73e10644e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:30 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:30.193 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578 namespace which is not needed anymore#033[00m
Oct  2 08:55:30 np0005466030 nova_compute[230518]: 2025-10-02 12:55:30.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:30 np0005466030 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000096.scope: Deactivated successfully.
Oct  2 08:55:30 np0005466030 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000096.scope: Consumed 5.373s CPU time.
Oct  2 08:55:30 np0005466030 systemd-machined[188247]: Machine qemu-73-instance-00000096 terminated.
Oct  2 08:55:30 np0005466030 nova_compute[230518]: 2025-10-02 12:55:30.342 2 INFO nova.virt.libvirt.driver [-] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Instance destroyed successfully.#033[00m
Oct  2 08:55:30 np0005466030 nova_compute[230518]: 2025-10-02 12:55:30.343 2 DEBUG nova.objects.instance [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lazy-loading 'resources' on Instance uuid 2b371444-62dc-4270-8164-64eac7dcead4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:30 np0005466030 nova_compute[230518]: 2025-10-02 12:55:30.360 2 DEBUG nova.virt.libvirt.vif [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:55:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-861404210',display_name='tempest-ServerMetadataTestJSON-server-861404210',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-861404210',id=150,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:55:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b48381b3787c4f3d9bb0c9050cf4c52c',ramdisk_id='',reservation_id='r-jg3xbq5e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio'
,image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-459689577',owner_user_name='tempest-ServerMetadataTestJSON-459689577-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:55:29Z,user_data=None,user_id='9ea4224783c14b01bd0ff8988a45a5f2',uuid=2b371444-62dc-4270-8164-64eac7dcead4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:55:30 np0005466030 nova_compute[230518]: 2025-10-02 12:55:30.360 2 DEBUG nova.network.os_vif_util [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Converting VIF {"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:55:30 np0005466030 nova_compute[230518]: 2025-10-02 12:55:30.361 2 DEBUG nova.network.os_vif_util [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:de:52,bridge_name='br-int',has_traffic_filtering=True,id=8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4,network=Network(f266165c-cf86-4062-8010-5a7ecdec1578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d31e365-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:55:30 np0005466030 nova_compute[230518]: 2025-10-02 12:55:30.361 2 DEBUG os_vif [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:de:52,bridge_name='br-int',has_traffic_filtering=True,id=8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4,network=Network(f266165c-cf86-4062-8010-5a7ecdec1578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d31e365-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:55:30 np0005466030 nova_compute[230518]: 2025-10-02 12:55:30.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:30 np0005466030 nova_compute[230518]: 2025-10-02 12:55:30.363 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d31e365-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:30 np0005466030 nova_compute[230518]: 2025-10-02 12:55:30.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:30 np0005466030 neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578[289518]: [NOTICE]   (289522) : haproxy version is 2.8.14-c23fe91
Oct  2 08:55:30 np0005466030 neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578[289518]: [NOTICE]   (289522) : path to executable is /usr/sbin/haproxy
Oct  2 08:55:30 np0005466030 neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578[289518]: [WARNING]  (289522) : Exiting Master process...
Oct  2 08:55:30 np0005466030 nova_compute[230518]: 2025-10-02 12:55:30.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:55:30 np0005466030 neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578[289518]: [ALERT]    (289522) : Current worker (289524) exited with code 143 (Terminated)
Oct  2 08:55:30 np0005466030 neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578[289518]: [WARNING]  (289522) : All workers exited. Exiting... (0)
Oct  2 08:55:30 np0005466030 nova_compute[230518]: 2025-10-02 12:55:30.368 2 INFO os_vif [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:de:52,bridge_name='br-int',has_traffic_filtering=True,id=8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4,network=Network(f266165c-cf86-4062-8010-5a7ecdec1578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d31e365-a7')#033[00m
Oct  2 08:55:30 np0005466030 systemd[1]: libpod-c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c.scope: Deactivated successfully.
Oct  2 08:55:30 np0005466030 podman[289698]: 2025-10-02 12:55:30.375925384 +0000 UTC m=+0.102303631 container died c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 08:55:30 np0005466030 nova_compute[230518]: 2025-10-02 12:55:30.423 2 DEBUG nova.compute.manager [req-bee6b486-31f2-4328-8268-03fb9eb66fb9 req-ff8e3839-6e82-4dad-8154-215598f991e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Received event network-vif-unplugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:30 np0005466030 nova_compute[230518]: 2025-10-02 12:55:30.424 2 DEBUG oslo_concurrency.lockutils [req-bee6b486-31f2-4328-8268-03fb9eb66fb9 req-ff8e3839-6e82-4dad-8154-215598f991e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2b371444-62dc-4270-8164-64eac7dcead4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:30 np0005466030 nova_compute[230518]: 2025-10-02 12:55:30.424 2 DEBUG oslo_concurrency.lockutils [req-bee6b486-31f2-4328-8268-03fb9eb66fb9 req-ff8e3839-6e82-4dad-8154-215598f991e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:30 np0005466030 nova_compute[230518]: 2025-10-02 12:55:30.425 2 DEBUG oslo_concurrency.lockutils [req-bee6b486-31f2-4328-8268-03fb9eb66fb9 req-ff8e3839-6e82-4dad-8154-215598f991e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:30 np0005466030 nova_compute[230518]: 2025-10-02 12:55:30.425 2 DEBUG nova.compute.manager [req-bee6b486-31f2-4328-8268-03fb9eb66fb9 req-ff8e3839-6e82-4dad-8154-215598f991e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] No waiting events found dispatching network-vif-unplugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:55:30 np0005466030 nova_compute[230518]: 2025-10-02 12:55:30.425 2 DEBUG nova.compute.manager [req-bee6b486-31f2-4328-8268-03fb9eb66fb9 req-ff8e3839-6e82-4dad-8154-215598f991e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Received event network-vif-unplugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:55:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:30.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:30 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c-userdata-shm.mount: Deactivated successfully.
Oct  2 08:55:30 np0005466030 systemd[1]: var-lib-containers-storage-overlay-abcf7a3e9396fcd78c2400963121561ebe3f42a3433196532cd89ee9d7c19a83-merged.mount: Deactivated successfully.
Oct  2 08:55:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:30.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:30 np0005466030 podman[289698]: 2025-10-02 12:55:30.981635776 +0000 UTC m=+0.708014033 container cleanup c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:55:31 np0005466030 podman[289756]: 2025-10-02 12:55:31.789671777 +0000 UTC m=+0.784215544 container remove c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:55:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:31.795 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[70f7e999-3487-434f-a85e-5cb47bb87e9e]: (4, ('Thu Oct  2 12:55:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578 (c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c)\nc2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c\nThu Oct  2 12:55:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578 (c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c)\nc2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:31.796 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d0806c-72cb-4d87-89e2-a205badfb2d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:31.797 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf266165c-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:31 np0005466030 nova_compute[230518]: 2025-10-02 12:55:31.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:31 np0005466030 kernel: tapf266165c-c0: left promiscuous mode
Oct  2 08:55:31 np0005466030 systemd[1]: libpod-conmon-c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c.scope: Deactivated successfully.
Oct  2 08:55:31 np0005466030 nova_compute[230518]: 2025-10-02 12:55:31.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:31.830 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2d202c39-4977-4ba9-9884-f51753f605e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:31.862 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[056f6391-bf30-4967-bdac-bf3cac471884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:31.863 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[720dee3d-165a-41e0-93e2-e5b249c72cfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:31.880 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[93c00daf-87f5-40f6-8388-187bcb8c1fd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758749, 'reachable_time': 26056, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289775, 'error': None, 'target': 'ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:31.882 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:55:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:31.882 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c5d63d-0b57-4783-b51a-8a3a0e5c5361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:31 np0005466030 systemd[1]: run-netns-ovnmeta\x2df266165c\x2dcf86\x2d4062\x2d8010\x2d5a7ecdec1578.mount: Deactivated successfully.
Oct  2 08:55:32 np0005466030 nova_compute[230518]: 2025-10-02 12:55:32.485 2 INFO nova.virt.libvirt.driver [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Deleting instance files /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802_del#033[00m
Oct  2 08:55:32 np0005466030 nova_compute[230518]: 2025-10-02 12:55:32.486 2 INFO nova.virt.libvirt.driver [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Deletion of /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802_del complete#033[00m
Oct  2 08:55:32 np0005466030 nova_compute[230518]: 2025-10-02 12:55:32.502 2 DEBUG nova.compute.manager [req-36b438f4-3b17-4a6e-9c8a-b3ac5a642469 req-1e8b9a9c-8cee-4b67-8cbc-1dd999f11d98 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Received event network-vif-plugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:32 np0005466030 nova_compute[230518]: 2025-10-02 12:55:32.502 2 DEBUG oslo_concurrency.lockutils [req-36b438f4-3b17-4a6e-9c8a-b3ac5a642469 req-1e8b9a9c-8cee-4b67-8cbc-1dd999f11d98 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2b371444-62dc-4270-8164-64eac7dcead4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:32 np0005466030 nova_compute[230518]: 2025-10-02 12:55:32.502 2 DEBUG oslo_concurrency.lockutils [req-36b438f4-3b17-4a6e-9c8a-b3ac5a642469 req-1e8b9a9c-8cee-4b67-8cbc-1dd999f11d98 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:32 np0005466030 nova_compute[230518]: 2025-10-02 12:55:32.503 2 DEBUG oslo_concurrency.lockutils [req-36b438f4-3b17-4a6e-9c8a-b3ac5a642469 req-1e8b9a9c-8cee-4b67-8cbc-1dd999f11d98 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:32 np0005466030 nova_compute[230518]: 2025-10-02 12:55:32.503 2 DEBUG nova.compute.manager [req-36b438f4-3b17-4a6e-9c8a-b3ac5a642469 req-1e8b9a9c-8cee-4b67-8cbc-1dd999f11d98 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] No waiting events found dispatching network-vif-plugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:55:32 np0005466030 nova_compute[230518]: 2025-10-02 12:55:32.503 2 WARNING nova.compute.manager [req-36b438f4-3b17-4a6e-9c8a-b3ac5a642469 req-1e8b9a9c-8cee-4b67-8cbc-1dd999f11d98 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Received unexpected event network-vif-plugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:55:32 np0005466030 nova_compute[230518]: 2025-10-02 12:55:32.566 2 INFO nova.compute.manager [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Took 5.74 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:55:32 np0005466030 nova_compute[230518]: 2025-10-02 12:55:32.567 2 DEBUG oslo.service.loopingcall [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:55:32 np0005466030 nova_compute[230518]: 2025-10-02 12:55:32.567 2 DEBUG nova.compute.manager [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:55:32 np0005466030 nova_compute[230518]: 2025-10-02 12:55:32.567 2 DEBUG nova.network.neutron [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:55:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:32.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:55:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:32.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:55:33 np0005466030 nova_compute[230518]: 2025-10-02 12:55:33.674 2 DEBUG nova.network.neutron [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:55:33 np0005466030 nova_compute[230518]: 2025-10-02 12:55:33.693 2 INFO nova.compute.manager [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Took 1.13 seconds to deallocate network for instance.#033[00m
Oct  2 08:55:33 np0005466030 nova_compute[230518]: 2025-10-02 12:55:33.740 2 DEBUG oslo_concurrency.lockutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:33 np0005466030 nova_compute[230518]: 2025-10-02 12:55:33.741 2 DEBUG oslo_concurrency.lockutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:33 np0005466030 nova_compute[230518]: 2025-10-02 12:55:33.849 2 DEBUG oslo_concurrency.processutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:33 np0005466030 nova_compute[230518]: 2025-10-02 12:55:33.897 2 DEBUG nova.compute.manager [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-deleted-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:34 np0005466030 nova_compute[230518]: 2025-10-02 12:55:34.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:34 np0005466030 nova_compute[230518]: 2025-10-02 12:55:34.216 2 INFO nova.virt.libvirt.driver [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Deleting instance files /var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4_del#033[00m
Oct  2 08:55:34 np0005466030 nova_compute[230518]: 2025-10-02 12:55:34.217 2 INFO nova.virt.libvirt.driver [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Deletion of /var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4_del complete#033[00m
Oct  2 08:55:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:55:34 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2393714639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:55:34 np0005466030 nova_compute[230518]: 2025-10-02 12:55:34.281 2 INFO nova.compute.manager [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Took 4.58 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:55:34 np0005466030 nova_compute[230518]: 2025-10-02 12:55:34.281 2 DEBUG oslo.service.loopingcall [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:55:34 np0005466030 nova_compute[230518]: 2025-10-02 12:55:34.282 2 DEBUG nova.compute.manager [-] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:55:34 np0005466030 nova_compute[230518]: 2025-10-02 12:55:34.282 2 DEBUG nova.network.neutron [-] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:55:34 np0005466030 nova_compute[230518]: 2025-10-02 12:55:34.291 2 DEBUG oslo_concurrency.processutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:34 np0005466030 nova_compute[230518]: 2025-10-02 12:55:34.299 2 DEBUG nova.compute.provider_tree [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:55:34 np0005466030 nova_compute[230518]: 2025-10-02 12:55:34.314 2 DEBUG nova.scheduler.client.report [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:55:34 np0005466030 nova_compute[230518]: 2025-10-02 12:55:34.338 2 DEBUG oslo_concurrency.lockutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:34 np0005466030 nova_compute[230518]: 2025-10-02 12:55:34.365 2 INFO nova.scheduler.client.report [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Deleted allocations for instance a1e0932b-16b6-46b9-8192-b89b91e91802#033[00m
Oct  2 08:55:34 np0005466030 nova_compute[230518]: 2025-10-02 12:55:34.434 2 DEBUG oslo_concurrency.lockutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:34.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 08:55:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:34.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 08:55:35 np0005466030 nova_compute[230518]: 2025-10-02 12:55:35.132 2 DEBUG nova.network.neutron [-] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:55:35 np0005466030 nova_compute[230518]: 2025-10-02 12:55:35.155 2 INFO nova.compute.manager [-] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Took 0.87 seconds to deallocate network for instance.#033[00m
Oct  2 08:55:35 np0005466030 nova_compute[230518]: 2025-10-02 12:55:35.197 2 DEBUG oslo_concurrency.lockutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:35 np0005466030 nova_compute[230518]: 2025-10-02 12:55:35.197 2 DEBUG oslo_concurrency.lockutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:35 np0005466030 nova_compute[230518]: 2025-10-02 12:55:35.239 2 DEBUG oslo_concurrency.processutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:35 np0005466030 nova_compute[230518]: 2025-10-02 12:55:35.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:55:35 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/428658818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:55:35 np0005466030 nova_compute[230518]: 2025-10-02 12:55:35.647 2 DEBUG oslo_concurrency.processutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:35 np0005466030 nova_compute[230518]: 2025-10-02 12:55:35.652 2 DEBUG nova.compute.provider_tree [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:55:35 np0005466030 nova_compute[230518]: 2025-10-02 12:55:35.674 2 DEBUG nova.scheduler.client.report [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:55:35 np0005466030 nova_compute[230518]: 2025-10-02 12:55:35.705 2 DEBUG oslo_concurrency.lockutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:35 np0005466030 nova_compute[230518]: 2025-10-02 12:55:35.739 2 INFO nova.scheduler.client.report [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Deleted allocations for instance 2b371444-62dc-4270-8164-64eac7dcead4#033[00m
Oct  2 08:55:35 np0005466030 nova_compute[230518]: 2025-10-02 12:55:35.853 2 DEBUG oslo_concurrency.lockutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:35 np0005466030 nova_compute[230518]: 2025-10-02 12:55:35.977 2 DEBUG nova.compute.manager [req-091d7424-bdac-4774-a883-e2c7593572e9 req-c942aa55-e5a8-4da4-abb1-a132e3e17fc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Received event network-vif-deleted-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:55:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:55:36 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:55:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:36.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:55:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:36.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:55:37 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:55:37 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:55:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:38.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:55:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:38.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:55:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:39 np0005466030 nova_compute[230518]: 2025-10-02 12:55:39.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:55:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 48K writes, 191K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s#012Cumulative WAL: 48K writes, 17K syncs, 2.75 writes per sync, written: 0.18 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9335 writes, 37K keys, 9335 commit groups, 1.0 writes per commit group, ingest: 35.42 MB, 0.06 MB/s#012Interval WAL: 9335 writes, 3667 syncs, 2.55 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 08:55:39 np0005466030 nova_compute[230518]: 2025-10-02 12:55:39.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:39 np0005466030 podman[289953]: 2025-10-02 12:55:39.87507108 +0000 UTC m=+0.064934357 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  2 08:55:39 np0005466030 podman[289954]: 2025-10-02 12:55:39.87507075 +0000 UTC m=+0.063904966 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:55:39 np0005466030 nova_compute[230518]: 2025-10-02 12:55:39.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:40 np0005466030 nova_compute[230518]: 2025-10-02 12:55:40.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:40.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:40 np0005466030 nova_compute[230518]: 2025-10-02 12:55:40.747 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:40 np0005466030 nova_compute[230518]: 2025-10-02 12:55:40.747 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:40 np0005466030 nova_compute[230518]: 2025-10-02 12:55:40.775 2 DEBUG nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:55:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:55:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:40.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:55:40 np0005466030 nova_compute[230518]: 2025-10-02 12:55:40.877 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:40 np0005466030 nova_compute[230518]: 2025-10-02 12:55:40.878 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:40 np0005466030 nova_compute[230518]: 2025-10-02 12:55:40.888 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:55:40 np0005466030 nova_compute[230518]: 2025-10-02 12:55:40.888 2 INFO nova.compute.claims [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:55:41 np0005466030 nova_compute[230518]: 2025-10-02 12:55:41.000 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:42 np0005466030 nova_compute[230518]: 2025-10-02 12:55:42.065 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409727.0628552, a1e0932b-16b6-46b9-8192-b89b91e91802 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:55:42 np0005466030 nova_compute[230518]: 2025-10-02 12:55:42.066 2 INFO nova.compute.manager [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:55:42 np0005466030 nova_compute[230518]: 2025-10-02 12:55:42.089 2 DEBUG nova.compute.manager [None req-cd82f65a-a959-45ea-ae85-75c27648e029 - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:55:42 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1353320155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:55:42 np0005466030 nova_compute[230518]: 2025-10-02 12:55:42.145 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:42 np0005466030 nova_compute[230518]: 2025-10-02 12:55:42.155 2 DEBUG nova.compute.provider_tree [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:55:42 np0005466030 nova_compute[230518]: 2025-10-02 12:55:42.174 2 DEBUG nova.scheduler.client.report [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:55:42 np0005466030 nova_compute[230518]: 2025-10-02 12:55:42.288 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:42 np0005466030 nova_compute[230518]: 2025-10-02 12:55:42.289 2 DEBUG nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:55:42 np0005466030 nova_compute[230518]: 2025-10-02 12:55:42.410 2 DEBUG nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:55:42 np0005466030 nova_compute[230518]: 2025-10-02 12:55:42.411 2 DEBUG nova.network.neutron [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:55:42 np0005466030 nova_compute[230518]: 2025-10-02 12:55:42.492 2 INFO nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:55:42 np0005466030 nova_compute[230518]: 2025-10-02 12:55:42.519 2 DEBUG nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:55:42 np0005466030 nova_compute[230518]: 2025-10-02 12:55:42.632 2 DEBUG nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:55:42 np0005466030 nova_compute[230518]: 2025-10-02 12:55:42.633 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:55:42 np0005466030 nova_compute[230518]: 2025-10-02 12:55:42.633 2 INFO nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Creating image(s)
Oct  2 08:55:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:55:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:42.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:55:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:42.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:43 np0005466030 nova_compute[230518]: 2025-10-02 12:55:43.088 2 DEBUG nova.storage.rbd_utils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:55:43 np0005466030 nova_compute[230518]: 2025-10-02 12:55:43.267 2 DEBUG nova.storage.rbd_utils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:55:43 np0005466030 nova_compute[230518]: 2025-10-02 12:55:43.997 2 DEBUG nova.storage.rbd_utils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:55:44 np0005466030 nova_compute[230518]: 2025-10-02 12:55:44.002 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:55:44 np0005466030 nova_compute[230518]: 2025-10-02 12:55:44.047 2 DEBUG nova.policy [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:55:44 np0005466030 nova_compute[230518]: 2025-10-02 12:55:44.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:55:44 np0005466030 nova_compute[230518]: 2025-10-02 12:55:44.107 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:55:44 np0005466030 nova_compute[230518]: 2025-10-02 12:55:44.108 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:55:44 np0005466030 nova_compute[230518]: 2025-10-02 12:55:44.109 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:55:44 np0005466030 nova_compute[230518]: 2025-10-02 12:55:44.109 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:55:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:55:44 np0005466030 nova_compute[230518]: 2025-10-02 12:55:44.397 2 DEBUG nova.storage.rbd_utils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:55:44 np0005466030 nova_compute[230518]: 2025-10-02 12:55:44.402 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:55:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:44.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:44.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:45 np0005466030 nova_compute[230518]: 2025-10-02 12:55:45.036 2 DEBUG nova.network.neutron [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Successfully created port: 8de06222-5603-4a49-ac47-7db15cbb7e03 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:55:45 np0005466030 nova_compute[230518]: 2025-10-02 12:55:45.340 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409730.33983, 2b371444-62dc-4270-8164-64eac7dcead4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 08:55:45 np0005466030 nova_compute[230518]: 2025-10-02 12:55:45.341 2 INFO nova.compute.manager [-] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] VM Stopped (Lifecycle Event)
Oct  2 08:55:45 np0005466030 nova_compute[230518]: 2025-10-02 12:55:45.364 2 DEBUG nova.compute.manager [None req-d18935fc-7f73-4697-8f6d-3865a60b7bd4 - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 08:55:45 np0005466030 nova_compute[230518]: 2025-10-02 12:55:45.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:55:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:55:45 np0005466030 nova_compute[230518]: 2025-10-02 12:55:45.991 2 DEBUG nova.network.neutron [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Successfully updated port: 8de06222-5603-4a49-ac47-7db15cbb7e03 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 08:55:46 np0005466030 nova_compute[230518]: 2025-10-02 12:55:46.007 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-cba9797d-d8c0-42bb-99e8-21ff3406d1ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:55:46 np0005466030 nova_compute[230518]: 2025-10-02 12:55:46.008 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-cba9797d-d8c0-42bb-99e8-21ff3406d1ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:55:46 np0005466030 nova_compute[230518]: 2025-10-02 12:55:46.008 2 DEBUG nova.network.neutron [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 08:55:46 np0005466030 nova_compute[230518]: 2025-10-02 12:55:46.101 2 DEBUG nova.compute.manager [req-da607633-3f93-4a5e-94d0-2e98d257e716 req-2ab7436f-751c-4d4c-a75a-f8304d28298c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received event network-changed-8de06222-5603-4a49-ac47-7db15cbb7e03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:55:46 np0005466030 nova_compute[230518]: 2025-10-02 12:55:46.101 2 DEBUG nova.compute.manager [req-da607633-3f93-4a5e-94d0-2e98d257e716 req-2ab7436f-751c-4d4c-a75a-f8304d28298c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Refreshing instance network info cache due to event network-changed-8de06222-5603-4a49-ac47-7db15cbb7e03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 08:55:46 np0005466030 nova_compute[230518]: 2025-10-02 12:55:46.102 2 DEBUG oslo_concurrency.lockutils [req-da607633-3f93-4a5e-94d0-2e98d257e716 req-2ab7436f-751c-4d4c-a75a-f8304d28298c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-cba9797d-d8c0-42bb-99e8-21ff3406d1ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 08:55:46 np0005466030 nova_compute[230518]: 2025-10-02 12:55:46.144 2 DEBUG nova.network.neutron [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 08:55:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:46.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:46.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:47.573 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:55:47 np0005466030 nova_compute[230518]: 2025-10-02 12:55:47.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:55:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:47.575 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.076 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.674s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.148 2 DEBUG nova.storage.rbd_utils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.458 2 DEBUG nova.network.neutron [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Updating instance_info_cache with network_info: [{"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.464 2 DEBUG nova.objects.instance [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid cba9797d-d8c0-42bb-99e8-21ff3406d1ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.485 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.486 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Ensure instance console log exists: /var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.486 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.486 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.487 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.490 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-cba9797d-d8c0-42bb-99e8-21ff3406d1ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.491 2 DEBUG nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Instance network_info: |[{"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.491 2 DEBUG oslo_concurrency.lockutils [req-da607633-3f93-4a5e-94d0-2e98d257e716 req-2ab7436f-751c-4d4c-a75a-f8304d28298c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-cba9797d-d8c0-42bb-99e8-21ff3406d1ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.491 2 DEBUG nova.network.neutron [req-da607633-3f93-4a5e-94d0-2e98d257e716 req-2ab7436f-751c-4d4c-a75a-f8304d28298c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Refreshing network info cache for port 8de06222-5603-4a49-ac47-7db15cbb7e03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.493 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Start _get_guest_xml network_info=[{"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.497 2 WARNING nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.500 2 DEBUG nova.virt.libvirt.host [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.501 2 DEBUG nova.virt.libvirt.host [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.507 2 DEBUG nova.virt.libvirt.host [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.508 2 DEBUG nova.virt.libvirt.host [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.509 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.509 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.510 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.510 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.511 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.511 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.511 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.511 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.512 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.512 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.512 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.513 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 08:55:48 np0005466030 nova_compute[230518]: 2025-10-02 12:55:48.515 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:55:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:55:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:48.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:55:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:48.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:55:48 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/959533747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.092 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.119 2 DEBUG nova.storage.rbd_utils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.123 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:55:49 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/153925789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.547 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.549 2 DEBUG nova.virt.libvirt.vif [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1872091800',display_name='tempest-TestNetworkBasicOps-server-1872091800',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1872091800',id=151,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSGOLZUJ3FrTIeEm0YCEFIRKta1oOKUh3K2cHX7D75D8mOr7z91wWb7O7IlUA8JdoZAVPTTXOOargDtG7eOD7M62+PfvZG7TnqVCQDZ9PY6Jtt6S6zET7dnTNArJzZa2A==',key_name='tempest-TestNetworkBasicOps-802197789',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-50zfty89',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:55:42Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=cba9797d-d8c0-42bb-99e8-21ff3406d1ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.549 2 DEBUG nova.network.os_vif_util [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.550 2 DEBUG nova.network.os_vif_util [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:46:e8,bridge_name='br-int',has_traffic_filtering=True,id=8de06222-5603-4a49-ac47-7db15cbb7e03,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de06222-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.551 2 DEBUG nova.objects.instance [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid cba9797d-d8c0-42bb-99e8-21ff3406d1ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.572 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:55:49 np0005466030 nova_compute[230518]:  <uuid>cba9797d-d8c0-42bb-99e8-21ff3406d1ff</uuid>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:  <name>instance-00000097</name>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <nova:name>tempest-TestNetworkBasicOps-server-1872091800</nova:name>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:55:48</nova:creationTime>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:55:49 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:        <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:        <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:        <nova:port uuid="8de06222-5603-4a49-ac47-7db15cbb7e03">
Oct  2 08:55:49 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <entry name="serial">cba9797d-d8c0-42bb-99e8-21ff3406d1ff</entry>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <entry name="uuid">cba9797d-d8c0-42bb-99e8-21ff3406d1ff</entry>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk">
Oct  2 08:55:49 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:55:49 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk.config">
Oct  2 08:55:49 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:55:49 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:68:46:e8"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <target dev="tap8de06222-56"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff/console.log" append="off"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:55:49 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:55:49 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:55:49 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:55:49 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.573 2 DEBUG nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Preparing to wait for external event network-vif-plugged-8de06222-5603-4a49-ac47-7db15cbb7e03 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.575 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.575 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.575 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.576 2 DEBUG nova.virt.libvirt.vif [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1872091800',display_name='tempest-TestNetworkBasicOps-server-1872091800',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1872091800',id=151,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSGOLZUJ3FrTIeEm0YCEFIRKta1oOKUh3K2cHX7D75D8mOr7z91wWb7O7IlUA8JdoZAVPTTXOOargDtG7eOD7M62+PfvZG7TnqVCQDZ9PY6Jtt6S6zET7dnTNArJzZa2A==',key_name='tempest-TestNetworkBasicOps-802197789',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-50zfty89',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:55:42Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=cba9797d-d8c0-42bb-99e8-21ff3406d1ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.576 2 DEBUG nova.network.os_vif_util [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.577 2 DEBUG nova.network.os_vif_util [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:46:e8,bridge_name='br-int',has_traffic_filtering=True,id=8de06222-5603-4a49-ac47-7db15cbb7e03,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de06222-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.577 2 DEBUG os_vif [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:46:e8,bridge_name='br-int',has_traffic_filtering=True,id=8de06222-5603-4a49-ac47-7db15cbb7e03,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de06222-56') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.578 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.579 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.582 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8de06222-56, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.582 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8de06222-56, col_values=(('external_ids', {'iface-id': '8de06222-5603-4a49-ac47-7db15cbb7e03', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:46:e8', 'vm-uuid': 'cba9797d-d8c0-42bb-99e8-21ff3406d1ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:49 np0005466030 NetworkManager[44960]: <info>  [1759409749.5847] manager: (tap8de06222-56): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.590 2 INFO os_vif [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:46:e8,bridge_name='br-int',has_traffic_filtering=True,id=8de06222-5603-4a49-ac47-7db15cbb7e03,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de06222-56')#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.732 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.732 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.732 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:68:46:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:55:49 np0005466030 nova_compute[230518]: 2025-10-02 12:55:49.733 2 INFO nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Using config drive#033[00m
Oct  2 08:55:50 np0005466030 nova_compute[230518]: 2025-10-02 12:55:50.087 2 DEBUG nova.storage.rbd_utils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:50.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:50.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:51 np0005466030 nova_compute[230518]: 2025-10-02 12:55:51.288 2 INFO nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Creating config drive at /var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff/disk.config#033[00m
Oct  2 08:55:51 np0005466030 nova_compute[230518]: 2025-10-02 12:55:51.295 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4dycmxkc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:51 np0005466030 nova_compute[230518]: 2025-10-02 12:55:51.338 2 DEBUG nova.network.neutron [req-da607633-3f93-4a5e-94d0-2e98d257e716 req-2ab7436f-751c-4d4c-a75a-f8304d28298c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Updated VIF entry in instance network info cache for port 8de06222-5603-4a49-ac47-7db15cbb7e03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:55:51 np0005466030 nova_compute[230518]: 2025-10-02 12:55:51.339 2 DEBUG nova.network.neutron [req-da607633-3f93-4a5e-94d0-2e98d257e716 req-2ab7436f-751c-4d4c-a75a-f8304d28298c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Updating instance_info_cache with network_info: [{"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:55:51 np0005466030 nova_compute[230518]: 2025-10-02 12:55:51.358 2 DEBUG oslo_concurrency.lockutils [req-da607633-3f93-4a5e-94d0-2e98d257e716 req-2ab7436f-751c-4d4c-a75a-f8304d28298c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-cba9797d-d8c0-42bb-99e8-21ff3406d1ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:55:51 np0005466030 nova_compute[230518]: 2025-10-02 12:55:51.439 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4dycmxkc" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:51 np0005466030 nova_compute[230518]: 2025-10-02 12:55:51.573 2 DEBUG nova.storage.rbd_utils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:51 np0005466030 nova_compute[230518]: 2025-10-02 12:55:51.577 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff/disk.config cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:52 np0005466030 nova_compute[230518]: 2025-10-02 12:55:52.001 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff/disk.config cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:52 np0005466030 nova_compute[230518]: 2025-10-02 12:55:52.002 2 INFO nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Deleting local config drive /var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff/disk.config because it was imported into RBD.#033[00m
Oct  2 08:55:52 np0005466030 kernel: tap8de06222-56: entered promiscuous mode
Oct  2 08:55:52 np0005466030 NetworkManager[44960]: <info>  [1759409752.0609] manager: (tap8de06222-56): new Tun device (/org/freedesktop/NetworkManager/Devices/299)
Oct  2 08:55:52 np0005466030 nova_compute[230518]: 2025-10-02 12:55:52.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:52 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:52Z|00636|binding|INFO|Claiming lport 8de06222-5603-4a49-ac47-7db15cbb7e03 for this chassis.
Oct  2 08:55:52 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:52Z|00637|binding|INFO|8de06222-5603-4a49-ac47-7db15cbb7e03: Claiming fa:16:3e:68:46:e8 10.100.0.5
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.072 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:46:e8 10.100.0.5'], port_security=['fa:16:3e:68:46:e8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'cba9797d-d8c0-42bb-99e8-21ff3406d1ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f08094d7-13d0-4e3d-b2f1-572cd1460e8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f86b2949-a213-4bd4-b601-e7dc17853f7f, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8de06222-5603-4a49-ac47-7db15cbb7e03) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.073 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8de06222-5603-4a49-ac47-7db15cbb7e03 in datapath 24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 bound to our chassis#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.074 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 24ea8f37-7508-4c75-ae14-e4cc7b9f8e97#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.087 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9020e6-94f3-4dc5-aa91-e839c77d33e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.088 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap24ea8f37-71 in ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.090 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap24ea8f37-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.090 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e9573746-00d5-4da1-92d5-4e793b3f8f82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.092 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[98e292c7-c17e-4de4-967a-20635f09ddce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:52 np0005466030 systemd-machined[188247]: New machine qemu-74-instance-00000097.
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.103 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[60474115-9622-4f32-8818-50520dfc3ccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.129 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[092bb8a9-73a6-44d0-9b69-6e46e9c06699]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:52 np0005466030 nova_compute[230518]: 2025-10-02 12:55:52.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:52 np0005466030 systemd[1]: Started Virtual Machine qemu-74-instance-00000097.
Oct  2 08:55:52 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:52Z|00638|binding|INFO|Setting lport 8de06222-5603-4a49-ac47-7db15cbb7e03 ovn-installed in OVS
Oct  2 08:55:52 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:52Z|00639|binding|INFO|Setting lport 8de06222-5603-4a49-ac47-7db15cbb7e03 up in Southbound
Oct  2 08:55:52 np0005466030 nova_compute[230518]: 2025-10-02 12:55:52.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:52 np0005466030 systemd-udevd[290374]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.161 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[46ecb685-d0e0-415c-a582-752eb6691ee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.165 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[465fda69-618d-4588-8d15-cb65009ee0a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:52 np0005466030 NetworkManager[44960]: <info>  [1759409752.1667] manager: (tap24ea8f37-70): new Veth device (/org/freedesktop/NetworkManager/Devices/300)
Oct  2 08:55:52 np0005466030 systemd-udevd[290376]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:55:52 np0005466030 NetworkManager[44960]: <info>  [1759409752.1778] device (tap8de06222-56): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:55:52 np0005466030 NetworkManager[44960]: <info>  [1759409752.1788] device (tap8de06222-56): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.195 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[39a45076-0e26-4ca5-b3e2-fab0be62cd35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.198 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[729e69a0-5976-4e19-a30b-b9c115ca0f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:52 np0005466030 NetworkManager[44960]: <info>  [1759409752.2243] device (tap24ea8f37-70): carrier: link connected
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.230 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[223804fe-5334-4324-85d1-5c999598d5a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.246 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f115b032-d092-4059-9c34-6c81a7342b2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap24ea8f37-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:22:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 761578, 'reachable_time': 43305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290402, 'error': None, 'target': 'ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.265 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[757e6cba-7c4a-4194-9dfc-de97bbc74690]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe50:22dd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 761578, 'tstamp': 761578}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290403, 'error': None, 'target': 'ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.280 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3dfebcdf-0731-4c63-9f9f-36596ea53116]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap24ea8f37-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:22:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 761578, 'reachable_time': 43305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290404, 'error': None, 'target': 'ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.307 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5c20b7-5b54-4c11-b5ae-99e511048ecd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.354 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ec2f6c88-5ed7-4ec1-912d-f4c36cdf02a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.355 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24ea8f37-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.356 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.356 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24ea8f37-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:52 np0005466030 kernel: tap24ea8f37-70: entered promiscuous mode
Oct  2 08:55:52 np0005466030 NetworkManager[44960]: <info>  [1759409752.3587] manager: (tap24ea8f37-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Oct  2 08:55:52 np0005466030 nova_compute[230518]: 2025-10-02 12:55:52.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.362 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap24ea8f37-70, col_values=(('external_ids', {'iface-id': '47294e02-9c49-4b4e-8e89-07ae56f17131'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:52 np0005466030 nova_compute[230518]: 2025-10-02 12:55:52.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:52 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:52Z|00640|binding|INFO|Releasing lport 47294e02-9c49-4b4e-8e89-07ae56f17131 from this chassis (sb_readonly=0)
Oct  2 08:55:52 np0005466030 nova_compute[230518]: 2025-10-02 12:55:52.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.379 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/24ea8f37-7508-4c75-ae14-e4cc7b9f8e97.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/24ea8f37-7508-4c75-ae14-e4cc7b9f8e97.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.380 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e16c879f-1b14-432e-8f6e-67d8f8198c11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.380 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/24ea8f37-7508-4c75-ae14-e4cc7b9f8e97.pid.haproxy
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 24ea8f37-7508-4c75-ae14-e4cc7b9f8e97
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:55:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.382 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'env', 'PROCESS_TAG=haproxy-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/24ea8f37-7508-4c75-ae14-e4cc7b9f8e97.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:55:52 np0005466030 nova_compute[230518]: 2025-10-02 12:55:52.622 2 DEBUG nova.compute.manager [req-79596143-e175-4ecd-acff-f56b7a166a26 req-ff693d0a-63db-4a82-8e5a-3a283827bd4c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received event network-vif-plugged-8de06222-5603-4a49-ac47-7db15cbb7e03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:52 np0005466030 nova_compute[230518]: 2025-10-02 12:55:52.622 2 DEBUG oslo_concurrency.lockutils [req-79596143-e175-4ecd-acff-f56b7a166a26 req-ff693d0a-63db-4a82-8e5a-3a283827bd4c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:52 np0005466030 nova_compute[230518]: 2025-10-02 12:55:52.623 2 DEBUG oslo_concurrency.lockutils [req-79596143-e175-4ecd-acff-f56b7a166a26 req-ff693d0a-63db-4a82-8e5a-3a283827bd4c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:52 np0005466030 nova_compute[230518]: 2025-10-02 12:55:52.623 2 DEBUG oslo_concurrency.lockutils [req-79596143-e175-4ecd-acff-f56b7a166a26 req-ff693d0a-63db-4a82-8e5a-3a283827bd4c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:52 np0005466030 nova_compute[230518]: 2025-10-02 12:55:52.623 2 DEBUG nova.compute.manager [req-79596143-e175-4ecd-acff-f56b7a166a26 req-ff693d0a-63db-4a82-8e5a-3a283827bd4c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Processing event network-vif-plugged-8de06222-5603-4a49-ac47-7db15cbb7e03 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:55:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:52.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 08:55:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:52.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 08:55:52 np0005466030 podman[290476]: 2025-10-02 12:55:52.729018595 +0000 UTC m=+0.024059596 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.133 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409753.132523, cba9797d-d8c0-42bb-99e8-21ff3406d1ff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.134 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] VM Started (Lifecycle Event)#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.137 2 DEBUG nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.141 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.147 2 INFO nova.virt.libvirt.driver [-] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Instance spawned successfully.#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.147 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.162 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.166 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.173 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.173 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.174 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.174 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.175 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.175 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.200 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.201 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409753.1327386, cba9797d-d8c0-42bb-99e8-21ff3406d1ff => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.201 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.222 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.224 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409753.1408713, cba9797d-d8c0-42bb-99e8-21ff3406d1ff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.224 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.256 2 INFO nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Took 10.62 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.256 2 DEBUG nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.258 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.263 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.305 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.355 2 INFO nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Took 12.50 seconds to build instance.#033[00m
Oct  2 08:55:53 np0005466030 nova_compute[230518]: 2025-10-02 12:55:53.392 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:54 np0005466030 nova_compute[230518]: 2025-10-02 12:55:54.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:54 np0005466030 podman[290476]: 2025-10-02 12:55:54.179161271 +0000 UTC m=+1.474202242 container create c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:55:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:54 np0005466030 systemd[1]: Started libpod-conmon-c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34.scope.
Oct  2 08:55:54 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:55:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:55:54.578 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:55:54 np0005466030 nova_compute[230518]: 2025-10-02 12:55:54.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:55:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:54.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:55:54 np0005466030 nova_compute[230518]: 2025-10-02 12:55:54.730 2 DEBUG nova.compute.manager [req-3b7fdb94-54e1-4ccc-b669-5b1e5cf88b6c req-984c8389-2a9a-46bf-988a-a1c975bdebb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received event network-vif-plugged-8de06222-5603-4a49-ac47-7db15cbb7e03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:54 np0005466030 nova_compute[230518]: 2025-10-02 12:55:54.731 2 DEBUG oslo_concurrency.lockutils [req-3b7fdb94-54e1-4ccc-b669-5b1e5cf88b6c req-984c8389-2a9a-46bf-988a-a1c975bdebb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:54 np0005466030 nova_compute[230518]: 2025-10-02 12:55:54.731 2 DEBUG oslo_concurrency.lockutils [req-3b7fdb94-54e1-4ccc-b669-5b1e5cf88b6c req-984c8389-2a9a-46bf-988a-a1c975bdebb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:54 np0005466030 nova_compute[230518]: 2025-10-02 12:55:54.732 2 DEBUG oslo_concurrency.lockutils [req-3b7fdb94-54e1-4ccc-b669-5b1e5cf88b6c req-984c8389-2a9a-46bf-988a-a1c975bdebb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:54 np0005466030 nova_compute[230518]: 2025-10-02 12:55:54.732 2 DEBUG nova.compute.manager [req-3b7fdb94-54e1-4ccc-b669-5b1e5cf88b6c req-984c8389-2a9a-46bf-988a-a1c975bdebb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] No waiting events found dispatching network-vif-plugged-8de06222-5603-4a49-ac47-7db15cbb7e03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:55:54 np0005466030 nova_compute[230518]: 2025-10-02 12:55:54.733 2 WARNING nova.compute.manager [req-3b7fdb94-54e1-4ccc-b669-5b1e5cf88b6c req-984c8389-2a9a-46bf-988a-a1c975bdebb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received unexpected event network-vif-plugged-8de06222-5603-4a49-ac47-7db15cbb7e03 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:55:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:54.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:55 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe32515c7b59ffa2b21e24ce17dce5eb4995b61dc532d34935df1e3ca314d83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:55:55 np0005466030 podman[290476]: 2025-10-02 12:55:55.40633635 +0000 UTC m=+2.701377321 container init c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:55:55 np0005466030 podman[290476]: 2025-10-02 12:55:55.417587123 +0000 UTC m=+2.712628094 container start c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 08:55:55 np0005466030 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[290492]: [NOTICE]   (290496) : New worker (290498) forked
Oct  2 08:55:55 np0005466030 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[290492]: [NOTICE]   (290496) : Loading success.
Oct  2 08:55:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:56.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:56 np0005466030 nova_compute[230518]: 2025-10-02 12:55:56.716 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:56 np0005466030 nova_compute[230518]: 2025-10-02 12:55:56.718 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:56 np0005466030 nova_compute[230518]: 2025-10-02 12:55:56.742 2 DEBUG nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:55:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:56.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:56 np0005466030 nova_compute[230518]: 2025-10-02 12:55:56.866 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:56 np0005466030 nova_compute[230518]: 2025-10-02 12:55:56.868 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:56 np0005466030 nova_compute[230518]: 2025-10-02 12:55:56.883 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:55:56 np0005466030 nova_compute[230518]: 2025-10-02 12:55:56.884 2 INFO nova.compute.claims [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.035 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:55:57 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3254957633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.461 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.470 2 DEBUG nova.compute.provider_tree [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.511 2 DEBUG nova.scheduler.client.report [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.554 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.555 2 DEBUG nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.608 2 DEBUG nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.609 2 DEBUG nova.network.neutron [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.630 2 INFO nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.649 2 DEBUG nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.741 2 DEBUG nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.743 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.743 2 INFO nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Creating image(s)#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.786 2 DEBUG nova.storage.rbd_utils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.807 2 DEBUG nova.storage.rbd_utils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.827 2 DEBUG nova.storage.rbd_utils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.830 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.896 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.898 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.898 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.899 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.919 2 DEBUG nova.storage.rbd_utils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:55:57 np0005466030 nova_compute[230518]: 2025-10-02 12:55:57.922 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:55:58 np0005466030 nova_compute[230518]: 2025-10-02 12:55:58.618 2 DEBUG nova.policy [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '47f465d8c8ac44c982f2a2e60ae9eb40', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:55:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:58.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:55:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:55:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:58.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:59 np0005466030 NetworkManager[44960]: <info>  [1759409759.0704] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Oct  2 08:55:59 np0005466030 NetworkManager[44960]: <info>  [1759409759.0712] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Oct  2 08:55:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:59 np0005466030 ovn_controller[129257]: 2025-10-02T12:55:59Z|00641|binding|INFO|Releasing lport 47294e02-9c49-4b4e-8e89-07ae56f17131 from this chassis (sb_readonly=0)
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.354 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.425 2 DEBUG nova.storage.rbd_utils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] resizing rbd image f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.531 2 DEBUG nova.compute.manager [req-46cd8347-ff02-407e-9059-abc700ee791e req-38a1dec3-7389-4fa8-af91-5ef001ca382b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received event network-changed-8de06222-5603-4a49-ac47-7db15cbb7e03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.532 2 DEBUG nova.compute.manager [req-46cd8347-ff02-407e-9059-abc700ee791e req-38a1dec3-7389-4fa8-af91-5ef001ca382b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Refreshing instance network info cache due to event network-changed-8de06222-5603-4a49-ac47-7db15cbb7e03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.532 2 DEBUG oslo_concurrency.lockutils [req-46cd8347-ff02-407e-9059-abc700ee791e req-38a1dec3-7389-4fa8-af91-5ef001ca382b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-cba9797d-d8c0-42bb-99e8-21ff3406d1ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.532 2 DEBUG oslo_concurrency.lockutils [req-46cd8347-ff02-407e-9059-abc700ee791e req-38a1dec3-7389-4fa8-af91-5ef001ca382b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-cba9797d-d8c0-42bb-99e8-21ff3406d1ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.533 2 DEBUG nova.network.neutron [req-46cd8347-ff02-407e-9059-abc700ee791e req-38a1dec3-7389-4fa8-af91-5ef001ca382b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Refreshing network info cache for port 8de06222-5603-4a49-ac47-7db15cbb7e03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.538 2 DEBUG nova.objects.instance [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'migration_context' on Instance uuid f6c0a66d-64f1-484a-ae4e-ece25fddf736 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.560 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.561 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Ensure instance console log exists: /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.561 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.562 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.562 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:55:59 np0005466030 nova_compute[230518]: 2025-10-02 12:55:59.792 2 DEBUG nova.network.neutron [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Successfully created port: 6237bc28-d790-4861-976b-cda2e8dc93a9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:56:00 np0005466030 nova_compute[230518]: 2025-10-02 12:56:00.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:00.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:00 np0005466030 podman[290697]: 2025-10-02 12:56:00.803049041 +0000 UTC m=+0.056151483 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:56:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:00.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:00 np0005466030 podman[290696]: 2025-10-02 12:56:00.831202485 +0000 UTC m=+0.084256585 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 08:56:00 np0005466030 nova_compute[230518]: 2025-10-02 12:56:00.877 2 DEBUG nova.network.neutron [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Successfully updated port: 6237bc28-d790-4861-976b-cda2e8dc93a9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:56:00 np0005466030 nova_compute[230518]: 2025-10-02 12:56:00.897 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:56:00 np0005466030 nova_compute[230518]: 2025-10-02 12:56:00.898 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:56:00 np0005466030 nova_compute[230518]: 2025-10-02 12:56:00.899 2 DEBUG nova.network.neutron [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:56:00 np0005466030 nova_compute[230518]: 2025-10-02 12:56:00.932 2 DEBUG nova.network.neutron [req-46cd8347-ff02-407e-9059-abc700ee791e req-38a1dec3-7389-4fa8-af91-5ef001ca382b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Updated VIF entry in instance network info cache for port 8de06222-5603-4a49-ac47-7db15cbb7e03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:56:00 np0005466030 nova_compute[230518]: 2025-10-02 12:56:00.934 2 DEBUG nova.network.neutron [req-46cd8347-ff02-407e-9059-abc700ee791e req-38a1dec3-7389-4fa8-af91-5ef001ca382b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Updating instance_info_cache with network_info: [{"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:00 np0005466030 nova_compute[230518]: 2025-10-02 12:56:00.968 2 DEBUG oslo_concurrency.lockutils [req-46cd8347-ff02-407e-9059-abc700ee791e req-38a1dec3-7389-4fa8-af91-5ef001ca382b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-cba9797d-d8c0-42bb-99e8-21ff3406d1ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:56:01 np0005466030 nova_compute[230518]: 2025-10-02 12:56:01.085 2 DEBUG nova.network.neutron [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:56:01 np0005466030 nova_compute[230518]: 2025-10-02 12:56:01.619 2 DEBUG nova.compute.manager [req-ede58510-ff47-4c6b-98ac-cdaca32ac28e req-4effc56d-6595-47ae-894d-558a3d31edba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-changed-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:01 np0005466030 nova_compute[230518]: 2025-10-02 12:56:01.620 2 DEBUG nova.compute.manager [req-ede58510-ff47-4c6b-98ac-cdaca32ac28e req-4effc56d-6595-47ae-894d-558a3d31edba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Refreshing instance network info cache due to event network-changed-6237bc28-d790-4861-976b-cda2e8dc93a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:56:01 np0005466030 nova_compute[230518]: 2025-10-02 12:56:01.620 2 DEBUG oslo_concurrency.lockutils [req-ede58510-ff47-4c6b-98ac-cdaca32ac28e req-4effc56d-6595-47ae-894d-558a3d31edba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:56:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e343 e343: 3 total, 3 up, 3 in
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.380 2 DEBUG nova.network.neutron [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updating instance_info_cache with network_info: [{"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.409 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.410 2 DEBUG nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Instance network_info: |[{"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.410 2 DEBUG oslo_concurrency.lockutils [req-ede58510-ff47-4c6b-98ac-cdaca32ac28e req-4effc56d-6595-47ae-894d-558a3d31edba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.410 2 DEBUG nova.network.neutron [req-ede58510-ff47-4c6b-98ac-cdaca32ac28e req-4effc56d-6595-47ae-894d-558a3d31edba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Refreshing network info cache for port 6237bc28-d790-4861-976b-cda2e8dc93a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.413 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Start _get_guest_xml network_info=[{"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.418 2 WARNING nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.423 2 DEBUG nova.virt.libvirt.host [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.424 2 DEBUG nova.virt.libvirt.host [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.428 2 DEBUG nova.virt.libvirt.host [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.428 2 DEBUG nova.virt.libvirt.host [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.429 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.429 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.430 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.430 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.430 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.431 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.431 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.431 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.431 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.432 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.432 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.432 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.434 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:56:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:02.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:02.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:56:02 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3350021409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.890 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.917 2 DEBUG nova.storage.rbd_utils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:56:02 np0005466030 nova_compute[230518]: 2025-10-02 12:56:02.921 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:56:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:56:03 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1541807916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.501 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.504 2 DEBUG nova.virt.libvirt.vif [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1633674489',display_name='tempest-TestNetworkAdvancedServerOps-server-1633674489',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1633674489',id=153,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNDGNPTCWHMGn2WmI5eqLh6YCpsFWhqzTd+bjnfQT4djkmca349UiY4uIU3+umB1w10tp651dyIkW2ibD5dk6ldf9xoeyNtwfhmhivNqkDC8s1WG5y+WB+iPGYUm0nb4Ew==',key_name='tempest-TestNetworkAdvancedServerOps-519769562',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-ksuo2zkj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:55:57Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=f6c0a66d-64f1-484a-ae4e-ece25fddf736,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.504 2 DEBUG nova.network.os_vif_util [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.505 2 DEBUG nova.network.os_vif_util [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.506 2 DEBUG nova.objects.instance [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid f6c0a66d-64f1-484a-ae4e-ece25fddf736 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.537 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:56:03 np0005466030 nova_compute[230518]:  <uuid>f6c0a66d-64f1-484a-ae4e-ece25fddf736</uuid>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:  <name>instance-00000099</name>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1633674489</nova:name>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:56:02</nova:creationTime>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:56:03 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:        <nova:user uuid="47f465d8c8ac44c982f2a2e60ae9eb40">tempest-TestNetworkAdvancedServerOps-1770117619-project-member</nova:user>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:        <nova:project uuid="072925a6aec84a77a9c09ae0c83efdb3">tempest-TestNetworkAdvancedServerOps-1770117619</nova:project>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:        <nova:port uuid="6237bc28-d790-4861-976b-cda2e8dc93a9">
Oct  2 08:56:03 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <entry name="serial">f6c0a66d-64f1-484a-ae4e-ece25fddf736</entry>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <entry name="uuid">f6c0a66d-64f1-484a-ae4e-ece25fddf736</entry>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk">
Oct  2 08:56:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:56:03 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk.config">
Oct  2 08:56:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:56:03 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:f2:c9:23"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <target dev="tap6237bc28-d7"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736/console.log" append="off"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:56:03 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:56:03 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:56:03 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:56:03 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.543 2 DEBUG nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Preparing to wait for external event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.544 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.544 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.545 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.546 2 DEBUG nova.virt.libvirt.vif [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1633674489',display_name='tempest-TestNetworkAdvancedServerOps-server-1633674489',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1633674489',id=153,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNDGNPTCWHMGn2WmI5eqLh6YCpsFWhqzTd+bjnfQT4djkmca349UiY4uIU3+umB1w10tp651dyIkW2ibD5dk6ldf9xoeyNtwfhmhivNqkDC8s1WG5y+WB+iPGYUm0nb4Ew==',key_name='tempest-TestNetworkAdvancedServerOps-519769562',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-ksuo2zkj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:55:57Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=f6c0a66d-64f1-484a-ae4e-ece25fddf736,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.546 2 DEBUG nova.network.os_vif_util [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.547 2 DEBUG nova.network.os_vif_util [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.548 2 DEBUG os_vif [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.549 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.549 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.553 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6237bc28-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.553 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6237bc28-d7, col_values=(('external_ids', {'iface-id': '6237bc28-d790-4861-976b-cda2e8dc93a9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:c9:23', 'vm-uuid': 'f6c0a66d-64f1-484a-ae4e-ece25fddf736'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:03 np0005466030 NetworkManager[44960]: <info>  [1759409763.5560] manager: (tap6237bc28-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/304)
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.562 2 INFO os_vif [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7')#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.794 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.795 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.795 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No VIF found with MAC fa:16:3e:f2:c9:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.796 2 INFO nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Using config drive#033[00m
Oct  2 08:56:03 np0005466030 nova_compute[230518]: 2025-10-02 12:56:03.821 2 DEBUG nova.storage.rbd_utils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:56:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:04 np0005466030 nova_compute[230518]: 2025-10-02 12:56:04.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:04 np0005466030 nova_compute[230518]: 2025-10-02 12:56:04.539 2 INFO nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Creating config drive at /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736/disk.config#033[00m
Oct  2 08:56:04 np0005466030 nova_compute[230518]: 2025-10-02 12:56:04.544 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3_ekpb6r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:04 np0005466030 nova_compute[230518]: 2025-10-02 12:56:04.602 2 DEBUG nova.network.neutron [req-ede58510-ff47-4c6b-98ac-cdaca32ac28e req-4effc56d-6595-47ae-894d-558a3d31edba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updated VIF entry in instance network info cache for port 6237bc28-d790-4861-976b-cda2e8dc93a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:56:04 np0005466030 nova_compute[230518]: 2025-10-02 12:56:04.602 2 DEBUG nova.network.neutron [req-ede58510-ff47-4c6b-98ac-cdaca32ac28e req-4effc56d-6595-47ae-894d-558a3d31edba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updating instance_info_cache with network_info: [{"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:04 np0005466030 nova_compute[230518]: 2025-10-02 12:56:04.635 2 DEBUG oslo_concurrency.lockutils [req-ede58510-ff47-4c6b-98ac-cdaca32ac28e req-4effc56d-6595-47ae-894d-558a3d31edba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:56:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:04.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:04 np0005466030 nova_compute[230518]: 2025-10-02 12:56:04.702 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3_ekpb6r" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:04.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:05 np0005466030 nova_compute[230518]: 2025-10-02 12:56:05.348 2 DEBUG nova.storage.rbd_utils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:56:05 np0005466030 nova_compute[230518]: 2025-10-02 12:56:05.358 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736/disk.config f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:56:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:06.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:56:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 08:56:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:06.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 08:56:08 np0005466030 nova_compute[230518]: 2025-10-02 12:56:08.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:08.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:08 np0005466030 nova_compute[230518]: 2025-10-02 12:56:08.720 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736/disk.config f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:08 np0005466030 nova_compute[230518]: 2025-10-02 12:56:08.720 2 INFO nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Deleting local config drive /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736/disk.config because it was imported into RBD.#033[00m
Oct  2 08:56:08 np0005466030 kernel: tap6237bc28-d7: entered promiscuous mode
Oct  2 08:56:08 np0005466030 NetworkManager[44960]: <info>  [1759409768.7747] manager: (tap6237bc28-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/305)
Oct  2 08:56:08 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:08Z|00642|binding|INFO|Claiming lport 6237bc28-d790-4861-976b-cda2e8dc93a9 for this chassis.
Oct  2 08:56:08 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:08Z|00643|binding|INFO|6237bc28-d790-4861-976b-cda2e8dc93a9: Claiming fa:16:3e:f2:c9:23 10.100.0.13
Oct  2 08:56:08 np0005466030 nova_compute[230518]: 2025-10-02 12:56:08.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:08 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:08Z|00644|binding|INFO|Setting lport 6237bc28-d790-4861-976b-cda2e8dc93a9 ovn-installed in OVS
Oct  2 08:56:08 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:08Z|00645|binding|INFO|Setting lport 6237bc28-d790-4861-976b-cda2e8dc93a9 up in Southbound
Oct  2 08:56:08 np0005466030 nova_compute[230518]: 2025-10-02 12:56:08.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.797 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:c9:23 10.100.0.13'], port_security=['fa:16:3e:f2:c9:23 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f6c0a66d-64f1-484a-ae4e-ece25fddf736', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f30d280-519f-404a-a0ab-55bcf121986d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b996e116-bce4-4795-a454-74528663ff58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ccf6ef12-40dd-424f-abb3-518813baf9b4, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6237bc28-d790-4861-976b-cda2e8dc93a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:56:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.799 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6237bc28-d790-4861-976b-cda2e8dc93a9 in datapath 5f30d280-519f-404a-a0ab-55bcf121986d bound to our chassis#033[00m
Oct  2 08:56:08 np0005466030 nova_compute[230518]: 2025-10-02 12:56:08.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.801 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5f30d280-519f-404a-a0ab-55bcf121986d#033[00m
Oct  2 08:56:08 np0005466030 systemd-udevd[290874]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:56:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.813 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3c721b0f-3a31-4d49-a4bd-c9b780b88cb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.814 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5f30d280-51 in ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:56:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.815 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5f30d280-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:56:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.815 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[89f2c31f-feaf-4b57-ac3f-7c8bb4710324]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.817 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[afc719d2-9a62-44e1-9bfb-a86b0049c845]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:08 np0005466030 systemd-machined[188247]: New machine qemu-75-instance-00000099.
Oct  2 08:56:08 np0005466030 NetworkManager[44960]: <info>  [1759409768.8191] device (tap6237bc28-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:56:08 np0005466030 NetworkManager[44960]: <info>  [1759409768.8216] device (tap6237bc28-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:56:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.828 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[20a3d939-8514-40c2-a00a-817ef74ea81a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:08 np0005466030 systemd[1]: Started Virtual Machine qemu-75-instance-00000099.
Oct  2 08:56:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:56:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:08.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:56:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.850 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8d4694-01ea-427a-8687-94974379bb8f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.877 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a4128ee4-189c-46bd-8114-dcd9dfa64631]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.881 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f521a6d9-f7ba-4117-b195-fb74ae06316c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:08 np0005466030 NetworkManager[44960]: <info>  [1759409768.8822] manager: (tap5f30d280-50): new Veth device (/org/freedesktop/NetworkManager/Devices/306)
Oct  2 08:56:08 np0005466030 systemd-udevd[290879]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:56:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.915 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[33bd94fd-dd72-41af-95da-990cc544d904]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.917 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[16b97c7b-35f2-45f9-9d77-ea6347b97235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:08 np0005466030 NetworkManager[44960]: <info>  [1759409768.9371] device (tap5f30d280-50): carrier: link connected
Oct  2 08:56:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.943 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[883dffa8-86ef-498f-8a2f-3c18c50b3ba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.959 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[961250d8-5830-42ee-b285-d0dacfecc55f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5f30d280-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:61:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 198], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763249, 'reachable_time': 38595, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290909, 'error': None, 'target': 'ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.975 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e055f45c-28aa-49ab-b88e-6bee407850f8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:61fc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763249, 'tstamp': 763249}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290910, 'error': None, 'target': 'ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.994 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c3c361-7058-4b48-9f4b-ae2a6ed23b01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5f30d280-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:61:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 198], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763249, 'reachable_time': 38595, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290911, 'error': None, 'target': 'ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.029 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8e38bbbf-db5e-41c7-85fc-55d98aab6747]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:09 np0005466030 nova_compute[230518]: 2025-10-02 12:56:09.066 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.088 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6952cee6-4e6a-48c3-b446-ffd83d6ddc74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.089 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f30d280-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.089 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.090 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5f30d280-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:09 np0005466030 nova_compute[230518]: 2025-10-02 12:56:09.090 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:09 np0005466030 nova_compute[230518]: 2025-10-02 12:56:09.091 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:09 np0005466030 nova_compute[230518]: 2025-10-02 12:56:09.091 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:09 np0005466030 nova_compute[230518]: 2025-10-02 12:56:09.091 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:56:09 np0005466030 nova_compute[230518]: 2025-10-02 12:56:09.091 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:09 np0005466030 kernel: tap5f30d280-50: entered promiscuous mode
Oct  2 08:56:09 np0005466030 NetworkManager[44960]: <info>  [1759409769.1105] manager: (tap5f30d280-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/307)
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.112 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5f30d280-50, col_values=(('external_ids', {'iface-id': 'ffbc5ae9-0886-4744-96e3-66a615b2f014'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:09 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:09Z|00646|binding|INFO|Releasing lport ffbc5ae9-0886-4744-96e3-66a615b2f014 from this chassis (sb_readonly=0)
Oct  2 08:56:09 np0005466030 nova_compute[230518]: 2025-10-02 12:56:09.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:09 np0005466030 nova_compute[230518]: 2025-10-02 12:56:09.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.127 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5f30d280-519f-404a-a0ab-55bcf121986d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5f30d280-519f-404a-a0ab-55bcf121986d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.128 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[41ab4b73-5aa4-4850-9ba6-2e4243b48499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.129 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-5f30d280-519f-404a-a0ab-55bcf121986d
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/5f30d280-519f-404a-a0ab-55bcf121986d.pid.haproxy
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 5f30d280-519f-404a-a0ab-55bcf121986d
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:56:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.130 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d', 'env', 'PROCESS_TAG=haproxy-5f30d280-519f-404a-a0ab-55bcf121986d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5f30d280-519f-404a-a0ab-55bcf121986d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:56:09 np0005466030 nova_compute[230518]: 2025-10-02 12:56:09.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:09 np0005466030 nova_compute[230518]: 2025-10-02 12:56:09.288 2 DEBUG nova.compute.manager [req-eb1ccc2f-d829-46ab-9822-4554853f1866 req-2fd6c6c8-0bf5-4f1e-bbec-4f5d2b131fb8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:09 np0005466030 nova_compute[230518]: 2025-10-02 12:56:09.288 2 DEBUG oslo_concurrency.lockutils [req-eb1ccc2f-d829-46ab-9822-4554853f1866 req-2fd6c6c8-0bf5-4f1e-bbec-4f5d2b131fb8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:09 np0005466030 nova_compute[230518]: 2025-10-02 12:56:09.289 2 DEBUG oslo_concurrency.lockutils [req-eb1ccc2f-d829-46ab-9822-4554853f1866 req-2fd6c6c8-0bf5-4f1e-bbec-4f5d2b131fb8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:09 np0005466030 nova_compute[230518]: 2025-10-02 12:56:09.289 2 DEBUG oslo_concurrency.lockutils [req-eb1ccc2f-d829-46ab-9822-4554853f1866 req-2fd6c6c8-0bf5-4f1e-bbec-4f5d2b131fb8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:09 np0005466030 nova_compute[230518]: 2025-10-02 12:56:09.289 2 DEBUG nova.compute.manager [req-eb1ccc2f-d829-46ab-9822-4554853f1866 req-2fd6c6c8-0bf5-4f1e-bbec-4f5d2b131fb8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Processing event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:56:09 np0005466030 podman[290975]: 2025-10-02 12:56:09.479083752 +0000 UTC m=+0.028392682 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:56:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:56:09 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3013568106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:56:09 np0005466030 nova_compute[230518]: 2025-10-02 12:56:09.918 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.826s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:10 np0005466030 nova_compute[230518]: 2025-10-02 12:56:10.119 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409770.1195302, f6c0a66d-64f1-484a-ae4e-ece25fddf736 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:56:10 np0005466030 nova_compute[230518]: 2025-10-02 12:56:10.120 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] VM Started (Lifecycle Event)#033[00m
Oct  2 08:56:10 np0005466030 nova_compute[230518]: 2025-10-02 12:56:10.123 2 DEBUG nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:56:10 np0005466030 nova_compute[230518]: 2025-10-02 12:56:10.126 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:56:10 np0005466030 nova_compute[230518]: 2025-10-02 12:56:10.129 2 INFO nova.virt.libvirt.driver [-] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Instance spawned successfully.#033[00m
Oct  2 08:56:10 np0005466030 nova_compute[230518]: 2025-10-02 12:56:10.129 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:56:10 np0005466030 podman[290975]: 2025-10-02 12:56:10.251794765 +0000 UTC m=+0.801103625 container create e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 08:56:10 np0005466030 nova_compute[230518]: 2025-10-02 12:56:10.440 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:10 np0005466030 nova_compute[230518]: 2025-10-02 12:56:10.447 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:56:10 np0005466030 nova_compute[230518]: 2025-10-02 12:56:10.451 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:10 np0005466030 nova_compute[230518]: 2025-10-02 12:56:10.452 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:10 np0005466030 nova_compute[230518]: 2025-10-02 12:56:10.452 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:10 np0005466030 nova_compute[230518]: 2025-10-02 12:56:10.452 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:10 np0005466030 nova_compute[230518]: 2025-10-02 12:56:10.453 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:10 np0005466030 nova_compute[230518]: 2025-10-02 12:56:10.453 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 08:56:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 11K writes, 59K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s#012Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.12 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1667 writes, 8136 keys, 1667 commit groups, 1.0 writes per commit group, ingest: 16.92 MB, 0.03 MB/s#012Interval WAL: 1666 writes, 1666 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     54.3      1.31              0.20        35    0.038       0      0       0.0       0.0#012  L6      1/0   10.25 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.6    116.2     98.0      3.36              0.95        34    0.099    215K    18K       0.0       0.0#012 Sum      1/0   10.25 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.6     83.5     85.7      4.67              1.15        69    0.068    215K    18K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.6     48.4     49.4      1.36              0.20        10    0.136     42K   2601       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    116.2     98.0      3.36              0.95        34    0.099    215K    18K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     54.4      1.31              0.20        34    0.039       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.070, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.39 GB write, 0.10 MB/s write, 0.38 GB read, 0.09 MB/s read, 4.7 seconds#012Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 1.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 43.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000246 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2532,41.99 MB,13.8134%) FilterBlock(69,603.55 KB,0.193882%) IndexBlock(69,1.02 MB,0.336341%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 08:56:10 np0005466030 systemd[1]: Started libpod-conmon-e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b.scope.
Oct  2 08:56:10 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:56:10 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/722445e320b05979190f4c22e24461a020bb72b6b235f9c49a131ad784afda90/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:56:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:56:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:10.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:56:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:10.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:10 np0005466030 podman[290975]: 2025-10-02 12:56:10.917453978 +0000 UTC m=+1.466762848 container init e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 08:56:10 np0005466030 podman[290975]: 2025-10-02 12:56:10.92835153 +0000 UTC m=+1.477660400 container start e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 08:56:10 np0005466030 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[291044]: [NOTICE]   (291058) : New worker (291060) forked
Oct  2 08:56:10 np0005466030 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[291044]: [NOTICE]   (291058) : Loading success.
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.014 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.014 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.017 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.018 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.032 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.032 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409770.1220043, f6c0a66d-64f1-484a-ae4e-ece25fddf736 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.033 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.071 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.076 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409770.125318, f6c0a66d-64f1-484a-ae4e-ece25fddf736 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.076 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.105 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.109 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:56:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:11Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:68:46:e8 10.100.0.5
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.121 2 INFO nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Took 13.38 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.122 2 DEBUG nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:11 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:11Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:46:e8 10.100.0.5
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.137 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.238 2 INFO nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Took 14.40 seconds to build instance.#033[00m
Oct  2 08:56:11 np0005466030 podman[291019]: 2025-10-02 12:56:11.250046092 +0000 UTC m=+1.282633510 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.254 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.256 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4038MB free_disk=20.814746856689453GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.256 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.256 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.280 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:11 np0005466030 podman[291020]: 2025-10-02 12:56:11.297698837 +0000 UTC m=+1.325533266 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.447 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance cba9797d-d8c0-42bb-99e8-21ff3406d1ff actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.448 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance f6c0a66d-64f1-484a-ae4e-ece25fddf736 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.448 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.449 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.647 2 DEBUG nova.compute.manager [req-41f4a164-9341-41d1-a15e-00f5261e0c9c req-e865f06d-2b50-4e6c-89bf-cc5b664dfbb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.648 2 DEBUG oslo_concurrency.lockutils [req-41f4a164-9341-41d1-a15e-00f5261e0c9c req-e865f06d-2b50-4e6c-89bf-cc5b664dfbb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.648 2 DEBUG oslo_concurrency.lockutils [req-41f4a164-9341-41d1-a15e-00f5261e0c9c req-e865f06d-2b50-4e6c-89bf-cc5b664dfbb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.649 2 DEBUG oslo_concurrency.lockutils [req-41f4a164-9341-41d1-a15e-00f5261e0c9c req-e865f06d-2b50-4e6c-89bf-cc5b664dfbb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.649 2 DEBUG nova.compute.manager [req-41f4a164-9341-41d1-a15e-00f5261e0c9c req-e865f06d-2b50-4e6c-89bf-cc5b664dfbb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] No waiting events found dispatching network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.649 2 WARNING nova.compute.manager [req-41f4a164-9341-41d1-a15e-00f5261e0c9c req-e865f06d-2b50-4e6c-89bf-cc5b664dfbb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received unexpected event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:56:11 np0005466030 nova_compute[230518]: 2025-10-02 12:56:11.751 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 08:56:12 np0005466030 nova_compute[230518]: 2025-10-02 12:56:12.014 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 08:56:12 np0005466030 nova_compute[230518]: 2025-10-02 12:56:12.015 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 08:56:12 np0005466030 nova_compute[230518]: 2025-10-02 12:56:12.040 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 08:56:12 np0005466030 nova_compute[230518]: 2025-10-02 12:56:12.103 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 08:56:12 np0005466030 nova_compute[230518]: 2025-10-02 12:56:12.184 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:12.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:12.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:56:13 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3594326844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:56:13 np0005466030 nova_compute[230518]: 2025-10-02 12:56:13.251 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:13 np0005466030 nova_compute[230518]: 2025-10-02 12:56:13.257 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:56:13 np0005466030 nova_compute[230518]: 2025-10-02 12:56:13.295 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:56:13 np0005466030 nova_compute[230518]: 2025-10-02 12:56:13.334 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:56:13 np0005466030 nova_compute[230518]: 2025-10-02 12:56:13.335 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:13 np0005466030 nova_compute[230518]: 2025-10-02 12:56:13.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:14 np0005466030 nova_compute[230518]: 2025-10-02 12:56:14.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:14 np0005466030 nova_compute[230518]: 2025-10-02 12:56:14.317 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:14 np0005466030 nova_compute[230518]: 2025-10-02 12:56:14.318 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:14 np0005466030 nova_compute[230518]: 2025-10-02 12:56:14.318 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:14.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:14.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:15 np0005466030 nova_compute[230518]: 2025-10-02 12:56:15.305 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Triggering sync for uuid cba9797d-d8c0-42bb-99e8-21ff3406d1ff _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 08:56:15 np0005466030 nova_compute[230518]: 2025-10-02 12:56:15.306 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Triggering sync for uuid f6c0a66d-64f1-484a-ae4e-ece25fddf736 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 08:56:15 np0005466030 nova_compute[230518]: 2025-10-02 12:56:15.306 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:15 np0005466030 nova_compute[230518]: 2025-10-02 12:56:15.306 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:15 np0005466030 nova_compute[230518]: 2025-10-02 12:56:15.307 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:15 np0005466030 nova_compute[230518]: 2025-10-02 12:56:15.307 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:15 np0005466030 nova_compute[230518]: 2025-10-02 12:56:15.308 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:15 np0005466030 nova_compute[230518]: 2025-10-02 12:56:15.308 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:56:15 np0005466030 nova_compute[230518]: 2025-10-02 12:56:15.309 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:15 np0005466030 nova_compute[230518]: 2025-10-02 12:56:15.309 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 08:56:15 np0005466030 nova_compute[230518]: 2025-10-02 12:56:15.340 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:15 np0005466030 nova_compute[230518]: 2025-10-02 12:56:15.346 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:16.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:16 np0005466030 nova_compute[230518]: 2025-10-02 12:56:16.800 2 INFO nova.compute.manager [None req-389aa584-869f-4e51-99b4-6c46dcb9550a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Get console output#033[00m
Oct  2 08:56:16 np0005466030 nova_compute[230518]: 2025-10-02 12:56:16.806 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 08:56:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:16.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.071 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.072 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.072 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.322 2 DEBUG oslo_concurrency.lockutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.323 2 DEBUG oslo_concurrency.lockutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.323 2 DEBUG oslo_concurrency.lockutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.323 2 DEBUG oslo_concurrency.lockutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.323 2 DEBUG oslo_concurrency.lockutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.325 2 INFO nova.compute.manager [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Terminating instance#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.326 2 DEBUG nova.compute.manager [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:56:17 np0005466030 kernel: tap8de06222-56 (unregistering): left promiscuous mode
Oct  2 08:56:17 np0005466030 NetworkManager[44960]: <info>  [1759409777.6287] device (tap8de06222-56): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:17Z|00647|binding|INFO|Releasing lport 8de06222-5603-4a49-ac47-7db15cbb7e03 from this chassis (sb_readonly=0)
Oct  2 08:56:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:17Z|00648|binding|INFO|Setting lport 8de06222-5603-4a49-ac47-7db15cbb7e03 down in Southbound
Oct  2 08:56:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:17Z|00649|binding|INFO|Removing iface tap8de06222-56 ovn-installed in OVS
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:17.653 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:46:e8 10.100.0.5'], port_security=['fa:16:3e:68:46:e8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'cba9797d-d8c0-42bb-99e8-21ff3406d1ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f08094d7-13d0-4e3d-b2f1-572cd1460e8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f86b2949-a213-4bd4-b601-e7dc17853f7f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8de06222-5603-4a49-ac47-7db15cbb7e03) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:56:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:17.655 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8de06222-5603-4a49-ac47-7db15cbb7e03 in datapath 24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 unbound from our chassis#033[00m
Oct  2 08:56:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:17.658 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:56:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:17.659 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[18e26f5a-a7cc-489c-9f43-3025a447aa70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:17.660 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 namespace which is not needed anymore#033[00m
Oct  2 08:56:17 np0005466030 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000097.scope: Deactivated successfully.
Oct  2 08:56:17 np0005466030 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000097.scope: Consumed 14.535s CPU time.
Oct  2 08:56:17 np0005466030 systemd-machined[188247]: Machine qemu-74-instance-00000097 terminated.
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.759 2 INFO nova.virt.libvirt.driver [-] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Instance destroyed successfully.#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.761 2 DEBUG nova.objects.instance [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid cba9797d-d8c0-42bb-99e8-21ff3406d1ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.864 2 DEBUG nova.virt.libvirt.vif [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:55:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1872091800',display_name='tempest-TestNetworkBasicOps-server-1872091800',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1872091800',id=151,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSGOLZUJ3FrTIeEm0YCEFIRKta1oOKUh3K2cHX7D75D8mOr7z91wWb7O7IlUA8JdoZAVPTTXOOargDtG7eOD7M62+PfvZG7TnqVCQDZ9PY6Jtt6S6zET7dnTNArJzZa2A==',key_name='tempest-TestNetworkBasicOps-802197789',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:55:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-50zfty89',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:55:53Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=cba9797d-d8c0-42bb-99e8-21ff3406d1ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.865 2 DEBUG nova.network.os_vif_util [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.866 2 DEBUG nova.network.os_vif_util [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:68:46:e8,bridge_name='br-int',has_traffic_filtering=True,id=8de06222-5603-4a49-ac47-7db15cbb7e03,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de06222-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.866 2 DEBUG os_vif [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:46:e8,bridge_name='br-int',has_traffic_filtering=True,id=8de06222-5603-4a49-ac47-7db15cbb7e03,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de06222-56') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.868 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8de06222-56, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.873 2 INFO os_vif [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:46:e8,bridge_name='br-int',has_traffic_filtering=True,id=8de06222-5603-4a49-ac47-7db15cbb7e03,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de06222-56')#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.960 2 DEBUG nova.compute.manager [req-3df0aeb1-b378-4d93-a64a-b5a7c62d1396 req-7a5b20fd-307f-44d0-9673-e1b78b83632c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-changed-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.961 2 DEBUG nova.compute.manager [req-3df0aeb1-b378-4d93-a64a-b5a7c62d1396 req-7a5b20fd-307f-44d0-9673-e1b78b83632c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Refreshing instance network info cache due to event network-changed-6237bc28-d790-4861-976b-cda2e8dc93a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.962 2 DEBUG oslo_concurrency.lockutils [req-3df0aeb1-b378-4d93-a64a-b5a7c62d1396 req-7a5b20fd-307f-44d0-9673-e1b78b83632c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.963 2 DEBUG oslo_concurrency.lockutils [req-3df0aeb1-b378-4d93-a64a-b5a7c62d1396 req-7a5b20fd-307f-44d0-9673-e1b78b83632c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:56:17 np0005466030 nova_compute[230518]: 2025-10-02 12:56:17.963 2 DEBUG nova.network.neutron [req-3df0aeb1-b378-4d93-a64a-b5a7c62d1396 req-7a5b20fd-307f-44d0-9673-e1b78b83632c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Refreshing network info cache for port 6237bc28-d790-4861-976b-cda2e8dc93a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:56:18 np0005466030 nova_compute[230518]: 2025-10-02 12:56:18.276 2 DEBUG nova.compute.manager [req-7a3f9a5e-4517-48a9-9e80-4fd9d0ab2c2d req-a19ccc73-36fd-4782-a832-26dc8187ec4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received event network-vif-unplugged-8de06222-5603-4a49-ac47-7db15cbb7e03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:18 np0005466030 nova_compute[230518]: 2025-10-02 12:56:18.277 2 DEBUG oslo_concurrency.lockutils [req-7a3f9a5e-4517-48a9-9e80-4fd9d0ab2c2d req-a19ccc73-36fd-4782-a832-26dc8187ec4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:18 np0005466030 nova_compute[230518]: 2025-10-02 12:56:18.277 2 DEBUG oslo_concurrency.lockutils [req-7a3f9a5e-4517-48a9-9e80-4fd9d0ab2c2d req-a19ccc73-36fd-4782-a832-26dc8187ec4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:18 np0005466030 nova_compute[230518]: 2025-10-02 12:56:18.277 2 DEBUG oslo_concurrency.lockutils [req-7a3f9a5e-4517-48a9-9e80-4fd9d0ab2c2d req-a19ccc73-36fd-4782-a832-26dc8187ec4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:18 np0005466030 nova_compute[230518]: 2025-10-02 12:56:18.278 2 DEBUG nova.compute.manager [req-7a3f9a5e-4517-48a9-9e80-4fd9d0ab2c2d req-a19ccc73-36fd-4782-a832-26dc8187ec4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] No waiting events found dispatching network-vif-unplugged-8de06222-5603-4a49-ac47-7db15cbb7e03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:56:18 np0005466030 nova_compute[230518]: 2025-10-02 12:56:18.278 2 DEBUG nova.compute.manager [req-7a3f9a5e-4517-48a9-9e80-4fd9d0ab2c2d req-a19ccc73-36fd-4782-a832-26dc8187ec4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received event network-vif-unplugged-8de06222-5603-4a49-ac47-7db15cbb7e03 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:56:18 np0005466030 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[290492]: [NOTICE]   (290496) : haproxy version is 2.8.14-c23fe91
Oct  2 08:56:18 np0005466030 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[290492]: [NOTICE]   (290496) : path to executable is /usr/sbin/haproxy
Oct  2 08:56:18 np0005466030 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[290492]: [WARNING]  (290496) : Exiting Master process...
Oct  2 08:56:18 np0005466030 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[290492]: [WARNING]  (290496) : Exiting Master process...
Oct  2 08:56:18 np0005466030 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[290492]: [ALERT]    (290496) : Current worker (290498) exited with code 143 (Terminated)
Oct  2 08:56:18 np0005466030 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[290492]: [WARNING]  (290496) : All workers exited. Exiting... (0)
Oct  2 08:56:18 np0005466030 systemd[1]: libpod-c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34.scope: Deactivated successfully.
Oct  2 08:56:18 np0005466030 podman[291133]: 2025-10-02 12:56:18.359028812 +0000 UTC m=+0.567539346 container died c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:56:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:56:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:18.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:56:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e344 e344: 3 total, 3 up, 3 in
Oct  2 08:56:18 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34-userdata-shm.mount: Deactivated successfully.
Oct  2 08:56:18 np0005466030 systemd[1]: var-lib-containers-storage-overlay-8fe32515c7b59ffa2b21e24ce17dce5eb4995b61dc532d34935df1e3ca314d83-merged.mount: Deactivated successfully.
Oct  2 08:56:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:18.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:18 np0005466030 podman[291133]: 2025-10-02 12:56:18.993556999 +0000 UTC m=+1.202067553 container cleanup c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 08:56:19 np0005466030 systemd[1]: libpod-conmon-c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34.scope: Deactivated successfully.
Oct  2 08:56:19 np0005466030 nova_compute[230518]: 2025-10-02 12:56:19.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:19 np0005466030 podman[291184]: 2025-10-02 12:56:19.394404225 +0000 UTC m=+0.376300187 container remove c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:56:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:19.402 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1c29faa5-9f88-4ccd-a535-3469dddd2781]: (4, ('Thu Oct  2 12:56:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 (c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34)\nc187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34\nThu Oct  2 12:56:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 (c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34)\nc187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:19.404 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0937c9-ba29-4f45-a2c0-1758ede133e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:19.405 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24ea8f37-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:19 np0005466030 nova_compute[230518]: 2025-10-02 12:56:19.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:19 np0005466030 kernel: tap24ea8f37-70: left promiscuous mode
Oct  2 08:56:19 np0005466030 nova_compute[230518]: 2025-10-02 12:56:19.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:19.434 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b8289526-32e5-41e8-b5b9-3919100f47b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:19.468 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[233709ec-b602-45e7-bdc1-52bdef1eb0e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:19.469 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[240e4974-1828-47bb-85a6-bf243f7dd648]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:19.484 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[797f4e63-a5ea-4d58-bb95-bba71f031b2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 761571, 'reachable_time': 40372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291200, 'error': None, 'target': 'ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:19.487 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:56:19 np0005466030 systemd[1]: run-netns-ovnmeta\x2d24ea8f37\x2d7508\x2d4c75\x2dae14\x2de4cc7b9f8e97.mount: Deactivated successfully.
Oct  2 08:56:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:19.488 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[db704aaa-d40c-4bca-af99-3573a1e0e6f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e345 e345: 3 total, 3 up, 3 in
Oct  2 08:56:20 np0005466030 nova_compute[230518]: 2025-10-02 12:56:20.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:20 np0005466030 nova_compute[230518]: 2025-10-02 12:56:20.422 2 DEBUG nova.compute.manager [req-1f2f7b61-9e15-4adc-90e7-3227dce8b613 req-306cda94-7409-46af-b149-6321eb3963e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received event network-vif-plugged-8de06222-5603-4a49-ac47-7db15cbb7e03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:20 np0005466030 nova_compute[230518]: 2025-10-02 12:56:20.424 2 DEBUG oslo_concurrency.lockutils [req-1f2f7b61-9e15-4adc-90e7-3227dce8b613 req-306cda94-7409-46af-b149-6321eb3963e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:20 np0005466030 nova_compute[230518]: 2025-10-02 12:56:20.424 2 DEBUG oslo_concurrency.lockutils [req-1f2f7b61-9e15-4adc-90e7-3227dce8b613 req-306cda94-7409-46af-b149-6321eb3963e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:20 np0005466030 nova_compute[230518]: 2025-10-02 12:56:20.425 2 DEBUG oslo_concurrency.lockutils [req-1f2f7b61-9e15-4adc-90e7-3227dce8b613 req-306cda94-7409-46af-b149-6321eb3963e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:20 np0005466030 nova_compute[230518]: 2025-10-02 12:56:20.425 2 DEBUG nova.compute.manager [req-1f2f7b61-9e15-4adc-90e7-3227dce8b613 req-306cda94-7409-46af-b149-6321eb3963e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] No waiting events found dispatching network-vif-plugged-8de06222-5603-4a49-ac47-7db15cbb7e03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:56:20 np0005466030 nova_compute[230518]: 2025-10-02 12:56:20.426 2 WARNING nova.compute.manager [req-1f2f7b61-9e15-4adc-90e7-3227dce8b613 req-306cda94-7409-46af-b149-6321eb3963e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received unexpected event network-vif-plugged-8de06222-5603-4a49-ac47-7db15cbb7e03 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:56:20 np0005466030 nova_compute[230518]: 2025-10-02 12:56:20.639 2 DEBUG nova.network.neutron [req-3df0aeb1-b378-4d93-a64a-b5a7c62d1396 req-7a5b20fd-307f-44d0-9673-e1b78b83632c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updated VIF entry in instance network info cache for port 6237bc28-d790-4861-976b-cda2e8dc93a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:56:20 np0005466030 nova_compute[230518]: 2025-10-02 12:56:20.640 2 DEBUG nova.network.neutron [req-3df0aeb1-b378-4d93-a64a-b5a7c62d1396 req-7a5b20fd-307f-44d0-9673-e1b78b83632c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updating instance_info_cache with network_info: [{"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:20 np0005466030 nova_compute[230518]: 2025-10-02 12:56:20.666 2 DEBUG oslo_concurrency.lockutils [req-3df0aeb1-b378-4d93-a64a-b5a7c62d1396 req-7a5b20fd-307f-44d0-9673-e1b78b83632c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:56:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:20.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:20.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:22 np0005466030 nova_compute[230518]: 2025-10-02 12:56:22.259 2 INFO nova.virt.libvirt.driver [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Deleting instance files /var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff_del#033[00m
Oct  2 08:56:22 np0005466030 nova_compute[230518]: 2025-10-02 12:56:22.260 2 INFO nova.virt.libvirt.driver [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Deletion of /var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff_del complete#033[00m
Oct  2 08:56:22 np0005466030 nova_compute[230518]: 2025-10-02 12:56:22.379 2 INFO nova.compute.manager [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Took 5.05 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:56:22 np0005466030 nova_compute[230518]: 2025-10-02 12:56:22.380 2 DEBUG oslo.service.loopingcall [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:56:22 np0005466030 nova_compute[230518]: 2025-10-02 12:56:22.380 2 DEBUG nova.compute.manager [-] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:56:22 np0005466030 nova_compute[230518]: 2025-10-02 12:56:22.380 2 DEBUG nova.network.neutron [-] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:56:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:22.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:22.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:22 np0005466030 nova_compute[230518]: 2025-10-02 12:56:22.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 e346: 3 total, 3 up, 3 in
Oct  2 08:56:23 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:23Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f2:c9:23 10.100.0.13
Oct  2 08:56:23 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:23Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f2:c9:23 10.100.0.13
Oct  2 08:56:23 np0005466030 nova_compute[230518]: 2025-10-02 12:56:23.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:23 np0005466030 nova_compute[230518]: 2025-10-02 12:56:23.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:56:23 np0005466030 nova_compute[230518]: 2025-10-02 12:56:23.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:56:23 np0005466030 nova_compute[230518]: 2025-10-02 12:56:23.149 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 08:56:23 np0005466030 nova_compute[230518]: 2025-10-02 12:56:23.653 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:56:23 np0005466030 nova_compute[230518]: 2025-10-02 12:56:23.653 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:56:23 np0005466030 nova_compute[230518]: 2025-10-02 12:56:23.653 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:56:23 np0005466030 nova_compute[230518]: 2025-10-02 12:56:23.653 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f6c0a66d-64f1-484a-ae4e-ece25fddf736 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:56:24 np0005466030 nova_compute[230518]: 2025-10-02 12:56:24.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:56:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:24.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:56:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:56:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:24.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:25.953 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:25.953 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:25.954 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:26 np0005466030 nova_compute[230518]: 2025-10-02 12:56:26.112 2 DEBUG nova.compute.manager [req-cee506ca-2716-4ee5-b615-6bb0eeefffdd req-22f092ec-d15e-4031-a9fc-be33e6d392c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received event network-vif-deleted-8de06222-5603-4a49-ac47-7db15cbb7e03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:26 np0005466030 nova_compute[230518]: 2025-10-02 12:56:26.112 2 INFO nova.compute.manager [req-cee506ca-2716-4ee5-b615-6bb0eeefffdd req-22f092ec-d15e-4031-a9fc-be33e6d392c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Neutron deleted interface 8de06222-5603-4a49-ac47-7db15cbb7e03; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 08:56:26 np0005466030 nova_compute[230518]: 2025-10-02 12:56:26.113 2 DEBUG nova.network.neutron [req-cee506ca-2716-4ee5-b615-6bb0eeefffdd req-22f092ec-d15e-4031-a9fc-be33e6d392c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:26 np0005466030 nova_compute[230518]: 2025-10-02 12:56:26.135 2 DEBUG nova.network.neutron [-] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:26 np0005466030 nova_compute[230518]: 2025-10-02 12:56:26.155 2 INFO nova.compute.manager [-] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Took 3.77 seconds to deallocate network for instance.#033[00m
Oct  2 08:56:26 np0005466030 nova_compute[230518]: 2025-10-02 12:56:26.160 2 DEBUG nova.compute.manager [req-cee506ca-2716-4ee5-b615-6bb0eeefffdd req-22f092ec-d15e-4031-a9fc-be33e6d392c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Detach interface failed, port_id=8de06222-5603-4a49-ac47-7db15cbb7e03, reason: Instance cba9797d-d8c0-42bb-99e8-21ff3406d1ff could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 08:56:26 np0005466030 nova_compute[230518]: 2025-10-02 12:56:26.227 2 DEBUG oslo_concurrency.lockutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:26 np0005466030 nova_compute[230518]: 2025-10-02 12:56:26.227 2 DEBUG oslo_concurrency.lockutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:26 np0005466030 nova_compute[230518]: 2025-10-02 12:56:26.306 2 DEBUG oslo_concurrency.processutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:26 np0005466030 nova_compute[230518]: 2025-10-02 12:56:26.693 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updating instance_info_cache with network_info: [{"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:26.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:26 np0005466030 nova_compute[230518]: 2025-10-02 12:56:26.712 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:56:26 np0005466030 nova_compute[230518]: 2025-10-02 12:56:26.712 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:56:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:56:26 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1580477687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:56:26 np0005466030 nova_compute[230518]: 2025-10-02 12:56:26.779 2 DEBUG oslo_concurrency.processutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:26 np0005466030 nova_compute[230518]: 2025-10-02 12:56:26.785 2 DEBUG nova.compute.provider_tree [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:56:26 np0005466030 nova_compute[230518]: 2025-10-02 12:56:26.812 2 DEBUG nova.scheduler.client.report [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:56:26 np0005466030 nova_compute[230518]: 2025-10-02 12:56:26.835 2 DEBUG oslo_concurrency.lockutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:26 np0005466030 nova_compute[230518]: 2025-10-02 12:56:26.858 2 INFO nova.scheduler.client.report [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance cba9797d-d8c0-42bb-99e8-21ff3406d1ff#033[00m
Oct  2 08:56:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:26.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:26 np0005466030 nova_compute[230518]: 2025-10-02 12:56:26.940 2 DEBUG oslo_concurrency.lockutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:27 np0005466030 nova_compute[230518]: 2025-10-02 12:56:27.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:28 np0005466030 nova_compute[230518]: 2025-10-02 12:56:28.707 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:28.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:28.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:29 np0005466030 nova_compute[230518]: 2025-10-02 12:56:29.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:30 np0005466030 nova_compute[230518]: 2025-10-02 12:56:30.139 2 INFO nova.compute.manager [None req-ec43cd3f-4ed5-419b-b163-436a3ffadd48 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Get console output#033[00m
Oct  2 08:56:30 np0005466030 nova_compute[230518]: 2025-10-02 12:56:30.145 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 08:56:30 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:30Z|00650|binding|INFO|Releasing lport ffbc5ae9-0886-4744-96e3-66a615b2f014 from this chassis (sb_readonly=0)
Oct  2 08:56:30 np0005466030 nova_compute[230518]: 2025-10-02 12:56:30.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:56:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:30.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:56:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:30.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:31 np0005466030 nova_compute[230518]: 2025-10-02 12:56:31.113 2 INFO nova.compute.manager [None req-506c1595-483a-428e-a160-0c2d18f1ab45 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Get console output#033[00m
Oct  2 08:56:31 np0005466030 nova_compute[230518]: 2025-10-02 12:56:31.118 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 08:56:31 np0005466030 podman[291225]: 2025-10-02 12:56:31.563354869 +0000 UTC m=+0.063714719 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:56:31 np0005466030 podman[291224]: 2025-10-02 12:56:31.661491488 +0000 UTC m=+0.153137706 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 08:56:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 08:56:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:32.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 08:56:32 np0005466030 nova_compute[230518]: 2025-10-02 12:56:32.759 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409777.7578926, cba9797d-d8c0-42bb-99e8-21ff3406d1ff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:56:32 np0005466030 nova_compute[230518]: 2025-10-02 12:56:32.759 2 INFO nova.compute.manager [-] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:56:32 np0005466030 nova_compute[230518]: 2025-10-02 12:56:32.798 2 DEBUG nova.compute.manager [None req-478e6ee6-55d9-4d7f-9bf7-ae31245cf451 - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:32 np0005466030 nova_compute[230518]: 2025-10-02 12:56:32.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:32.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:34 np0005466030 nova_compute[230518]: 2025-10-02 12:56:34.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:34.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:34.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:36.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:36.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:37 np0005466030 nova_compute[230518]: 2025-10-02 12:56:37.065 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Check if temp file /var/lib/nova/instances/tmppyu_z2_l exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Oct  2 08:56:37 np0005466030 nova_compute[230518]: 2025-10-02 12:56:37.066 2 DEBUG nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmppyu_z2_l',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='f6c0a66d-64f1-484a-ae4e-ece25fddf736',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Oct  2 08:56:37 np0005466030 nova_compute[230518]: 2025-10-02 12:56:37.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #115. Immutable memtables: 0.
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.060676) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 115
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798060708, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 2185, "num_deletes": 261, "total_data_size": 4878572, "memory_usage": 4945416, "flush_reason": "Manual Compaction"}
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #116: started
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798091866, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 116, "file_size": 3206826, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57752, "largest_seqno": 59932, "table_properties": {"data_size": 3197927, "index_size": 5457, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19297, "raw_average_key_size": 20, "raw_value_size": 3179726, "raw_average_value_size": 3382, "num_data_blocks": 236, "num_entries": 940, "num_filter_entries": 940, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409626, "oldest_key_time": 1759409626, "file_creation_time": 1759409798, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 31241 microseconds, and 6106 cpu microseconds.
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.091913) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #116: 3206826 bytes OK
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.091934) [db/memtable_list.cc:519] [default] Level-0 commit table #116 started
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.095898) [db/memtable_list.cc:722] [default] Level-0 commit table #116: memtable #1 done
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.095915) EVENT_LOG_v1 {"time_micros": 1759409798095910, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.095935) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 4868668, prev total WAL file size 4868668, number of live WAL files 2.
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000112.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.097215) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303133' seq:72057594037927935, type:22 .. '6C6F676D0032323636' seq:0, type:0; will stop at (end)
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [116(3131KB)], [114(10MB)]
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798097303, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [116], "files_L6": [114], "score": -1, "input_data_size": 13950276, "oldest_snapshot_seqno": -1}
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #117: 8563 keys, 13798790 bytes, temperature: kUnknown
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798190637, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 117, "file_size": 13798790, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13739907, "index_size": 36371, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21445, "raw_key_size": 221406, "raw_average_key_size": 25, "raw_value_size": 13586078, "raw_average_value_size": 1586, "num_data_blocks": 1431, "num_entries": 8563, "num_filter_entries": 8563, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409798, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 117, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.190967) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 13798790 bytes
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.194365) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.3 rd, 147.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 10.2 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(8.7) write-amplify(4.3) OK, records in: 9106, records dropped: 543 output_compression: NoCompression
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.194403) EVENT_LOG_v1 {"time_micros": 1759409798194390, "job": 72, "event": "compaction_finished", "compaction_time_micros": 93414, "compaction_time_cpu_micros": 31360, "output_level": 6, "num_output_files": 1, "total_output_size": 13798790, "num_input_records": 9106, "num_output_records": 8563, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798195263, "job": 72, "event": "table_file_deletion", "file_number": 116}
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000114.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798197751, "job": 72, "event": "table_file_deletion", "file_number": 114}
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.097069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.197866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.197874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.197877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.197880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:38 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.197883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:56:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:38.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:38.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:39 np0005466030 nova_compute[230518]: 2025-10-02 12:56:39.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:39 np0005466030 nova_compute[230518]: 2025-10-02 12:56:39.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:40.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:40 np0005466030 nova_compute[230518]: 2025-10-02 12:56:40.858 2 DEBUG nova.compute.manager [req-63cd6ccb-d995-4fc4-bd1d-199914f82ae5 req-f2e2df83-c80f-4ed9-ae7c-cee122ba57a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-unplugged-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:40 np0005466030 nova_compute[230518]: 2025-10-02 12:56:40.858 2 DEBUG oslo_concurrency.lockutils [req-63cd6ccb-d995-4fc4-bd1d-199914f82ae5 req-f2e2df83-c80f-4ed9-ae7c-cee122ba57a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:40 np0005466030 nova_compute[230518]: 2025-10-02 12:56:40.858 2 DEBUG oslo_concurrency.lockutils [req-63cd6ccb-d995-4fc4-bd1d-199914f82ae5 req-f2e2df83-c80f-4ed9-ae7c-cee122ba57a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:40 np0005466030 nova_compute[230518]: 2025-10-02 12:56:40.859 2 DEBUG oslo_concurrency.lockutils [req-63cd6ccb-d995-4fc4-bd1d-199914f82ae5 req-f2e2df83-c80f-4ed9-ae7c-cee122ba57a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:40 np0005466030 nova_compute[230518]: 2025-10-02 12:56:40.859 2 DEBUG nova.compute.manager [req-63cd6ccb-d995-4fc4-bd1d-199914f82ae5 req-f2e2df83-c80f-4ed9-ae7c-cee122ba57a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] No waiting events found dispatching network-vif-unplugged-6237bc28-d790-4861-976b-cda2e8dc93a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:56:40 np0005466030 nova_compute[230518]: 2025-10-02 12:56:40.859 2 DEBUG nova.compute.manager [req-63cd6ccb-d995-4fc4-bd1d-199914f82ae5 req-f2e2df83-c80f-4ed9-ae7c-cee122ba57a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-unplugged-6237bc28-d790-4861-976b-cda2e8dc93a9 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:56:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:40.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:41 np0005466030 nova_compute[230518]: 2025-10-02 12:56:41.443 2 INFO nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Took 3.37 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.#033[00m
Oct  2 08:56:41 np0005466030 nova_compute[230518]: 2025-10-02 12:56:41.444 2 DEBUG nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:56:41 np0005466030 nova_compute[230518]: 2025-10-02 12:56:41.461 2 DEBUG nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmppyu_z2_l',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='f6c0a66d-64f1-484a-ae4e-ece25fddf736',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(445166e1-a17f-4261-a4e4-29df5beb080c),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Oct  2 08:56:41 np0005466030 nova_compute[230518]: 2025-10-02 12:56:41.464 2 DEBUG nova.objects.instance [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lazy-loading 'migration_context' on Instance uuid f6c0a66d-64f1-484a-ae4e-ece25fddf736 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:56:41 np0005466030 nova_compute[230518]: 2025-10-02 12:56:41.465 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Oct  2 08:56:41 np0005466030 nova_compute[230518]: 2025-10-02 12:56:41.466 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Oct  2 08:56:41 np0005466030 nova_compute[230518]: 2025-10-02 12:56:41.467 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Oct  2 08:56:41 np0005466030 nova_compute[230518]: 2025-10-02 12:56:41.483 2 DEBUG nova.virt.libvirt.vif [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:55:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1633674489',display_name='tempest-TestNetworkAdvancedServerOps-server-1633674489',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1633674489',id=153,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNDGNPTCWHMGn2WmI5eqLh6YCpsFWhqzTd+bjnfQT4djkmca349UiY4uIU3+umB1w10tp651dyIkW2ibD5dk6ldf9xoeyNtwfhmhivNqkDC8s1WG5y+WB+iPGYUm0nb4Ew==',key_name='tempest-TestNetworkAdvancedServerOps-519769562',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:56:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-ksuo2zkj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:56:11Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=f6c0a66d-64f1-484a-ae4e-ece25fddf736,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:56:41 np0005466030 nova_compute[230518]: 2025-10-02 12:56:41.483 2 DEBUG nova.network.os_vif_util [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converting VIF {"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:56:41 np0005466030 nova_compute[230518]: 2025-10-02 12:56:41.484 2 DEBUG nova.network.os_vif_util [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:56:41 np0005466030 nova_compute[230518]: 2025-10-02 12:56:41.484 2 DEBUG nova.virt.libvirt.migration [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updating guest XML with vif config: <interface type="ethernet">
Oct  2 08:56:41 np0005466030 nova_compute[230518]:  <mac address="fa:16:3e:f2:c9:23"/>
Oct  2 08:56:41 np0005466030 nova_compute[230518]:  <model type="virtio"/>
Oct  2 08:56:41 np0005466030 nova_compute[230518]:  <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:56:41 np0005466030 nova_compute[230518]:  <mtu size="1442"/>
Oct  2 08:56:41 np0005466030 nova_compute[230518]:  <target dev="tap6237bc28-d7"/>
Oct  2 08:56:41 np0005466030 nova_compute[230518]: </interface>
Oct  2 08:56:41 np0005466030 nova_compute[230518]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Oct  2 08:56:41 np0005466030 nova_compute[230518]: 2025-10-02 12:56:41.485 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Oct  2 08:56:41 np0005466030 podman[291270]: 2025-10-02 12:56:41.79237185 +0000 UTC m=+0.047659737 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:56:41 np0005466030 podman[291271]: 2025-10-02 12:56:41.827186652 +0000 UTC m=+0.079403243 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 08:56:41 np0005466030 nova_compute[230518]: 2025-10-02 12:56:41.969 2 DEBUG nova.virt.libvirt.migration [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 08:56:41 np0005466030 nova_compute[230518]: 2025-10-02 12:56:41.970 2 INFO nova.virt.libvirt.migration [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Oct  2 08:56:42 np0005466030 nova_compute[230518]: 2025-10-02 12:56:42.060 2 INFO nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Oct  2 08:56:42 np0005466030 nova_compute[230518]: 2025-10-02 12:56:42.564 2 DEBUG nova.virt.libvirt.migration [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 08:56:42 np0005466030 nova_compute[230518]: 2025-10-02 12:56:42.565 2 DEBUG nova.virt.libvirt.migration [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct  2 08:56:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:42.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:42 np0005466030 nova_compute[230518]: 2025-10-02 12:56:42.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:56:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:42.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:56:42 np0005466030 nova_compute[230518]: 2025-10-02 12:56:42.958 2 DEBUG nova.compute.manager [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:42 np0005466030 nova_compute[230518]: 2025-10-02 12:56:42.958 2 DEBUG oslo_concurrency.lockutils [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:42 np0005466030 nova_compute[230518]: 2025-10-02 12:56:42.959 2 DEBUG oslo_concurrency.lockutils [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:42 np0005466030 nova_compute[230518]: 2025-10-02 12:56:42.959 2 DEBUG oslo_concurrency.lockutils [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:42 np0005466030 nova_compute[230518]: 2025-10-02 12:56:42.959 2 DEBUG nova.compute.manager [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] No waiting events found dispatching network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:56:42 np0005466030 nova_compute[230518]: 2025-10-02 12:56:42.959 2 WARNING nova.compute.manager [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received unexpected event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 for instance with vm_state active and task_state migrating.#033[00m
Oct  2 08:56:42 np0005466030 nova_compute[230518]: 2025-10-02 12:56:42.959 2 DEBUG nova.compute.manager [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-changed-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:42 np0005466030 nova_compute[230518]: 2025-10-02 12:56:42.960 2 DEBUG nova.compute.manager [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Refreshing instance network info cache due to event network-changed-6237bc28-d790-4861-976b-cda2e8dc93a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:56:42 np0005466030 nova_compute[230518]: 2025-10-02 12:56:42.960 2 DEBUG oslo_concurrency.lockutils [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:56:42 np0005466030 nova_compute[230518]: 2025-10-02 12:56:42.960 2 DEBUG oslo_concurrency.lockutils [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:56:42 np0005466030 nova_compute[230518]: 2025-10-02 12:56:42.960 2 DEBUG nova.network.neutron [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Refreshing network info cache for port 6237bc28-d790-4861-976b-cda2e8dc93a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:56:43 np0005466030 nova_compute[230518]: 2025-10-02 12:56:43.069 2 DEBUG nova.virt.libvirt.migration [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 08:56:43 np0005466030 nova_compute[230518]: 2025-10-02 12:56:43.069 2 DEBUG nova.virt.libvirt.migration [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct  2 08:56:43 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:43Z|00651|binding|INFO|Releasing lport ffbc5ae9-0886-4744-96e3-66a615b2f014 from this chassis (sb_readonly=0)
Oct  2 08:56:43 np0005466030 nova_compute[230518]: 2025-10-02 12:56:43.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:43 np0005466030 nova_compute[230518]: 2025-10-02 12:56:43.572 2 DEBUG nova.virt.libvirt.migration [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Oct  2 08:56:43 np0005466030 nova_compute[230518]: 2025-10-02 12:56:43.573 2 DEBUG nova.virt.libvirt.migration [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Oct  2 08:56:43 np0005466030 nova_compute[230518]: 2025-10-02 12:56:43.821 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409803.8211555, f6c0a66d-64f1-484a-ae4e-ece25fddf736 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:56:43 np0005466030 nova_compute[230518]: 2025-10-02 12:56:43.822 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:56:43 np0005466030 nova_compute[230518]: 2025-10-02 12:56:43.850 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:43 np0005466030 nova_compute[230518]: 2025-10-02 12:56:43.855 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:56:43 np0005466030 nova_compute[230518]: 2025-10-02 12:56:43.880 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.129 2 DEBUG nova.network.neutron [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updated VIF entry in instance network info cache for port 6237bc28-d790-4861-976b-cda2e8dc93a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.130 2 DEBUG nova.network.neutron [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updating instance_info_cache with network_info: [{"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.148 2 DEBUG oslo_concurrency.lockutils [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:56:44 np0005466030 kernel: tap6237bc28-d7 (unregistering): left promiscuous mode
Oct  2 08:56:44 np0005466030 NetworkManager[44960]: <info>  [1759409804.1584] device (tap6237bc28-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:44 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:44Z|00652|binding|INFO|Releasing lport 6237bc28-d790-4861-976b-cda2e8dc93a9 from this chassis (sb_readonly=0)
Oct  2 08:56:44 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:44Z|00653|binding|INFO|Setting lport 6237bc28-d790-4861-976b-cda2e8dc93a9 down in Southbound
Oct  2 08:56:44 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:44Z|00654|binding|INFO|Removing iface tap6237bc28-d7 ovn-installed in OVS
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.176 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:c9:23 10.100.0.13'], port_security=['fa:16:3e:f2:c9:23 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '17f11839-42bc-4ba9-92b4-53d0d88b0404'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f6c0a66d-64f1-484a-ae4e-ece25fddf736', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f30d280-519f-404a-a0ab-55bcf121986d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b996e116-bce4-4795-a454-74528663ff58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ccf6ef12-40dd-424f-abb3-518813baf9b4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6237bc28-d790-4861-976b-cda2e8dc93a9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 08:56:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.179 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6237bc28-d790-4861-976b-cda2e8dc93a9 in datapath 5f30d280-519f-404a-a0ab-55bcf121986d unbound from our chassis
Oct  2 08:56:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.181 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5f30d280-519f-404a-a0ab-55bcf121986d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  2 08:56:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.182 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0526ac-76bd-4fe5-8d6c-c74857e85518]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:56:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.183 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d namespace which is not needed anymore
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:56:44 np0005466030 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000099.scope: Deactivated successfully.
Oct  2 08:56:44 np0005466030 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000099.scope: Consumed 14.869s CPU time.
Oct  2 08:56:44 np0005466030 systemd-machined[188247]: Machine qemu-75-instance-00000099 terminated.
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:56:44 np0005466030 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[291044]: [NOTICE]   (291058) : haproxy version is 2.8.14-c23fe91
Oct  2 08:56:44 np0005466030 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[291044]: [NOTICE]   (291058) : path to executable is /usr/sbin/haproxy
Oct  2 08:56:44 np0005466030 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[291044]: [WARNING]  (291058) : Exiting Master process...
Oct  2 08:56:44 np0005466030 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[291044]: [ALERT]    (291058) : Current worker (291060) exited with code 143 (Terminated)
Oct  2 08:56:44 np0005466030 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[291044]: [WARNING]  (291058) : All workers exited. Exiting... (0)
Oct  2 08:56:44 np0005466030 systemd[1]: libpod-e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b.scope: Deactivated successfully.
Oct  2 08:56:44 np0005466030 podman[291334]: 2025-10-02 12:56:44.315131155 +0000 UTC m=+0.045437086 container died e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:56:44 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b-userdata-shm.mount: Deactivated successfully.
Oct  2 08:56:44 np0005466030 systemd[1]: var-lib-containers-storage-overlay-722445e320b05979190f4c22e24461a020bb72b6b235f9c49a131ad784afda90-merged.mount: Deactivated successfully.
Oct  2 08:56:44 np0005466030 podman[291334]: 2025-10-02 12:56:44.365522326 +0000 UTC m=+0.095828267 container cleanup e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 08:56:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:44 np0005466030 systemd[1]: libpod-conmon-e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b.scope: Deactivated successfully.
Oct  2 08:56:44 np0005466030 virtqemud[230067]: Unable to get XATTR trusted.libvirt.security.ref_selinux on f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk: No such file or directory
Oct  2 08:56:44 np0005466030 virtqemud[230067]: Unable to get XATTR trusted.libvirt.security.ref_dac on f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk: No such file or directory
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.434 2 DEBUG nova.virt.libvirt.guest [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.435 2 INFO nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Migration operation has completed
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.435 2 INFO nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] _post_live_migration() is started..
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.440 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.441 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.441 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Oct  2 08:56:44 np0005466030 podman[291363]: 2025-10-02 12:56:44.443455131 +0000 UTC m=+0.059484156 container remove e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:56:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.449 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2af29e7d-d4a5-401e-adab-b210ce2a3e10]: (4, ('Thu Oct  2 12:56:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d (e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b)\ne6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b\nThu Oct  2 12:56:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d (e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b)\ne6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:56:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.451 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfb3961-46d4-448a-bb5e-3b52a81ab42c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:56:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.452 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f30d280-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:56:44 np0005466030 kernel: tap5f30d280-50: left promiscuous mode
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:56:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.473 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ee1318b9-69a3-4bd6-a48d-0c67552c3aeb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:56:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.505 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b355aa-c128-4b0a-90b9-6f6ca8d714d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:56:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.507 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[807dd67a-b1ab-4d1e-8228-fa40b0d7a0f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:56:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.522 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[62d9dc27-688b-4fbb-9366-187cce81da25]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763243, 'reachable_time': 37745, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291407, 'error': None, 'target': 'ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.524 2 DEBUG nova.compute.manager [req-c110bf98-1393-4f37-b8e5-4dcc3ff7adbb req-b89f38c5-0dee-429e-a9c8-2fce2c2c5078 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-unplugged-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:56:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.524 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.524 2 DEBUG oslo_concurrency.lockutils [req-c110bf98-1393-4f37-b8e5-4dcc3ff7adbb req-b89f38c5-0dee-429e-a9c8-2fce2c2c5078 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:56:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.524 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[5dfa10b2-e880-4b95-a9ec-753aa949b8ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.525 2 DEBUG oslo_concurrency.lockutils [req-c110bf98-1393-4f37-b8e5-4dcc3ff7adbb req-b89f38c5-0dee-429e-a9c8-2fce2c2c5078 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.525 2 DEBUG oslo_concurrency.lockutils [req-c110bf98-1393-4f37-b8e5-4dcc3ff7adbb req-b89f38c5-0dee-429e-a9c8-2fce2c2c5078 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.525 2 DEBUG nova.compute.manager [req-c110bf98-1393-4f37-b8e5-4dcc3ff7adbb req-b89f38c5-0dee-429e-a9c8-2fce2c2c5078 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] No waiting events found dispatching network-vif-unplugged-6237bc28-d790-4861-976b-cda2e8dc93a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:56:44 np0005466030 systemd[1]: run-netns-ovnmeta\x2d5f30d280\x2d519f\x2d404a\x2da0ab\x2d55bcf121986d.mount: Deactivated successfully.
Oct  2 08:56:44 np0005466030 nova_compute[230518]: 2025-10-02 12:56:44.525 2 DEBUG nova.compute.manager [req-c110bf98-1393-4f37-b8e5-4dcc3ff7adbb req-b89f38c5-0dee-429e-a9c8-2fce2c2c5078 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-unplugged-6237bc28-d790-4861-976b-cda2e8dc93a9 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  2 08:56:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:44.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:44.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:56:44 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1756742258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:56:45 np0005466030 nova_compute[230518]: 2025-10-02 12:56:45.188 2 DEBUG nova.network.neutron [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Activated binding for port 6237bc28-d790-4861-976b-cda2e8dc93a9 and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Oct  2 08:56:45 np0005466030 nova_compute[230518]: 2025-10-02 12:56:45.189 2 DEBUG nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Oct  2 08:56:45 np0005466030 nova_compute[230518]: 2025-10-02 12:56:45.190 2 DEBUG nova.virt.libvirt.vif [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:55:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1633674489',display_name='tempest-TestNetworkAdvancedServerOps-server-1633674489',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1633674489',id=153,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNDGNPTCWHMGn2WmI5eqLh6YCpsFWhqzTd+bjnfQT4djkmca349UiY4uIU3+umB1w10tp651dyIkW2ibD5dk6ldf9xoeyNtwfhmhivNqkDC8s1WG5y+WB+iPGYUm0nb4Ew==',key_name='tempest-TestNetworkAdvancedServerOps-519769562',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:56:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-ksuo2zkj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:56:32Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=f6c0a66d-64f1-484a-ae4e-ece25fddf736,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct  2 08:56:45 np0005466030 nova_compute[230518]: 2025-10-02 12:56:45.190 2 DEBUG nova.network.os_vif_util [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converting VIF {"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 08:56:45 np0005466030 nova_compute[230518]: 2025-10-02 12:56:45.190 2 DEBUG nova.network.os_vif_util [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 08:56:45 np0005466030 nova_compute[230518]: 2025-10-02 12:56:45.191 2 DEBUG os_vif [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  2 08:56:45 np0005466030 nova_compute[230518]: 2025-10-02 12:56:45.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:56:45 np0005466030 nova_compute[230518]: 2025-10-02 12:56:45.193 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6237bc28-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 08:56:45 np0005466030 nova_compute[230518]: 2025-10-02 12:56:45.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:56:45 np0005466030 nova_compute[230518]: 2025-10-02 12:56:45.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:56:45 np0005466030 nova_compute[230518]: 2025-10-02 12:56:45.197 2 INFO os_vif [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7')
Oct  2 08:56:45 np0005466030 nova_compute[230518]: 2025-10-02 12:56:45.198 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:56:45 np0005466030 nova_compute[230518]: 2025-10-02 12:56:45.198 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:56:45 np0005466030 nova_compute[230518]: 2025-10-02 12:56:45.198 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:56:45 np0005466030 nova_compute[230518]: 2025-10-02 12:56:45.198 2 DEBUG nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Oct  2 08:56:45 np0005466030 nova_compute[230518]: 2025-10-02 12:56:45.199 2 INFO nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Deleting instance files /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736_del
Oct  2 08:56:45 np0005466030 nova_compute[230518]: 2025-10-02 12:56:45.199 2 INFO nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Deletion of /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736_del complete
Oct  2 08:56:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:56:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:56:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:56:46 np0005466030 nova_compute[230518]: 2025-10-02 12:56:46.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:56:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:46.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:46 np0005466030 nova_compute[230518]: 2025-10-02 12:56:46.739 2 DEBUG nova.compute.manager [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:56:46 np0005466030 nova_compute[230518]: 2025-10-02 12:56:46.740 2 DEBUG oslo_concurrency.lockutils [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:56:46 np0005466030 nova_compute[230518]: 2025-10-02 12:56:46.740 2 DEBUG oslo_concurrency.lockutils [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:56:46 np0005466030 nova_compute[230518]: 2025-10-02 12:56:46.741 2 DEBUG oslo_concurrency.lockutils [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:56:46 np0005466030 nova_compute[230518]: 2025-10-02 12:56:46.741 2 DEBUG nova.compute.manager [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] No waiting events found dispatching network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:56:46 np0005466030 nova_compute[230518]: 2025-10-02 12:56:46.742 2 WARNING nova.compute.manager [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received unexpected event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 for instance with vm_state active and task_state migrating.
Oct  2 08:56:46 np0005466030 nova_compute[230518]: 2025-10-02 12:56:46.742 2 DEBUG nova.compute.manager [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:56:46 np0005466030 nova_compute[230518]: 2025-10-02 12:56:46.743 2 DEBUG oslo_concurrency.lockutils [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:56:46 np0005466030 nova_compute[230518]: 2025-10-02 12:56:46.743 2 DEBUG oslo_concurrency.lockutils [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:56:46 np0005466030 nova_compute[230518]: 2025-10-02 12:56:46.744 2 DEBUG oslo_concurrency.lockutils [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:56:46 np0005466030 nova_compute[230518]: 2025-10-02 12:56:46.744 2 DEBUG nova.compute.manager [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] No waiting events found dispatching network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:56:46 np0005466030 nova_compute[230518]: 2025-10-02 12:56:46.745 2 WARNING nova.compute.manager [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received unexpected event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 for instance with vm_state active and task_state migrating.
Oct  2 08:56:46 np0005466030 nova_compute[230518]: 2025-10-02 12:56:46.745 2 DEBUG nova.compute.manager [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 08:56:46 np0005466030 nova_compute[230518]: 2025-10-02 12:56:46.746 2 DEBUG oslo_concurrency.lockutils [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:56:46 np0005466030 nova_compute[230518]: 2025-10-02 12:56:46.746 2 DEBUG oslo_concurrency.lockutils [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:56:46 np0005466030 nova_compute[230518]: 2025-10-02 12:56:46.747 2 DEBUG oslo_concurrency.lockutils [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:56:46 np0005466030 nova_compute[230518]: 2025-10-02 12:56:46.747 2 DEBUG nova.compute.manager [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] No waiting events found dispatching network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 08:56:46 np0005466030 nova_compute[230518]: 2025-10-02 12:56:46.748 2 WARNING nova.compute.manager [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received unexpected event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 for instance with vm_state active and task_state migrating.
Oct  2 08:56:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:46.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:47 np0005466030 nova_compute[230518]: 2025-10-02 12:56:47.990 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:56:47 np0005466030 nova_compute[230518]: 2025-10-02 12:56:47.991 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.008 2 DEBUG nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.079 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.079 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.088 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.089 2 INFO nova.compute.claims [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Claim successful on node compute-1.ctlplane.example.com
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.192 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:56:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:56:48 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2370856319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.598 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.603 2 DEBUG nova.compute.provider_tree [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.619 2 DEBUG nova.scheduler.client.report [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.650 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.651 2 DEBUG nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.697 2 DEBUG nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.697 2 DEBUG nova.network.neutron [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.720 2 INFO nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.737 2 DEBUG nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 08:56:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:48.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.829 2 DEBUG nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.831 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.831 2 INFO nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Creating image(s)
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.873 2 DEBUG nova.storage.rbd_utils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:56:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:48.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.903 2 DEBUG nova.storage.rbd_utils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.938 2 DEBUG nova.storage.rbd_utils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:56:48 np0005466030 nova_compute[230518]: 2025-10-02 12:56:48.943 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:56:49 np0005466030 nova_compute[230518]: 2025-10-02 12:56:49.003 2 DEBUG nova.policy [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dfe96a8fa48c4243b6262a0359f5b208', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd8f55f9d9ed144629bd9a03edb020c4f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 08:56:49 np0005466030 nova_compute[230518]: 2025-10-02 12:56:49.020 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:56:49 np0005466030 nova_compute[230518]: 2025-10-02 12:56:49.021 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:56:49 np0005466030 nova_compute[230518]: 2025-10-02 12:56:49.021 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:56:49 np0005466030 nova_compute[230518]: 2025-10-02 12:56:49.021 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:56:49 np0005466030 nova_compute[230518]: 2025-10-02 12:56:49.055 2 DEBUG nova.storage.rbd_utils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 08:56:49 np0005466030 nova_compute[230518]: 2025-10-02 12:56:49.060 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 08:56:49 np0005466030 nova_compute[230518]: 2025-10-02 12:56:49.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:56:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:49 np0005466030 nova_compute[230518]: 2025-10-02 12:56:49.520 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 08:56:49 np0005466030 nova_compute[230518]: 2025-10-02 12:56:49.595 2 DEBUG nova.storage.rbd_utils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] resizing rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 08:56:49 np0005466030 nova_compute[230518]: 2025-10-02 12:56:49.628 2 DEBUG nova.network.neutron [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Successfully created port: 1ba1c69e-5414-4423-aa38-aea149f6f506 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 08:56:49 np0005466030 nova_compute[230518]: 2025-10-02 12:56:49.715 2 DEBUG nova.objects.instance [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'migration_context' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 08:56:49 np0005466030 nova_compute[230518]: 2025-10-02 12:56:49.731 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 08:56:49 np0005466030 nova_compute[230518]: 2025-10-02 12:56:49.732 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Ensure instance console log exists: /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 08:56:49 np0005466030 nova_compute[230518]: 2025-10-02 12:56:49.732 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:56:49 np0005466030 nova_compute[230518]: 2025-10-02 12:56:49.733 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:56:49 np0005466030 nova_compute[230518]: 2025-10-02 12:56:49.733 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.183 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.184 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.184 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.205 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.206 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.206 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.206 2 DEBUG nova.compute.resource_tracker [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.206 2 DEBUG oslo_concurrency.processutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:50.349 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:50.350 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.579 2 DEBUG nova.network.neutron [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Successfully updated port: 1ba1c69e-5414-4423-aa38-aea149f6f506 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.594 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.594 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquired lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.595 2 DEBUG nova.network.neutron [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:56:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:56:50 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1101605346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.632 2 DEBUG oslo_concurrency.processutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.659 2 DEBUG nova.compute.manager [req-43a8a83b-cbec-4ed2-9d0b-4cc6d0c47401 req-c5289881-213f-4b9c-b496-aa181fb4c258 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-changed-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.660 2 DEBUG nova.compute.manager [req-43a8a83b-cbec-4ed2-9d0b-4cc6d0c47401 req-c5289881-213f-4b9c-b496-aa181fb4c258 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Refreshing instance network info cache due to event network-changed-1ba1c69e-5414-4423-aa38-aea149f6f506. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.661 2 DEBUG oslo_concurrency.lockutils [req-43a8a83b-cbec-4ed2-9d0b-4cc6d0c47401 req-c5289881-213f-4b9c-b496-aa181fb4c258 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:56:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:56:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:50.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.831 2 WARNING nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.833 2 DEBUG nova.compute.resource_tracker [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4317MB free_disk=20.851459503173828GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.833 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.834 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.876 2 DEBUG nova.compute.resource_tracker [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Migration for instance f6c0a66d-64f1-484a-ae4e-ece25fddf736 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.897 2 DEBUG nova.compute.resource_tracker [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Oct  2 08:56:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:50.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.917 2 DEBUG nova.compute.resource_tracker [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Migration 445166e1-a17f-4261-a4e4-29df5beb080c is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.918 2 DEBUG nova.compute.resource_tracker [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Instance c793e384-4ddd-4531-b6fd-172ee2fcbd4d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.918 2 DEBUG nova.compute.resource_tracker [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.919 2 DEBUG nova.compute.resource_tracker [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:56:50 np0005466030 nova_compute[230518]: 2025-10-02 12:56:50.980 2 DEBUG oslo_concurrency.processutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:51 np0005466030 nova_compute[230518]: 2025-10-02 12:56:51.292 2 DEBUG nova.network.neutron [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:56:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:56:51 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1856637000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:56:51 np0005466030 nova_compute[230518]: 2025-10-02 12:56:51.406 2 DEBUG oslo_concurrency.processutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:51 np0005466030 nova_compute[230518]: 2025-10-02 12:56:51.411 2 DEBUG nova.compute.provider_tree [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:56:51 np0005466030 nova_compute[230518]: 2025-10-02 12:56:51.429 2 DEBUG nova.scheduler.client.report [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:56:51 np0005466030 nova_compute[230518]: 2025-10-02 12:56:51.455 2 DEBUG nova.compute.resource_tracker [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:56:51 np0005466030 nova_compute[230518]: 2025-10-02 12:56:51.455 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:51 np0005466030 nova_compute[230518]: 2025-10-02 12:56:51.460 2 INFO nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Migrating instance to compute-0.ctlplane.example.com finished successfully.#033[00m
Oct  2 08:56:51 np0005466030 nova_compute[230518]: 2025-10-02 12:56:51.561 2 INFO nova.scheduler.client.report [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Deleted allocation for migration 445166e1-a17f-4261-a4e4-29df5beb080c#033[00m
Oct  2 08:56:51 np0005466030 nova_compute[230518]: 2025-10-02 12:56:51.561 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Oct  2 08:56:52 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:56:52 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.495 2 DEBUG nova.network.neutron [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Updating instance_info_cache with network_info: [{"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.579 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Releasing lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.580 2 DEBUG nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance network_info: |[{"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.581 2 DEBUG oslo_concurrency.lockutils [req-43a8a83b-cbec-4ed2-9d0b-4cc6d0c47401 req-c5289881-213f-4b9c-b496-aa181fb4c258 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.581 2 DEBUG nova.network.neutron [req-43a8a83b-cbec-4ed2-9d0b-4cc6d0c47401 req-c5289881-213f-4b9c-b496-aa181fb4c258 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Refreshing network info cache for port 1ba1c69e-5414-4423-aa38-aea149f6f506 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.585 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Start _get_guest_xml network_info=[{"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.590 2 WARNING nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.594 2 DEBUG nova.virt.libvirt.host [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.595 2 DEBUG nova.virt.libvirt.host [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.598 2 DEBUG nova.virt.libvirt.host [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.599 2 DEBUG nova.virt.libvirt.host [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.600 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.601 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.601 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.602 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.602 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.602 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.603 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.603 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.603 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.604 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.604 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.604 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:56:52 np0005466030 nova_compute[230518]: 2025-10-02 12:56:52.608 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:52.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:52.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:56:53 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/233049042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.113 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.148 2 DEBUG nova.storage.rbd_utils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.152 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:56:53 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1654015826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.569 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.571 2 DEBUG nova.virt.libvirt.vif [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:56:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-501698818',display_name='tempest-ServerRescueTestJSON-server-501698818',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-501698818',id=156,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8f55f9d9ed144629bd9a03edb020c4f',ramdisk_id='',reservation_id='r-qzbsd8d7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-791200975',owner_user_name='tempest-ServerRescueTestJSON-791200975-
project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:56:48Z,user_data=None,user_id='dfe96a8fa48c4243b6262a0359f5b208',uuid=c793e384-4ddd-4531-b6fd-172ee2fcbd4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.571 2 DEBUG nova.network.os_vif_util [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converting VIF {"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.572 2 DEBUG nova.network.os_vif_util [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:9d:59,bridge_name='br-int',has_traffic_filtering=True,id=1ba1c69e-5414-4423-aa38-aea149f6f506,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba1c69e-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.573 2 DEBUG nova.objects.instance [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'pci_devices' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.595 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:56:53 np0005466030 nova_compute[230518]:  <uuid>c793e384-4ddd-4531-b6fd-172ee2fcbd4d</uuid>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:  <name>instance-0000009c</name>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerRescueTestJSON-server-501698818</nova:name>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:56:52</nova:creationTime>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:56:53 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:        <nova:user uuid="dfe96a8fa48c4243b6262a0359f5b208">tempest-ServerRescueTestJSON-791200975-project-member</nova:user>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:        <nova:project uuid="d8f55f9d9ed144629bd9a03edb020c4f">tempest-ServerRescueTestJSON-791200975</nova:project>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:        <nova:port uuid="1ba1c69e-5414-4423-aa38-aea149f6f506">
Oct  2 08:56:53 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <entry name="serial">c793e384-4ddd-4531-b6fd-172ee2fcbd4d</entry>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <entry name="uuid">c793e384-4ddd-4531-b6fd-172ee2fcbd4d</entry>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk">
Oct  2 08:56:53 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:56:53 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config">
Oct  2 08:56:53 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:56:53 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:27:9d:59"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <target dev="tap1ba1c69e-54"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/console.log" append="off"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:56:53 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:56:53 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:56:53 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:56:53 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.597 2 DEBUG nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Preparing to wait for external event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.597 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.597 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.598 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.598 2 DEBUG nova.virt.libvirt.vif [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:56:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-501698818',display_name='tempest-ServerRescueTestJSON-server-501698818',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-501698818',id=156,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8f55f9d9ed144629bd9a03edb020c4f',ramdisk_id='',reservation_id='r-qzbsd8d7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-791200975',owner_user_name='tempest-ServerRescueTestJSON-
791200975-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:56:48Z,user_data=None,user_id='dfe96a8fa48c4243b6262a0359f5b208',uuid=c793e384-4ddd-4531-b6fd-172ee2fcbd4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.599 2 DEBUG nova.network.os_vif_util [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converting VIF {"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.599 2 DEBUG nova.network.os_vif_util [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:9d:59,bridge_name='br-int',has_traffic_filtering=True,id=1ba1c69e-5414-4423-aa38-aea149f6f506,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba1c69e-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.600 2 DEBUG os_vif [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:9d:59,bridge_name='br-int',has_traffic_filtering=True,id=1ba1c69e-5414-4423-aa38-aea149f6f506,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba1c69e-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.604 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ba1c69e-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.605 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1ba1c69e-54, col_values=(('external_ids', {'iface-id': '1ba1c69e-5414-4423-aa38-aea149f6f506', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:9d:59', 'vm-uuid': 'c793e384-4ddd-4531-b6fd-172ee2fcbd4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:53 np0005466030 NetworkManager[44960]: <info>  [1759409813.6084] manager: (tap1ba1c69e-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/308)
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.614 2 INFO os_vif [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:9d:59,bridge_name='br-int',has_traffic_filtering=True,id=1ba1c69e-5414-4423-aa38-aea149f6f506,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba1c69e-54')#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.772 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.772 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.773 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No VIF found with MAC fa:16:3e:27:9d:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.773 2 INFO nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Using config drive#033[00m
Oct  2 08:56:53 np0005466030 nova_compute[230518]: 2025-10-02 12:56:53.795 2 DEBUG nova.storage.rbd_utils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:56:54 np0005466030 nova_compute[230518]: 2025-10-02 12:56:54.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:54 np0005466030 nova_compute[230518]: 2025-10-02 12:56:54.469 2 INFO nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Creating config drive at /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config#033[00m
Oct  2 08:56:54 np0005466030 nova_compute[230518]: 2025-10-02 12:56:54.473 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmoape_iy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:54 np0005466030 nova_compute[230518]: 2025-10-02 12:56:54.605 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmoape_iy" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:54 np0005466030 nova_compute[230518]: 2025-10-02 12:56:54.638 2 DEBUG nova.storage.rbd_utils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:56:54 np0005466030 nova_compute[230518]: 2025-10-02 12:56:54.643 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:56:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:56:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:54.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:56:54 np0005466030 nova_compute[230518]: 2025-10-02 12:56:54.816 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:56:54 np0005466030 nova_compute[230518]: 2025-10-02 12:56:54.817 2 INFO nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Deleting local config drive /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config because it was imported into RBD.#033[00m
Oct  2 08:56:54 np0005466030 kernel: tap1ba1c69e-54: entered promiscuous mode
Oct  2 08:56:54 np0005466030 NetworkManager[44960]: <info>  [1759409814.8938] manager: (tap1ba1c69e-54): new Tun device (/org/freedesktop/NetworkManager/Devices/309)
Oct  2 08:56:54 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:54Z|00655|binding|INFO|Claiming lport 1ba1c69e-5414-4423-aa38-aea149f6f506 for this chassis.
Oct  2 08:56:54 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:54Z|00656|binding|INFO|1ba1c69e-5414-4423-aa38-aea149f6f506: Claiming fa:16:3e:27:9d:59 10.100.0.3
Oct  2 08:56:54 np0005466030 nova_compute[230518]: 2025-10-02 12:56:54.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:54.903 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:9d:59 10.100.0.3'], port_security=['fa:16:3e:27:9d:59 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c793e384-4ddd-4531-b6fd-172ee2fcbd4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8f55f9d9ed144629bd9a03edb020c4f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a444c762-477f-4077-ba6b-7c28af4142c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7799346f-74e1-4324-b5da-a7c921979851, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=1ba1c69e-5414-4423-aa38-aea149f6f506) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:56:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:54.904 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 1ba1c69e-5414-4423-aa38-aea149f6f506 in datapath a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 bound to our chassis#033[00m
Oct  2 08:56:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:54.906 138374 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 08:56:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:56:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:54.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:56:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:56:54.908 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[40806447-26e6-41d8-9ba6-d2424055d3c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:56:54 np0005466030 nova_compute[230518]: 2025-10-02 12:56:54.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:54 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:54Z|00657|binding|INFO|Setting lport 1ba1c69e-5414-4423-aa38-aea149f6f506 ovn-installed in OVS
Oct  2 08:56:54 np0005466030 ovn_controller[129257]: 2025-10-02T12:56:54Z|00658|binding|INFO|Setting lport 1ba1c69e-5414-4423-aa38-aea149f6f506 up in Southbound
Oct  2 08:56:54 np0005466030 nova_compute[230518]: 2025-10-02 12:56:54.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:54 np0005466030 nova_compute[230518]: 2025-10-02 12:56:54.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:54 np0005466030 systemd-udevd[291943]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:56:54 np0005466030 systemd-machined[188247]: New machine qemu-76-instance-0000009c.
Oct  2 08:56:54 np0005466030 NetworkManager[44960]: <info>  [1759409814.9755] device (tap1ba1c69e-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:56:54 np0005466030 NetworkManager[44960]: <info>  [1759409814.9768] device (tap1ba1c69e-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:56:54 np0005466030 systemd[1]: Started Virtual Machine qemu-76-instance-0000009c.
Oct  2 08:56:54 np0005466030 nova_compute[230518]: 2025-10-02 12:56:54.997 2 DEBUG nova.network.neutron [req-43a8a83b-cbec-4ed2-9d0b-4cc6d0c47401 req-c5289881-213f-4b9c-b496-aa181fb4c258 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Updated VIF entry in instance network info cache for port 1ba1c69e-5414-4423-aa38-aea149f6f506. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:56:54 np0005466030 nova_compute[230518]: 2025-10-02 12:56:54.998 2 DEBUG nova.network.neutron [req-43a8a83b-cbec-4ed2-9d0b-4cc6d0c47401 req-c5289881-213f-4b9c-b496-aa181fb4c258 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Updating instance_info_cache with network_info: [{"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:56:55 np0005466030 nova_compute[230518]: 2025-10-02 12:56:55.024 2 DEBUG oslo_concurrency.lockutils [req-43a8a83b-cbec-4ed2-9d0b-4cc6d0c47401 req-c5289881-213f-4b9c-b496-aa181fb4c258 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:56:55 np0005466030 nova_compute[230518]: 2025-10-02 12:56:55.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:56:55 np0005466030 nova_compute[230518]: 2025-10-02 12:56:55.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 08:56:55 np0005466030 nova_compute[230518]: 2025-10-02 12:56:55.090 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 08:56:55 np0005466030 nova_compute[230518]: 2025-10-02 12:56:55.997 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409815.997618, c793e384-4ddd-4531-b6fd-172ee2fcbd4d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:56:55 np0005466030 nova_compute[230518]: 2025-10-02 12:56:55.998 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] VM Started (Lifecycle Event)#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.034 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.038 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409816.0021856, c793e384-4ddd-4531-b6fd-172ee2fcbd4d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.039 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.057 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.060 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.076 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:56:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:56.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:56.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.956 2 DEBUG nova.compute.manager [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.957 2 DEBUG oslo_concurrency.lockutils [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.958 2 DEBUG oslo_concurrency.lockutils [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.958 2 DEBUG oslo_concurrency.lockutils [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.958 2 DEBUG nova.compute.manager [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Processing event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.958 2 DEBUG nova.compute.manager [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.959 2 DEBUG oslo_concurrency.lockutils [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.959 2 DEBUG oslo_concurrency.lockutils [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.959 2 DEBUG oslo_concurrency.lockutils [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.959 2 DEBUG nova.compute.manager [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] No waiting events found dispatching network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.959 2 WARNING nova.compute.manager [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received unexpected event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.960 2 DEBUG nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.964 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409816.9646475, c793e384-4ddd-4531-b6fd-172ee2fcbd4d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.965 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.967 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.970 2 INFO nova.virt.libvirt.driver [-] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance spawned successfully.#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.971 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:56:56 np0005466030 nova_compute[230518]: 2025-10-02 12:56:56.993 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:57 np0005466030 nova_compute[230518]: 2025-10-02 12:56:57.002 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:56:57 np0005466030 nova_compute[230518]: 2025-10-02 12:56:57.010 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:57 np0005466030 nova_compute[230518]: 2025-10-02 12:56:57.011 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:57 np0005466030 nova_compute[230518]: 2025-10-02 12:56:57.011 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:57 np0005466030 nova_compute[230518]: 2025-10-02 12:56:57.012 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:57 np0005466030 nova_compute[230518]: 2025-10-02 12:56:57.012 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:57 np0005466030 nova_compute[230518]: 2025-10-02 12:56:57.012 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:56:57 np0005466030 nova_compute[230518]: 2025-10-02 12:56:57.042 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:56:57 np0005466030 nova_compute[230518]: 2025-10-02 12:56:57.072 2 INFO nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Took 8.24 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:56:57 np0005466030 nova_compute[230518]: 2025-10-02 12:56:57.073 2 DEBUG nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:57 np0005466030 nova_compute[230518]: 2025-10-02 12:56:57.134 2 INFO nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Took 9.08 seconds to build instance.#033[00m
Oct  2 08:56:57 np0005466030 nova_compute[230518]: 2025-10-02 12:56:57.149 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:56:58 np0005466030 nova_compute[230518]: 2025-10-02 12:56:58.496 2 INFO nova.compute.manager [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Rescuing#033[00m
Oct  2 08:56:58 np0005466030 nova_compute[230518]: 2025-10-02 12:56:58.497 2 DEBUG oslo_concurrency.lockutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:56:58 np0005466030 nova_compute[230518]: 2025-10-02 12:56:58.497 2 DEBUG oslo_concurrency.lockutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquired lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:56:58 np0005466030 nova_compute[230518]: 2025-10-02 12:56:58.497 2 DEBUG nova.network.neutron [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:56:58 np0005466030 nova_compute[230518]: 2025-10-02 12:56:58.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:58.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:56:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:56:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:58.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:56:59 np0005466030 nova_compute[230518]: 2025-10-02 12:56:59.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:56:59 np0005466030 nova_compute[230518]: 2025-10-02 12:56:59.435 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409804.4339073, f6c0a66d-64f1-484a-ae4e-ece25fddf736 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:56:59 np0005466030 nova_compute[230518]: 2025-10-02 12:56:59.436 2 INFO nova.compute.manager [-] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:56:59 np0005466030 nova_compute[230518]: 2025-10-02 12:56:59.457 2 DEBUG nova.compute.manager [None req-90a00622-e2a1-4949-8013-8557f45bb606 - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #118. Immutable memtables: 0.
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:59.738451) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 118
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409819738503, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 503, "num_deletes": 251, "total_data_size": 648538, "memory_usage": 659192, "flush_reason": "Manual Compaction"}
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #119: started
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409819782448, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 119, "file_size": 427620, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59937, "largest_seqno": 60435, "table_properties": {"data_size": 424910, "index_size": 746, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6701, "raw_average_key_size": 19, "raw_value_size": 419428, "raw_average_value_size": 1205, "num_data_blocks": 32, "num_entries": 348, "num_filter_entries": 348, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409798, "oldest_key_time": 1759409798, "file_creation_time": 1759409819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 44049 microseconds, and 2310 cpu microseconds.
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:59.782502) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #119: 427620 bytes OK
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:59.782525) [db/memtable_list.cc:519] [default] Level-0 commit table #119 started
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:59.871566) [db/memtable_list.cc:722] [default] Level-0 commit table #119: memtable #1 done
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:59.871621) EVENT_LOG_v1 {"time_micros": 1759409819871608, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:59.871647) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 645522, prev total WAL file size 645522, number of live WAL files 2.
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000115.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:59.872410) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [119(417KB)], [117(13MB)]
Oct  2 08:56:59 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409819872478, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [119], "files_L6": [117], "score": -1, "input_data_size": 14226410, "oldest_snapshot_seqno": -1}
Oct  2 08:56:59 np0005466030 nova_compute[230518]: 2025-10-02 12:56:59.995 2 DEBUG nova.network.neutron [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Updating instance_info_cache with network_info: [{"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:57:00 np0005466030 nova_compute[230518]: 2025-10-02 12:57:00.027 2 DEBUG oslo_concurrency.lockutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Releasing lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:57:00 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #120: 8397 keys, 12292727 bytes, temperature: kUnknown
Oct  2 08:57:00 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409820063008, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 120, "file_size": 12292727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12236246, "index_size": 34353, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21061, "raw_key_size": 218669, "raw_average_key_size": 26, "raw_value_size": 12086533, "raw_average_value_size": 1439, "num_data_blocks": 1340, "num_entries": 8397, "num_filter_entries": 8397, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 120, "seqno_to_time_mapping": "N/A"}}
Oct  2 08:57:00 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 08:57:00 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:57:00.063323) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 12292727 bytes
Oct  2 08:57:00 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:57:00.073791) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 74.6 rd, 64.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.2 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(62.0) write-amplify(28.7) OK, records in: 8911, records dropped: 514 output_compression: NoCompression
Oct  2 08:57:00 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:57:00.073830) EVENT_LOG_v1 {"time_micros": 1759409820073813, "job": 74, "event": "compaction_finished", "compaction_time_micros": 190599, "compaction_time_cpu_micros": 34108, "output_level": 6, "num_output_files": 1, "total_output_size": 12292727, "num_input_records": 8911, "num_output_records": 8397, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 08:57:00 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:57:00 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409820074124, "job": 74, "event": "table_file_deletion", "file_number": 119}
Oct  2 08:57:00 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000117.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 08:57:00 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409820077105, "job": 74, "event": "table_file_deletion", "file_number": 117}
Oct  2 08:57:00 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:59.872273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:57:00 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:57:00.077154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:57:00 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:57:00.077160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:57:00 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:57:00.077161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:57:00 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:57:00.077163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:57:00 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:57:00.077164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 08:57:00 np0005466030 nova_compute[230518]: 2025-10-02 12:57:00.303 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 08:57:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:57:00.353 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:00.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:00.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:00 np0005466030 nova_compute[230518]: 2025-10-02 12:57:00.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:01 np0005466030 nova_compute[230518]: 2025-10-02 12:57:01.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:01 np0005466030 podman[291997]: 2025-10-02 12:57:01.821381435 +0000 UTC m=+0.061202551 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:57:01 np0005466030 podman[291996]: 2025-10-02 12:57:01.851228912 +0000 UTC m=+0.091397149 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct  2 08:57:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:02.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:02.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:03 np0005466030 nova_compute[230518]: 2025-10-02 12:57:03.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:04 np0005466030 nova_compute[230518]: 2025-10-02 12:57:04.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:04.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:04.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:57:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:06.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:57:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:06.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:08 np0005466030 nova_compute[230518]: 2025-10-02 12:57:08.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:08.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:08.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:09 np0005466030 nova_compute[230518]: 2025-10-02 12:57:09.091 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:09 np0005466030 nova_compute[230518]: 2025-10-02 12:57:09.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:09 np0005466030 nova_compute[230518]: 2025-10-02 12:57:09.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:09 np0005466030 nova_compute[230518]: 2025-10-02 12:57:09.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:09 np0005466030 nova_compute[230518]: 2025-10-02 12:57:09.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:09 np0005466030 nova_compute[230518]: 2025-10-02 12:57:09.729 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:57:09 np0005466030 nova_compute[230518]: 2025-10-02 12:57:09.729 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:57:10 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4262556935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:57:10 np0005466030 nova_compute[230518]: 2025-10-02 12:57:10.151 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:10 np0005466030 nova_compute[230518]: 2025-10-02 12:57:10.276 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:57:10 np0005466030 nova_compute[230518]: 2025-10-02 12:57:10.277 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:57:10 np0005466030 nova_compute[230518]: 2025-10-02 12:57:10.354 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  2 08:57:10 np0005466030 nova_compute[230518]: 2025-10-02 12:57:10.434 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:57:10 np0005466030 nova_compute[230518]: 2025-10-02 12:57:10.435 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4144MB free_disk=20.855342864990234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:57:10 np0005466030 nova_compute[230518]: 2025-10-02 12:57:10.435 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:10 np0005466030 nova_compute[230518]: 2025-10-02 12:57:10.436 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:10 np0005466030 nova_compute[230518]: 2025-10-02 12:57:10.536 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance c793e384-4ddd-4531-b6fd-172ee2fcbd4d actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:57:10 np0005466030 nova_compute[230518]: 2025-10-02 12:57:10.536 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:57:10 np0005466030 nova_compute[230518]: 2025-10-02 12:57:10.536 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:57:10 np0005466030 nova_compute[230518]: 2025-10-02 12:57:10.578 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:10.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:10.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:57:10 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2546884810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:57:11 np0005466030 nova_compute[230518]: 2025-10-02 12:57:11.004 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:11 np0005466030 nova_compute[230518]: 2025-10-02 12:57:11.009 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:57:11 np0005466030 nova_compute[230518]: 2025-10-02 12:57:11.040 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:57:11 np0005466030 nova_compute[230518]: 2025-10-02 12:57:11.076 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:57:11 np0005466030 nova_compute[230518]: 2025-10-02 12:57:11.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:12.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:12 np0005466030 podman[292089]: 2025-10-02 12:57:12.823306597 +0000 UTC m=+0.072946630 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:57:12 np0005466030 podman[292090]: 2025-10-02 12:57:12.833890949 +0000 UTC m=+0.074918401 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd)
Oct  2 08:57:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:12.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:13 np0005466030 nova_compute[230518]: 2025-10-02 12:57:13.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:14 np0005466030 nova_compute[230518]: 2025-10-02 12:57:14.035 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:14 np0005466030 nova_compute[230518]: 2025-10-02 12:57:14.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:14 np0005466030 nova_compute[230518]: 2025-10-02 12:57:14.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:14.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:57:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:14.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:57:15 np0005466030 nova_compute[230518]: 2025-10-02 12:57:15.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:15 np0005466030 nova_compute[230518]: 2025-10-02 12:57:15.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:57:16 np0005466030 nova_compute[230518]: 2025-10-02 12:57:16.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:16 np0005466030 nova_compute[230518]: 2025-10-02 12:57:16.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:16 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct  2 08:57:16 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct  2 08:57:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:16.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:16.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:17 np0005466030 kernel: tap1ba1c69e-54 (unregistering): left promiscuous mode
Oct  2 08:57:17 np0005466030 NetworkManager[44960]: <info>  [1759409837.3578] device (tap1ba1c69e-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:57:17Z|00659|binding|INFO|Releasing lport 1ba1c69e-5414-4423-aa38-aea149f6f506 from this chassis (sb_readonly=0)
Oct  2 08:57:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:57:17Z|00660|binding|INFO|Setting lport 1ba1c69e-5414-4423-aa38-aea149f6f506 down in Southbound
Oct  2 08:57:17 np0005466030 ovn_controller[129257]: 2025-10-02T12:57:17Z|00661|binding|INFO|Removing iface tap1ba1c69e-54 ovn-installed in OVS
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:17 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct  2 08:57:17 np0005466030 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d0000009c.scope: Deactivated successfully.
Oct  2 08:57:17 np0005466030 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d0000009c.scope: Consumed 14.133s CPU time.
Oct  2 08:57:17 np0005466030 systemd-machined[188247]: Machine qemu-76-instance-0000009c terminated.
Oct  2 08:57:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:57:17.457 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:9d:59 10.100.0.3'], port_security=['fa:16:3e:27:9d:59 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c793e384-4ddd-4531-b6fd-172ee2fcbd4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8f55f9d9ed144629bd9a03edb020c4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a444c762-477f-4077-ba6b-7c28af4142c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7799346f-74e1-4324-b5da-a7c921979851, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=1ba1c69e-5414-4423-aa38-aea149f6f506) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:57:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:57:17.458 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 1ba1c69e-5414-4423-aa38-aea149f6f506 in datapath a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 unbound from our chassis#033[00m
Oct  2 08:57:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:57:17.459 138374 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 08:57:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:57:17.461 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[468bb252-a109-4523-a0d0-ba3dce55f5d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.607 2 INFO nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance shutdown successfully after 17 seconds.#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.614 2 INFO nova.virt.libvirt.driver [-] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance destroyed successfully.#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.614 2 DEBUG nova.objects.instance [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'numa_topology' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.675 2 INFO nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Attempting rescue#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.676 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.679 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.680 2 INFO nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Creating image(s)#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.705 2 DEBUG nova.storage.rbd_utils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.708 2 DEBUG nova.objects.instance [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'trusted_certs' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.803 2 DEBUG nova.storage.rbd_utils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.834 2 DEBUG nova.storage.rbd_utils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.838 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:17 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.876 2 DEBUG nova.compute.manager [req-dd723f86-88a2-4b08-a3c3-a89690f65653 req-7740d2b6-f687-4562-81d8-8e5971f2b9d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-unplugged-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.876 2 DEBUG oslo_concurrency.lockutils [req-dd723f86-88a2-4b08-a3c3-a89690f65653 req-7740d2b6-f687-4562-81d8-8e5971f2b9d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.877 2 DEBUG oslo_concurrency.lockutils [req-dd723f86-88a2-4b08-a3c3-a89690f65653 req-7740d2b6-f687-4562-81d8-8e5971f2b9d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.877 2 DEBUG oslo_concurrency.lockutils [req-dd723f86-88a2-4b08-a3c3-a89690f65653 req-7740d2b6-f687-4562-81d8-8e5971f2b9d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.877 2 DEBUG nova.compute.manager [req-dd723f86-88a2-4b08-a3c3-a89690f65653 req-7740d2b6-f687-4562-81d8-8e5971f2b9d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] No waiting events found dispatching network-vif-unplugged-1ba1c69e-5414-4423-aa38-aea149f6f506 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.877 2 WARNING nova.compute.manager [req-dd723f86-88a2-4b08-a3c3-a89690f65653 req-7740d2b6-f687-4562-81d8-8e5971f2b9d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received unexpected event network-vif-unplugged-1ba1c69e-5414-4423-aa38-aea149f6f506 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.919 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.920 2 DEBUG oslo_concurrency.lockutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.921 2 DEBUG oslo_concurrency.lockutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.921 2 DEBUG oslo_concurrency.lockutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.946 2 DEBUG nova.storage.rbd_utils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:17 np0005466030 nova_compute[230518]: 2025-10-02 12:57:17.950 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:18 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Oct  2 08:57:18 np0005466030 nova_compute[230518]: 2025-10-02 12:57:18.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:18.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:18 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct  2 08:57:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:18.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.488 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.489 2 DEBUG nova.objects.instance [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'migration_context' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.521 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.522 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Start _get_guest_xml network_info=[{"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1859173317-network", "vif_mac": "fa:16:3e:27:9d:59"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.522 2 DEBUG nova.objects.instance [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'resources' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.565 2 WARNING nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.574 2 DEBUG nova.virt.libvirt.host [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.574 2 DEBUG nova.virt.libvirt.host [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.578 2 DEBUG nova.virt.libvirt.host [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.578 2 DEBUG nova.virt.libvirt.host [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.579 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.580 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.580 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.580 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.581 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.581 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.581 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.581 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.581 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.582 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.582 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.582 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.582 2 DEBUG nova.objects.instance [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'vcpu_model' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.611 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.975 2 DEBUG nova.compute.manager [req-56c8e122-5259-443b-b65d-27ac42027ed7 req-f2989e73-d74b-403b-ab17-bef6d5ad6202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.976 2 DEBUG oslo_concurrency.lockutils [req-56c8e122-5259-443b-b65d-27ac42027ed7 req-f2989e73-d74b-403b-ab17-bef6d5ad6202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.977 2 DEBUG oslo_concurrency.lockutils [req-56c8e122-5259-443b-b65d-27ac42027ed7 req-f2989e73-d74b-403b-ab17-bef6d5ad6202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.977 2 DEBUG oslo_concurrency.lockutils [req-56c8e122-5259-443b-b65d-27ac42027ed7 req-f2989e73-d74b-403b-ab17-bef6d5ad6202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.977 2 DEBUG nova.compute.manager [req-56c8e122-5259-443b-b65d-27ac42027ed7 req-f2989e73-d74b-403b-ab17-bef6d5ad6202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] No waiting events found dispatching network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:57:19 np0005466030 nova_compute[230518]: 2025-10-02 12:57:19.978 2 WARNING nova.compute.manager [req-56c8e122-5259-443b-b65d-27ac42027ed7 req-f2989e73-d74b-403b-ab17-bef6d5ad6202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received unexpected event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:57:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:57:20 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1416982405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:57:20 np0005466030 nova_compute[230518]: 2025-10-02 12:57:20.074 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:20 np0005466030 nova_compute[230518]: 2025-10-02 12:57:20.076 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:57:20 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/12721875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:57:20 np0005466030 nova_compute[230518]: 2025-10-02 12:57:20.556 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:20 np0005466030 nova_compute[230518]: 2025-10-02 12:57:20.557 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:20.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:20.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:57:20 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3284991121' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:57:21 np0005466030 nova_compute[230518]: 2025-10-02 12:57:21.002 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:21 np0005466030 nova_compute[230518]: 2025-10-02 12:57:21.004 2 DEBUG nova.virt.libvirt.vif [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:56:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-501698818',display_name='tempest-ServerRescueTestJSON-server-501698818',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-501698818',id=156,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:56:57Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d8f55f9d9ed144629bd9a03edb020c4f',ramdisk_id='',reservation_id='r-qzbsd8d7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-791200975',owner_user_name='tempest-ServerRescueTestJSON-791200975-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:56:57Z,user_data=None,user_id='dfe96a8fa48c4243b6262a0359f5b208',uuid=c793e384-4ddd-4531-b6fd-172ee2fcbd4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1859173317-network", "vif_mac": "fa:16:3e:27:9d:59"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:57:21 np0005466030 nova_compute[230518]: 2025-10-02 12:57:21.004 2 DEBUG nova.network.os_vif_util [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converting VIF {"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1859173317-network", "vif_mac": "fa:16:3e:27:9d:59"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:57:21 np0005466030 nova_compute[230518]: 2025-10-02 12:57:21.005 2 DEBUG nova.network.os_vif_util [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:27:9d:59,bridge_name='br-int',has_traffic_filtering=True,id=1ba1c69e-5414-4423-aa38-aea149f6f506,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba1c69e-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:57:21 np0005466030 nova_compute[230518]: 2025-10-02 12:57:21.006 2 DEBUG nova.objects.instance [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'pci_devices' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:21 np0005466030 nova_compute[230518]: 2025-10-02 12:57:21.029 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:  <uuid>c793e384-4ddd-4531-b6fd-172ee2fcbd4d</uuid>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:  <name>instance-0000009c</name>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServerRescueTestJSON-server-501698818</nova:name>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:57:19</nova:creationTime>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <nova:user uuid="dfe96a8fa48c4243b6262a0359f5b208">tempest-ServerRescueTestJSON-791200975-project-member</nova:user>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <nova:project uuid="d8f55f9d9ed144629bd9a03edb020c4f">tempest-ServerRescueTestJSON-791200975</nova:project>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <nova:port uuid="1ba1c69e-5414-4423-aa38-aea149f6f506">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <entry name="serial">c793e384-4ddd-4531-b6fd-172ee2fcbd4d</entry>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <entry name="uuid">c793e384-4ddd-4531-b6fd-172ee2fcbd4d</entry>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.rescue">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <target dev="vdb" bus="virtio"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config.rescue">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:27:9d:59"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <target dev="tap1ba1c69e-54"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/console.log" append="off"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:57:21 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:57:21 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:57:21 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:57:21 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:57:21 np0005466030 nova_compute[230518]: 2025-10-02 12:57:21.036 2 INFO nova.virt.libvirt.driver [-] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance destroyed successfully.#033[00m
Oct  2 08:57:21 np0005466030 nova_compute[230518]: 2025-10-02 12:57:21.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:21 np0005466030 nova_compute[230518]: 2025-10-02 12:57:21.117 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:57:21 np0005466030 nova_compute[230518]: 2025-10-02 12:57:21.118 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:57:21 np0005466030 nova_compute[230518]: 2025-10-02 12:57:21.118 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:57:21 np0005466030 nova_compute[230518]: 2025-10-02 12:57:21.118 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No VIF found with MAC fa:16:3e:27:9d:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:57:21 np0005466030 nova_compute[230518]: 2025-10-02 12:57:21.119 2 INFO nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Using config drive#033[00m
Oct  2 08:57:21 np0005466030 nova_compute[230518]: 2025-10-02 12:57:21.146 2 DEBUG nova.storage.rbd_utils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:21 np0005466030 nova_compute[230518]: 2025-10-02 12:57:21.197 2 DEBUG nova.objects.instance [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'ec2_ids' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:21 np0005466030 nova_compute[230518]: 2025-10-02 12:57:21.237 2 DEBUG nova.objects.instance [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'keypairs' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:21 np0005466030 nova_compute[230518]: 2025-10-02 12:57:21.839 2 INFO nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Creating config drive at /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config.rescue#033[00m
Oct  2 08:57:21 np0005466030 nova_compute[230518]: 2025-10-02 12:57:21.844 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuc5iuqij execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:21 np0005466030 nova_compute[230518]: 2025-10-02 12:57:21.980 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuc5iuqij" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:22 np0005466030 nova_compute[230518]: 2025-10-02 12:57:22.012 2 DEBUG nova.storage.rbd_utils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:57:22 np0005466030 nova_compute[230518]: 2025-10-02 12:57:22.017 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config.rescue c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:57:22 np0005466030 nova_compute[230518]: 2025-10-02 12:57:22.465 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config.rescue c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:57:22 np0005466030 nova_compute[230518]: 2025-10-02 12:57:22.467 2 INFO nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Deleting local config drive /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config.rescue because it was imported into RBD.#033[00m
Oct  2 08:57:22 np0005466030 kernel: tap1ba1c69e-54: entered promiscuous mode
Oct  2 08:57:22 np0005466030 NetworkManager[44960]: <info>  [1759409842.5443] manager: (tap1ba1c69e-54): new Tun device (/org/freedesktop/NetworkManager/Devices/310)
Oct  2 08:57:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:57:22Z|00662|binding|INFO|Claiming lport 1ba1c69e-5414-4423-aa38-aea149f6f506 for this chassis.
Oct  2 08:57:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:57:22Z|00663|binding|INFO|1ba1c69e-5414-4423-aa38-aea149f6f506: Claiming fa:16:3e:27:9d:59 10.100.0.3
Oct  2 08:57:22 np0005466030 nova_compute[230518]: 2025-10-02 12:57:22.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:57:22.560 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:9d:59 10.100.0.3'], port_security=['fa:16:3e:27:9d:59 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c793e384-4ddd-4531-b6fd-172ee2fcbd4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8f55f9d9ed144629bd9a03edb020c4f', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a444c762-477f-4077-ba6b-7c28af4142c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7799346f-74e1-4324-b5da-a7c921979851, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=1ba1c69e-5414-4423-aa38-aea149f6f506) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:57:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:57:22.561 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 1ba1c69e-5414-4423-aa38-aea149f6f506 in datapath a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 bound to our chassis#033[00m
Oct  2 08:57:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:57:22.562 138374 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 08:57:22 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:57:22.563 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b53bb1-7651-413e-9853-65f545cd6ce9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:57:22 np0005466030 nova_compute[230518]: 2025-10-02 12:57:22.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:57:22Z|00664|binding|INFO|Setting lport 1ba1c69e-5414-4423-aa38-aea149f6f506 ovn-installed in OVS
Oct  2 08:57:22 np0005466030 ovn_controller[129257]: 2025-10-02T12:57:22Z|00665|binding|INFO|Setting lport 1ba1c69e-5414-4423-aa38-aea149f6f506 up in Southbound
Oct  2 08:57:22 np0005466030 nova_compute[230518]: 2025-10-02 12:57:22.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:22 np0005466030 nova_compute[230518]: 2025-10-02 12:57:22.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:22 np0005466030 systemd-machined[188247]: New machine qemu-77-instance-0000009c.
Oct  2 08:57:22 np0005466030 systemd[1]: Started Virtual Machine qemu-77-instance-0000009c.
Oct  2 08:57:22 np0005466030 systemd-udevd[292380]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:57:22 np0005466030 NetworkManager[44960]: <info>  [1759409842.6251] device (tap1ba1c69e-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:57:22 np0005466030 NetworkManager[44960]: <info>  [1759409842.6264] device (tap1ba1c69e-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:57:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:22.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:22.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:23 np0005466030 nova_compute[230518]: 2025-10-02 12:57:23.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:57:23 np0005466030 nova_compute[230518]: 2025-10-02 12:57:23.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:57:23 np0005466030 nova_compute[230518]: 2025-10-02 12:57:23.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:57:23 np0005466030 nova_compute[230518]: 2025-10-02 12:57:23.143 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:57:23 np0005466030 nova_compute[230518]: 2025-10-02 12:57:23.143 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:57:23 np0005466030 nova_compute[230518]: 2025-10-02 12:57:23.144 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:57:23 np0005466030 nova_compute[230518]: 2025-10-02 12:57:23.144 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:57:23 np0005466030 nova_compute[230518]: 2025-10-02 12:57:23.361 2 DEBUG nova.compute.manager [req-c8a12578-ef72-42bb-96eb-63b5e30faf07 req-ba822529-b2be-47bc-b1e0-6004ef7b9abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:23 np0005466030 nova_compute[230518]: 2025-10-02 12:57:23.361 2 DEBUG oslo_concurrency.lockutils [req-c8a12578-ef72-42bb-96eb-63b5e30faf07 req-ba822529-b2be-47bc-b1e0-6004ef7b9abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:23 np0005466030 nova_compute[230518]: 2025-10-02 12:57:23.361 2 DEBUG oslo_concurrency.lockutils [req-c8a12578-ef72-42bb-96eb-63b5e30faf07 req-ba822529-b2be-47bc-b1e0-6004ef7b9abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:23 np0005466030 nova_compute[230518]: 2025-10-02 12:57:23.362 2 DEBUG oslo_concurrency.lockutils [req-c8a12578-ef72-42bb-96eb-63b5e30faf07 req-ba822529-b2be-47bc-b1e0-6004ef7b9abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:23 np0005466030 nova_compute[230518]: 2025-10-02 12:57:23.362 2 DEBUG nova.compute.manager [req-c8a12578-ef72-42bb-96eb-63b5e30faf07 req-ba822529-b2be-47bc-b1e0-6004ef7b9abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] No waiting events found dispatching network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:57:23 np0005466030 nova_compute[230518]: 2025-10-02 12:57:23.362 2 WARNING nova.compute.manager [req-c8a12578-ef72-42bb-96eb-63b5e30faf07 req-ba822529-b2be-47bc-b1e0-6004ef7b9abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received unexpected event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 for instance with vm_state active and task_state rescuing.#033[00m
Oct  2 08:57:23 np0005466030 nova_compute[230518]: 2025-10-02 12:57:23.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:24 np0005466030 nova_compute[230518]: 2025-10-02 12:57:24.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:24 np0005466030 nova_compute[230518]: 2025-10-02 12:57:24.293 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for c793e384-4ddd-4531-b6fd-172ee2fcbd4d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:57:24 np0005466030 nova_compute[230518]: 2025-10-02 12:57:24.293 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409844.2926095, c793e384-4ddd-4531-b6fd-172ee2fcbd4d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:57:24 np0005466030 nova_compute[230518]: 2025-10-02 12:57:24.293 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:57:24 np0005466030 nova_compute[230518]: 2025-10-02 12:57:24.297 2 DEBUG nova.compute.manager [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:57:24 np0005466030 nova_compute[230518]: 2025-10-02 12:57:24.334 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:57:24 np0005466030 nova_compute[230518]: 2025-10-02 12:57:24.337 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:57:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:24 np0005466030 nova_compute[230518]: 2025-10-02 12:57:24.372 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Oct  2 08:57:24 np0005466030 nova_compute[230518]: 2025-10-02 12:57:24.372 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409844.2936769, c793e384-4ddd-4531-b6fd-172ee2fcbd4d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:57:24 np0005466030 nova_compute[230518]: 2025-10-02 12:57:24.372 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] VM Started (Lifecycle Event)#033[00m
Oct  2 08:57:24 np0005466030 nova_compute[230518]: 2025-10-02 12:57:24.403 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:57:24 np0005466030 nova_compute[230518]: 2025-10-02 12:57:24.407 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:57:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:24.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:24.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:25 np0005466030 nova_compute[230518]: 2025-10-02 12:57:25.372 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Updating instance_info_cache with network_info: [{"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:57:25 np0005466030 nova_compute[230518]: 2025-10-02 12:57:25.730 2 DEBUG nova.compute.manager [req-0c1d7f07-11ad-494d-a7b2-173d855ef61d req-1de4b7fd-bb84-4715-8221-e5bc9c237080 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:57:25 np0005466030 nova_compute[230518]: 2025-10-02 12:57:25.730 2 DEBUG oslo_concurrency.lockutils [req-0c1d7f07-11ad-494d-a7b2-173d855ef61d req-1de4b7fd-bb84-4715-8221-e5bc9c237080 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:25 np0005466030 nova_compute[230518]: 2025-10-02 12:57:25.730 2 DEBUG oslo_concurrency.lockutils [req-0c1d7f07-11ad-494d-a7b2-173d855ef61d req-1de4b7fd-bb84-4715-8221-e5bc9c237080 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:25 np0005466030 nova_compute[230518]: 2025-10-02 12:57:25.730 2 DEBUG oslo_concurrency.lockutils [req-0c1d7f07-11ad-494d-a7b2-173d855ef61d req-1de4b7fd-bb84-4715-8221-e5bc9c237080 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:25 np0005466030 nova_compute[230518]: 2025-10-02 12:57:25.730 2 DEBUG nova.compute.manager [req-0c1d7f07-11ad-494d-a7b2-173d855ef61d req-1de4b7fd-bb84-4715-8221-e5bc9c237080 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] No waiting events found dispatching network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:57:25 np0005466030 nova_compute[230518]: 2025-10-02 12:57:25.731 2 WARNING nova.compute.manager [req-0c1d7f07-11ad-494d-a7b2-173d855ef61d req-1de4b7fd-bb84-4715-8221-e5bc9c237080 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received unexpected event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 for instance with vm_state rescued and task_state None.#033[00m
Oct  2 08:57:25 np0005466030 nova_compute[230518]: 2025-10-02 12:57:25.770 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:57:25 np0005466030 nova_compute[230518]: 2025-10-02 12:57:25.770 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:57:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:57:25.954 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:57:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:57:25.954 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:57:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:57:25.954 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:57:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:57:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:26.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:57:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:26.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:28 np0005466030 nova_compute[230518]: 2025-10-02 12:57:28.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:28.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:28.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:29 np0005466030 nova_compute[230518]: 2025-10-02 12:57:29.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:30.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:30.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:32.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:32 np0005466030 podman[292451]: 2025-10-02 12:57:32.806391706 +0000 UTC m=+0.054823361 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 08:57:32 np0005466030 podman[292450]: 2025-10-02 12:57:32.849024723 +0000 UTC m=+0.097460139 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 08:57:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:57:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:32.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:57:33 np0005466030 nova_compute[230518]: 2025-10-02 12:57:33.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e347 e347: 3 total, 3 up, 3 in
Oct  2 08:57:34 np0005466030 nova_compute[230518]: 2025-10-02 12:57:34.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:34.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:34.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:57:35 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1591168438' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:57:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:36.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:36.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:38 np0005466030 nova_compute[230518]: 2025-10-02 12:57:38.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:38.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:38.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:39 np0005466030 nova_compute[230518]: 2025-10-02 12:57:39.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:40.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:40.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:42.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:57:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:42.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:57:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e348 e348: 3 total, 3 up, 3 in
Oct  2 08:57:43 np0005466030 nova_compute[230518]: 2025-10-02 12:57:43.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:43 np0005466030 podman[292496]: 2025-10-02 12:57:43.802983362 +0000 UTC m=+0.052424960 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 08:57:43 np0005466030 podman[292495]: 2025-10-02 12:57:43.820537057 +0000 UTC m=+0.068610752 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 08:57:44 np0005466030 nova_compute[230518]: 2025-10-02 12:57:44.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:44.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:44.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e349 e349: 3 total, 3 up, 3 in
Oct  2 08:57:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:46.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:57:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:46.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:57:48 np0005466030 nova_compute[230518]: 2025-10-02 12:57:48.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:48.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:48.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:49 np0005466030 nova_compute[230518]: 2025-10-02 12:57:49.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:50.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:50.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:52 np0005466030 nova_compute[230518]: 2025-10-02 12:57:52.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:57:52.736 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:57:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:57:52.738 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:57:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:52.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 08:57:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:52.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 08:57:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 e350: 3 total, 3 up, 3 in
Oct  2 08:57:53 np0005466030 nova_compute[230518]: 2025-10-02 12:57:53.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:54 np0005466030 nova_compute[230518]: 2025-10-02 12:57:54.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:54 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:57:54 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:57:54 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:57:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:57:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:57:54.740 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:57:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:54.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:54.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:56.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:56.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:58 np0005466030 nova_compute[230518]: 2025-10-02 12:57:58.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:58.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:57:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:57:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:59.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:57:59 np0005466030 nova_compute[230518]: 2025-10-02 12:57:59.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:57:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:00.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:01.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:02.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:03.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:03 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:58:03 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:58:03 np0005466030 nova_compute[230518]: 2025-10-02 12:58:03.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:03 np0005466030 podman[292718]: 2025-10-02 12:58:03.815034889 +0000 UTC m=+0.056991252 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 08:58:03 np0005466030 podman[292717]: 2025-10-02 12:58:03.837182578 +0000 UTC m=+0.084634581 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 08:58:04 np0005466030 nova_compute[230518]: 2025-10-02 12:58:04.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 08:58:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:04.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 08:58:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:05.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 08:58:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2688330386' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 08:58:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 08:58:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2688330386' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 08:58:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:06.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:07.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:08 np0005466030 nova_compute[230518]: 2025-10-02 12:58:08.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:08.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:58:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:09.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:58:09 np0005466030 nova_compute[230518]: 2025-10-02 12:58:09.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:58:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:10.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:58:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:58:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:11.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:58:11 np0005466030 nova_compute[230518]: 2025-10-02 12:58:11.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:11 np0005466030 nova_compute[230518]: 2025-10-02 12:58:11.074 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:11 np0005466030 nova_compute[230518]: 2025-10-02 12:58:11.075 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:11 np0005466030 nova_compute[230518]: 2025-10-02 12:58:11.075 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:11 np0005466030 nova_compute[230518]: 2025-10-02 12:58:11.075 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:58:11 np0005466030 nova_compute[230518]: 2025-10-02 12:58:11.075 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:58:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:58:11 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4122840404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:58:11 np0005466030 nova_compute[230518]: 2025-10-02 12:58:11.520 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:58:11 np0005466030 nova_compute[230518]: 2025-10-02 12:58:11.590 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:58:11 np0005466030 nova_compute[230518]: 2025-10-02 12:58:11.591 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:58:11 np0005466030 nova_compute[230518]: 2025-10-02 12:58:11.591 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:58:11 np0005466030 nova_compute[230518]: 2025-10-02 12:58:11.720 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:58:11 np0005466030 nova_compute[230518]: 2025-10-02 12:58:11.721 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4106MB free_disk=20.7608642578125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:58:11 np0005466030 nova_compute[230518]: 2025-10-02 12:58:11.721 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:11 np0005466030 nova_compute[230518]: 2025-10-02 12:58:11.721 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:11 np0005466030 nova_compute[230518]: 2025-10-02 12:58:11.888 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance c793e384-4ddd-4531-b6fd-172ee2fcbd4d actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:58:11 np0005466030 nova_compute[230518]: 2025-10-02 12:58:11.889 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:58:11 np0005466030 nova_compute[230518]: 2025-10-02 12:58:11.889 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:58:12 np0005466030 nova_compute[230518]: 2025-10-02 12:58:12.195 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:58:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:58:12 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1559841655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:58:12 np0005466030 nova_compute[230518]: 2025-10-02 12:58:12.609 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:58:12 np0005466030 nova_compute[230518]: 2025-10-02 12:58:12.614 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:58:12 np0005466030 nova_compute[230518]: 2025-10-02 12:58:12.631 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:58:12 np0005466030 nova_compute[230518]: 2025-10-02 12:58:12.667 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:58:12 np0005466030 nova_compute[230518]: 2025-10-02 12:58:12.668 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:12.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 08:58:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:13.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 08:58:13 np0005466030 nova_compute[230518]: 2025-10-02 12:58:13.663 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:13 np0005466030 nova_compute[230518]: 2025-10-02 12:58:13.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:14 np0005466030 nova_compute[230518]: 2025-10-02 12:58:14.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:58:14 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2625827684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:58:14 np0005466030 podman[292807]: 2025-10-02 12:58:14.802829124 +0000 UTC m=+0.051645666 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:58:14 np0005466030 podman[292806]: 2025-10-02 12:58:14.806035764 +0000 UTC m=+0.058289483 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:58:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:14.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:15.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:15 np0005466030 nova_compute[230518]: 2025-10-02 12:58:15.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:16 np0005466030 nova_compute[230518]: 2025-10-02 12:58:16.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:16 np0005466030 nova_compute[230518]: 2025-10-02 12:58:16.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:16 np0005466030 nova_compute[230518]: 2025-10-02 12:58:16.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:58:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:58:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:16.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:58:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:17.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:17 np0005466030 nova_compute[230518]: 2025-10-02 12:58:17.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:17 np0005466030 nova_compute[230518]: 2025-10-02 12:58:17.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:18 np0005466030 nova_compute[230518]: 2025-10-02 12:58:18.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:18.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:19.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:19 np0005466030 nova_compute[230518]: 2025-10-02 12:58:19.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:58:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:20.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:58:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:21.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:21 np0005466030 nova_compute[230518]: 2025-10-02 12:58:21.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:22.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:58:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:23.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:58:23 np0005466030 nova_compute[230518]: 2025-10-02 12:58:23.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:23 np0005466030 nova_compute[230518]: 2025-10-02 12:58:23.797 2 DEBUG oslo_concurrency.lockutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:23 np0005466030 nova_compute[230518]: 2025-10-02 12:58:23.798 2 DEBUG oslo_concurrency.lockutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:23 np0005466030 nova_compute[230518]: 2025-10-02 12:58:23.798 2 DEBUG oslo_concurrency.lockutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:23 np0005466030 nova_compute[230518]: 2025-10-02 12:58:23.798 2 DEBUG oslo_concurrency.lockutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:23 np0005466030 nova_compute[230518]: 2025-10-02 12:58:23.798 2 DEBUG oslo_concurrency.lockutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:23 np0005466030 nova_compute[230518]: 2025-10-02 12:58:23.800 2 INFO nova.compute.manager [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Terminating instance#033[00m
Oct  2 08:58:23 np0005466030 nova_compute[230518]: 2025-10-02 12:58:23.800 2 DEBUG nova.compute.manager [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.071 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.072 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 08:58:24 np0005466030 kernel: tap1ba1c69e-54 (unregistering): left promiscuous mode
Oct  2 08:58:24 np0005466030 NetworkManager[44960]: <info>  [1759409904.2542] device (tap1ba1c69e-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:24 np0005466030 ovn_controller[129257]: 2025-10-02T12:58:24Z|00666|binding|INFO|Releasing lport 1ba1c69e-5414-4423-aa38-aea149f6f506 from this chassis (sb_readonly=0)
Oct  2 08:58:24 np0005466030 ovn_controller[129257]: 2025-10-02T12:58:24Z|00667|binding|INFO|Setting lport 1ba1c69e-5414-4423-aa38-aea149f6f506 down in Southbound
Oct  2 08:58:24 np0005466030 ovn_controller[129257]: 2025-10-02T12:58:24Z|00668|binding|INFO|Removing iface tap1ba1c69e-54 ovn-installed in OVS
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:24.266 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:9d:59 10.100.0.3'], port_security=['fa:16:3e:27:9d:59 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c793e384-4ddd-4531-b6fd-172ee2fcbd4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8f55f9d9ed144629bd9a03edb020c4f', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a444c762-477f-4077-ba6b-7c28af4142c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7799346f-74e1-4324-b5da-a7c921979851, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=1ba1c69e-5414-4423-aa38-aea149f6f506) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:58:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:24.267 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 1ba1c69e-5414-4423-aa38-aea149f6f506 in datapath a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 unbound from our chassis#033[00m
Oct  2 08:58:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:24.268 138374 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  2 08:58:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:24.269 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cf3eb9c8-7f1d-4dac-bc20-267f74184d95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:24 np0005466030 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000009c.scope: Deactivated successfully.
Oct  2 08:58:24 np0005466030 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000009c.scope: Consumed 16.007s CPU time.
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:24 np0005466030 systemd-machined[188247]: Machine qemu-77-instance-0000009c terminated.
Oct  2 08:58:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.438 2 INFO nova.virt.libvirt.driver [-] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance destroyed successfully.#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.439 2 DEBUG nova.objects.instance [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'resources' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.463 2 DEBUG nova.virt.libvirt.vif [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:56:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-501698818',display_name='tempest-ServerRescueTestJSON-server-501698818',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-501698818',id=156,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:24Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d8f55f9d9ed144629bd9a03edb020c4f',ramdisk_id='',reservation_id='r-qzbsd8d7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-791200975',owner_user_name='tempest-ServerRescueTestJSON-791200975-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:57:24Z,user_data=None,user_id='dfe96a8fa48c4243b6262a0359f5b208',uuid=c793e384-4ddd-4531-b6fd-172ee2fcbd4d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.463 2 DEBUG nova.network.os_vif_util [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converting VIF {"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.464 2 DEBUG nova.network.os_vif_util [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:27:9d:59,bridge_name='br-int',has_traffic_filtering=True,id=1ba1c69e-5414-4423-aa38-aea149f6f506,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba1c69e-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.464 2 DEBUG os_vif [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:9d:59,bridge_name='br-int',has_traffic_filtering=True,id=1ba1c69e-5414-4423-aa38-aea149f6f506,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba1c69e-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.468 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ba1c69e-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.476 2 DEBUG nova.compute.manager [req-3315c278-d5f0-4cae-89e4-3fae5adf3f9f req-22699a62-7b38-4344-91f6-1c82f68ca1e7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-unplugged-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.476 2 DEBUG oslo_concurrency.lockutils [req-3315c278-d5f0-4cae-89e4-3fae5adf3f9f req-22699a62-7b38-4344-91f6-1c82f68ca1e7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.478 2 DEBUG oslo_concurrency.lockutils [req-3315c278-d5f0-4cae-89e4-3fae5adf3f9f req-22699a62-7b38-4344-91f6-1c82f68ca1e7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.478 2 DEBUG oslo_concurrency.lockutils [req-3315c278-d5f0-4cae-89e4-3fae5adf3f9f req-22699a62-7b38-4344-91f6-1c82f68ca1e7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.478 2 DEBUG nova.compute.manager [req-3315c278-d5f0-4cae-89e4-3fae5adf3f9f req-22699a62-7b38-4344-91f6-1c82f68ca1e7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] No waiting events found dispatching network-vif-unplugged-1ba1c69e-5414-4423-aa38-aea149f6f506 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.479 2 DEBUG nova.compute.manager [req-3315c278-d5f0-4cae-89e4-3fae5adf3f9f req-22699a62-7b38-4344-91f6-1c82f68ca1e7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-unplugged-1ba1c69e-5414-4423-aa38-aea149f6f506 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:58:24 np0005466030 nova_compute[230518]: 2025-10-02 12:58:24.480 2 INFO os_vif [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:9d:59,bridge_name='br-int',has_traffic_filtering=True,id=1ba1c69e-5414-4423-aa38-aea149f6f506,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba1c69e-54')#033[00m
Oct  2 08:58:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:58:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:24.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:58:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:25.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:25.955 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:25.955 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:25.955 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:26 np0005466030 nova_compute[230518]: 2025-10-02 12:58:26.598 2 DEBUG nova.compute.manager [req-394f9dfc-7249-4008-8ac1-4377debbb329 req-6445f42d-6e6b-4f8a-b3f9-219e97e5323d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:26 np0005466030 nova_compute[230518]: 2025-10-02 12:58:26.599 2 DEBUG oslo_concurrency.lockutils [req-394f9dfc-7249-4008-8ac1-4377debbb329 req-6445f42d-6e6b-4f8a-b3f9-219e97e5323d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:26 np0005466030 nova_compute[230518]: 2025-10-02 12:58:26.599 2 DEBUG oslo_concurrency.lockutils [req-394f9dfc-7249-4008-8ac1-4377debbb329 req-6445f42d-6e6b-4f8a-b3f9-219e97e5323d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:26 np0005466030 nova_compute[230518]: 2025-10-02 12:58:26.599 2 DEBUG oslo_concurrency.lockutils [req-394f9dfc-7249-4008-8ac1-4377debbb329 req-6445f42d-6e6b-4f8a-b3f9-219e97e5323d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:26 np0005466030 nova_compute[230518]: 2025-10-02 12:58:26.599 2 DEBUG nova.compute.manager [req-394f9dfc-7249-4008-8ac1-4377debbb329 req-6445f42d-6e6b-4f8a-b3f9-219e97e5323d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] No waiting events found dispatching network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:58:26 np0005466030 nova_compute[230518]: 2025-10-02 12:58:26.599 2 WARNING nova.compute.manager [req-394f9dfc-7249-4008-8ac1-4377debbb329 req-6445f42d-6e6b-4f8a-b3f9-219e97e5323d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received unexpected event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 for instance with vm_state rescued and task_state deleting.#033[00m
Oct  2 08:58:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:26.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:27.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:28 np0005466030 nova_compute[230518]: 2025-10-02 12:58:28.838 2 INFO nova.virt.libvirt.driver [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Deleting instance files /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d_del#033[00m
Oct  2 08:58:28 np0005466030 nova_compute[230518]: 2025-10-02 12:58:28.839 2 INFO nova.virt.libvirt.driver [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Deletion of /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d_del complete#033[00m
Oct  2 08:58:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:28.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:28 np0005466030 nova_compute[230518]: 2025-10-02 12:58:28.937 2 INFO nova.compute.manager [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Took 5.14 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:58:28 np0005466030 nova_compute[230518]: 2025-10-02 12:58:28.937 2 DEBUG oslo.service.loopingcall [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:58:28 np0005466030 nova_compute[230518]: 2025-10-02 12:58:28.938 2 DEBUG nova.compute.manager [-] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:58:28 np0005466030 nova_compute[230518]: 2025-10-02 12:58:28.938 2 DEBUG nova.network.neutron [-] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:58:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:29.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:29 np0005466030 nova_compute[230518]: 2025-10-02 12:58:29.068 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:58:29 np0005466030 nova_compute[230518]: 2025-10-02 12:58:29.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:29 np0005466030 nova_compute[230518]: 2025-10-02 12:58:29.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:30 np0005466030 nova_compute[230518]: 2025-10-02 12:58:30.541 2 DEBUG nova.network.neutron [-] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:58:30 np0005466030 nova_compute[230518]: 2025-10-02 12:58:30.555 2 INFO nova.compute.manager [-] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Took 1.62 seconds to deallocate network for instance.#033[00m
Oct  2 08:58:30 np0005466030 nova_compute[230518]: 2025-10-02 12:58:30.608 2 DEBUG oslo_concurrency.lockutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:30 np0005466030 nova_compute[230518]: 2025-10-02 12:58:30.608 2 DEBUG oslo_concurrency.lockutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:30 np0005466030 nova_compute[230518]: 2025-10-02 12:58:30.659 2 DEBUG nova.compute.manager [req-452f91c8-4647-4cec-ac74-7f38f07360ee req-f1ef837c-5c11-4322-861e-3f3fc6454442 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-deleted-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:30 np0005466030 nova_compute[230518]: 2025-10-02 12:58:30.672 2 DEBUG oslo_concurrency.processutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:58:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:30.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:31.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:58:31 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1911091351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:58:31 np0005466030 nova_compute[230518]: 2025-10-02 12:58:31.115 2 DEBUG oslo_concurrency.processutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:58:31 np0005466030 nova_compute[230518]: 2025-10-02 12:58:31.121 2 DEBUG nova.compute.provider_tree [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:58:31 np0005466030 nova_compute[230518]: 2025-10-02 12:58:31.138 2 DEBUG nova.scheduler.client.report [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:58:31 np0005466030 nova_compute[230518]: 2025-10-02 12:58:31.159 2 DEBUG oslo_concurrency.lockutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:31 np0005466030 nova_compute[230518]: 2025-10-02 12:58:31.202 2 INFO nova.scheduler.client.report [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Deleted allocations for instance c793e384-4ddd-4531-b6fd-172ee2fcbd4d#033[00m
Oct  2 08:58:31 np0005466030 nova_compute[230518]: 2025-10-02 12:58:31.299 2 DEBUG oslo_concurrency.lockutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:32.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:33.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:34 np0005466030 nova_compute[230518]: 2025-10-02 12:58:34.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:34 np0005466030 nova_compute[230518]: 2025-10-02 12:58:34.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:34 np0005466030 podman[292904]: 2025-10-02 12:58:34.814079898 +0000 UTC m=+0.064366741 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 08:58:34 np0005466030 podman[292903]: 2025-10-02 12:58:34.87269784 +0000 UTC m=+0.123038845 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 08:58:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:58:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:34.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:58:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:35.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:35 np0005466030 nova_compute[230518]: 2025-10-02 12:58:35.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:36 np0005466030 nova_compute[230518]: 2025-10-02 12:58:36.691 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:36 np0005466030 nova_compute[230518]: 2025-10-02 12:58:36.692 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:36 np0005466030 nova_compute[230518]: 2025-10-02 12:58:36.713 2 DEBUG nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 08:58:36 np0005466030 nova_compute[230518]: 2025-10-02 12:58:36.809 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:36 np0005466030 nova_compute[230518]: 2025-10-02 12:58:36.810 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:36 np0005466030 nova_compute[230518]: 2025-10-02 12:58:36.821 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 08:58:36 np0005466030 nova_compute[230518]: 2025-10-02 12:58:36.822 2 INFO nova.compute.claims [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 08:58:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:36.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:36 np0005466030 nova_compute[230518]: 2025-10-02 12:58:36.972 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:58:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:37.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:58:37 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4207214228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.427 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.434 2 DEBUG nova.compute.provider_tree [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.449 2 DEBUG nova.scheduler.client.report [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.470 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.471 2 DEBUG nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.522 2 DEBUG nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.522 2 DEBUG nova.network.neutron [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.554 2 INFO nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.573 2 DEBUG nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.673 2 DEBUG nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.674 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.675 2 INFO nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Creating image(s)#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.697 2 DEBUG nova.storage.rbd_utils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.723 2 DEBUG nova.storage.rbd_utils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.752 2 DEBUG nova.storage.rbd_utils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.757 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.822 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.824 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.824 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.824 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.851 2 DEBUG nova.storage.rbd_utils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.856 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:58:37 np0005466030 nova_compute[230518]: 2025-10-02 12:58:37.907 2 DEBUG nova.policy [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '47f465d8c8ac44c982f2a2e60ae9eb40', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 08:58:38 np0005466030 nova_compute[230518]: 2025-10-02 12:58:38.634 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.778s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:58:38 np0005466030 nova_compute[230518]: 2025-10-02 12:58:38.727 2 DEBUG nova.storage.rbd_utils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] resizing rbd image c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 08:58:38 np0005466030 nova_compute[230518]: 2025-10-02 12:58:38.857 2 DEBUG nova.objects.instance [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'migration_context' on Instance uuid c8cc2f8f-7f89-4304-b071-1849f76cfda8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:58:38 np0005466030 nova_compute[230518]: 2025-10-02 12:58:38.877 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 08:58:38 np0005466030 nova_compute[230518]: 2025-10-02 12:58:38.877 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Ensure instance console log exists: /var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 08:58:38 np0005466030 nova_compute[230518]: 2025-10-02 12:58:38.878 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:38 np0005466030 nova_compute[230518]: 2025-10-02 12:58:38.878 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:38 np0005466030 nova_compute[230518]: 2025-10-02 12:58:38.878 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:58:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:38.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:58:39 np0005466030 nova_compute[230518]: 2025-10-02 12:58:39.047 2 DEBUG nova.network.neutron [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Successfully created port: bddb6509-7221-4ef0-bde7-be95b89ab6d8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 08:58:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:39.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:39 np0005466030 nova_compute[230518]: 2025-10-02 12:58:39.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:39 np0005466030 nova_compute[230518]: 2025-10-02 12:58:39.437 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409904.436356, c793e384-4ddd-4531-b6fd-172ee2fcbd4d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:58:39 np0005466030 nova_compute[230518]: 2025-10-02 12:58:39.437 2 INFO nova.compute.manager [-] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:58:39 np0005466030 nova_compute[230518]: 2025-10-02 12:58:39.453 2 DEBUG nova.compute.manager [None req-7f15eccb-2e3f-47f0-958f-6ff2a325ab48 - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:58:39 np0005466030 nova_compute[230518]: 2025-10-02 12:58:39.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:40 np0005466030 nova_compute[230518]: 2025-10-02 12:58:40.072 2 DEBUG nova.network.neutron [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Successfully updated port: bddb6509-7221-4ef0-bde7-be95b89ab6d8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 08:58:40 np0005466030 nova_compute[230518]: 2025-10-02 12:58:40.084 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:58:40 np0005466030 nova_compute[230518]: 2025-10-02 12:58:40.084 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:58:40 np0005466030 nova_compute[230518]: 2025-10-02 12:58:40.084 2 DEBUG nova.network.neutron [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:58:40 np0005466030 nova_compute[230518]: 2025-10-02 12:58:40.152 2 DEBUG nova.compute.manager [req-d4ef4afa-b180-4622-a453-a307919055d4 req-61ed140e-052e-4569-a6ac-fb89b6d99ac9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-changed-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:40 np0005466030 nova_compute[230518]: 2025-10-02 12:58:40.153 2 DEBUG nova.compute.manager [req-d4ef4afa-b180-4622-a453-a307919055d4 req-61ed140e-052e-4569-a6ac-fb89b6d99ac9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Refreshing instance network info cache due to event network-changed-bddb6509-7221-4ef0-bde7-be95b89ab6d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:58:40 np0005466030 nova_compute[230518]: 2025-10-02 12:58:40.153 2 DEBUG oslo_concurrency.lockutils [req-d4ef4afa-b180-4622-a453-a307919055d4 req-61ed140e-052e-4569-a6ac-fb89b6d99ac9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:58:40 np0005466030 nova_compute[230518]: 2025-10-02 12:58:40.238 2 DEBUG nova.network.neutron [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 08:58:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:40.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:41.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.234 2 DEBUG nova.network.neutron [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updating instance_info_cache with network_info: [{"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.253 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.254 2 DEBUG nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Instance network_info: |[{"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.254 2 DEBUG oslo_concurrency.lockutils [req-d4ef4afa-b180-4622-a453-a307919055d4 req-61ed140e-052e-4569-a6ac-fb89b6d99ac9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.255 2 DEBUG nova.network.neutron [req-d4ef4afa-b180-4622-a453-a307919055d4 req-61ed140e-052e-4569-a6ac-fb89b6d99ac9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Refreshing network info cache for port bddb6509-7221-4ef0-bde7-be95b89ab6d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.260 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Start _get_guest_xml network_info=[{"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.265 2 WARNING nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.272 2 DEBUG nova.virt.libvirt.host [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.273 2 DEBUG nova.virt.libvirt.host [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.282 2 DEBUG nova.virt.libvirt.host [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.282 2 DEBUG nova.virt.libvirt.host [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.284 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.285 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.286 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.286 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.287 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.287 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.288 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.288 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.289 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.290 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.290 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.291 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.296 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:58:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:58:42 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2366804538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.763 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.791 2 DEBUG nova.storage.rbd_utils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:58:42 np0005466030 nova_compute[230518]: 2025-10-02 12:58:42.795 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:58:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:58:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:42.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:58:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:43.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:58:43 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1199214120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.246 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.248 2 DEBUG nova.virt.libvirt.vif [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:58:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-44339413',display_name='tempest-TestNetworkAdvancedServerOps-server-44339413',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-44339413',id=162,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEsh1zp1STzuMIXDnbbRAXZcbmmzIocYDU4MIRfUpLuSUtHJodm49lJQYIod0ZNL2zezyn78o0X/6+GzIk9NqxEaJ1JvcNDOKeRMzQvHVSgS3twK5fXwCqcCv0gGhQyYWw==',key_name='tempest-TestNetworkAdvancedServerOps-862557079',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-6p95bfrs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:58:37Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=c8cc2f8f-7f89-4304-b071-1849f76cfda8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.249 2 DEBUG nova.network.os_vif_util [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.249 2 DEBUG nova.network.os_vif_util [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:02:8c,bridge_name='br-int',has_traffic_filtering=True,id=bddb6509-7221-4ef0-bde7-be95b89ab6d8,network=Network(2b820c79-77a7-4936-8c6e-9c38d383ad1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbddb6509-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.250 2 DEBUG nova.objects.instance [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid c8cc2f8f-7f89-4304-b071-1849f76cfda8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.263 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] End _get_guest_xml xml=<domain type="kvm">
Oct  2 08:58:43 np0005466030 nova_compute[230518]:  <uuid>c8cc2f8f-7f89-4304-b071-1849f76cfda8</uuid>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:  <name>instance-000000a2</name>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-44339413</nova:name>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 12:58:42</nova:creationTime>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 08:58:43 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:        <nova:user uuid="47f465d8c8ac44c982f2a2e60ae9eb40">tempest-TestNetworkAdvancedServerOps-1770117619-project-member</nova:user>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:        <nova:project uuid="072925a6aec84a77a9c09ae0c83efdb3">tempest-TestNetworkAdvancedServerOps-1770117619</nova:project>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:        <nova:port uuid="bddb6509-7221-4ef0-bde7-be95b89ab6d8">
Oct  2 08:58:43 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <system>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <entry name="serial">c8cc2f8f-7f89-4304-b071-1849f76cfda8</entry>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <entry name="uuid">c8cc2f8f-7f89-4304-b071-1849f76cfda8</entry>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    </system>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:  <os>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:  </os>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:  <features>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:  </features>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:  </clock>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:  <devices>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk">
Oct  2 08:58:43 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:58:43 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk.config">
Oct  2 08:58:43 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      </source>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 08:58:43 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      </auth>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    </disk>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:49:02:8c"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <target dev="tapbddb6509-72"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    </interface>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8/console.log" append="off"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    </serial>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <video>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    </video>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    </rng>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 08:58:43 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 08:58:43 np0005466030 nova_compute[230518]:  </devices>
Oct  2 08:58:43 np0005466030 nova_compute[230518]: </domain>
Oct  2 08:58:43 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.264 2 DEBUG nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Preparing to wait for external event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.265 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.265 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.265 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.266 2 DEBUG nova.virt.libvirt.vif [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:58:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-44339413',display_name='tempest-TestNetworkAdvancedServerOps-server-44339413',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-44339413',id=162,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEsh1zp1STzuMIXDnbbRAXZcbmmzIocYDU4MIRfUpLuSUtHJodm49lJQYIod0ZNL2zezyn78o0X/6+GzIk9NqxEaJ1JvcNDOKeRMzQvHVSgS3twK5fXwCqcCv0gGhQyYWw==',key_name='tempest-TestNetworkAdvancedServerOps-862557079',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-6p95bfrs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:58:37Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=c8cc2f8f-7f89-4304-b071-1849f76cfda8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.266 2 DEBUG nova.network.os_vif_util [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.267 2 DEBUG nova.network.os_vif_util [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:02:8c,bridge_name='br-int',has_traffic_filtering=True,id=bddb6509-7221-4ef0-bde7-be95b89ab6d8,network=Network(2b820c79-77a7-4936-8c6e-9c38d383ad1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbddb6509-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.267 2 DEBUG os_vif [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:02:8c,bridge_name='br-int',has_traffic_filtering=True,id=bddb6509-7221-4ef0-bde7-be95b89ab6d8,network=Network(2b820c79-77a7-4936-8c6e-9c38d383ad1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbddb6509-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.268 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.268 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.271 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbddb6509-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.272 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbddb6509-72, col_values=(('external_ids', {'iface-id': 'bddb6509-7221-4ef0-bde7-be95b89ab6d8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:02:8c', 'vm-uuid': 'c8cc2f8f-7f89-4304-b071-1849f76cfda8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:43 np0005466030 NetworkManager[44960]: <info>  [1759409923.2745] manager: (tapbddb6509-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.282 2 INFO os_vif [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:02:8c,bridge_name='br-int',has_traffic_filtering=True,id=bddb6509-7221-4ef0-bde7-be95b89ab6d8,network=Network(2b820c79-77a7-4936-8c6e-9c38d383ad1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbddb6509-72')#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.342 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.343 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.343 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No VIF found with MAC fa:16:3e:49:02:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.344 2 INFO nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Using config drive#033[00m
Oct  2 08:58:43 np0005466030 nova_compute[230518]: 2025-10-02 12:58:43.379 2 DEBUG nova.storage.rbd_utils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:58:44 np0005466030 nova_compute[230518]: 2025-10-02 12:58:44.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:44 np0005466030 nova_compute[230518]: 2025-10-02 12:58:44.508 2 INFO nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Creating config drive at /var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8/disk.config#033[00m
Oct  2 08:58:44 np0005466030 nova_compute[230518]: 2025-10-02 12:58:44.513 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq9qsanb8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:58:44 np0005466030 nova_compute[230518]: 2025-10-02 12:58:44.664 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq9qsanb8" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:58:44 np0005466030 nova_compute[230518]: 2025-10-02 12:58:44.694 2 DEBUG nova.storage.rbd_utils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 08:58:44 np0005466030 nova_compute[230518]: 2025-10-02 12:58:44.698 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8/disk.config c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:58:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:44.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:45 np0005466030 nova_compute[230518]: 2025-10-02 12:58:45.047 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8/disk.config c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:58:45 np0005466030 nova_compute[230518]: 2025-10-02 12:58:45.049 2 INFO nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Deleting local config drive /var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8/disk.config because it was imported into RBD.#033[00m
Oct  2 08:58:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:58:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:58:45 np0005466030 kernel: tapbddb6509-72: entered promiscuous mode
Oct  2 08:58:45 np0005466030 NetworkManager[44960]: <info>  [1759409925.1323] manager: (tapbddb6509-72): new Tun device (/org/freedesktop/NetworkManager/Devices/312)
Oct  2 08:58:45 np0005466030 ovn_controller[129257]: 2025-10-02T12:58:45Z|00669|binding|INFO|Claiming lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 for this chassis.
Oct  2 08:58:45 np0005466030 nova_compute[230518]: 2025-10-02 12:58:45.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:45 np0005466030 ovn_controller[129257]: 2025-10-02T12:58:45Z|00670|binding|INFO|bddb6509-7221-4ef0-bde7-be95b89ab6d8: Claiming fa:16:3e:49:02:8c 10.100.0.11
Oct  2 08:58:45 np0005466030 nova_compute[230518]: 2025-10-02 12:58:45.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.153 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:02:8c 10.100.0.11'], port_security=['fa:16:3e:49:02:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c8cc2f8f-7f89-4304-b071-1849f76cfda8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'baa35b76-b47d-4782-b3a5-4738baaa63f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=306c2335-d591-443e-b0c0-e2b4293c6e94, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bddb6509-7221-4ef0-bde7-be95b89ab6d8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.154 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bddb6509-7221-4ef0-bde7-be95b89ab6d8 in datapath 2b820c79-77a7-4936-8c6e-9c38d383ad1b bound to our chassis#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.155 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b820c79-77a7-4936-8c6e-9c38d383ad1b#033[00m
Oct  2 08:58:45 np0005466030 systemd-machined[188247]: New machine qemu-78-instance-000000a2.
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.173 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e848de92-f665-4678-89ac-6a7462a2ccc0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.175 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b820c79-71 in ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.178 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b820c79-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.178 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1b9532a9-6405-4a53-b184-bfa94e1bbb91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.179 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e6905492-f281-47b2-b554-a232d512874a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:45 np0005466030 systemd[1]: Started Virtual Machine qemu-78-instance-000000a2.
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.189 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f3b3f8-095c-4c27-9d64-ce8a7091b9f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:45 np0005466030 systemd-udevd[293294]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:58:45 np0005466030 NetworkManager[44960]: <info>  [1759409925.2064] device (tapbddb6509-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:58:45 np0005466030 NetworkManager[44960]: <info>  [1759409925.2078] device (tapbddb6509-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.227 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc5db61-041f-4d30-a858-e47a1f8063b8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:45 np0005466030 ovn_controller[129257]: 2025-10-02T12:58:45Z|00671|binding|INFO|Setting lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 ovn-installed in OVS
Oct  2 08:58:45 np0005466030 ovn_controller[129257]: 2025-10-02T12:58:45Z|00672|binding|INFO|Setting lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 up in Southbound
Oct  2 08:58:45 np0005466030 nova_compute[230518]: 2025-10-02 12:58:45.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.306 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[1908c7b4-1a92-4b41-892c-47e5da70abf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:45 np0005466030 podman[293273]: 2025-10-02 12:58:45.312108222 +0000 UTC m=+0.150045333 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:58:45 np0005466030 NetworkManager[44960]: <info>  [1759409925.3209] manager: (tap2b820c79-70): new Veth device (/org/freedesktop/NetworkManager/Devices/313)
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.320 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[89af4dda-3ebc-4c3b-bc16-83be397eed5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:45 np0005466030 podman[293272]: 2025-10-02 12:58:45.349415772 +0000 UTC m=+0.186290040 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.363 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[cad3fcc9-3a4d-4f2e-b04e-dfc4905a38e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.367 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ce4e0d88-da2c-4c34-a558-29bd21dc479c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:45 np0005466030 NetworkManager[44960]: <info>  [1759409925.3913] device (tap2b820c79-70): carrier: link connected
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.395 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[dded3948-42de-4049-b84f-80d8dc7e6199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.412 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e14bda-a28b-431f-90ee-9696ad26594d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b820c79-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:69:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778894, 'reachable_time': 24019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293348, 'error': None, 'target': 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.427 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e38ce667-3425-41b6-b884-2771ba024686]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:6920'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 778894, 'tstamp': 778894}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293349, 'error': None, 'target': 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.441 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[10d59093-dbf8-465f-99c9-72a2c1ea077f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b820c79-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:69:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778894, 'reachable_time': 24019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293350, 'error': None, 'target': 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.465 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[351cdb5f-9946-4b63-af6a-3e7f36ea9e17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:45 np0005466030 nova_compute[230518]: 2025-10-02 12:58:45.486 2 DEBUG nova.network.neutron [req-d4ef4afa-b180-4622-a453-a307919055d4 req-61ed140e-052e-4569-a6ac-fb89b6d99ac9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updated VIF entry in instance network info cache for port bddb6509-7221-4ef0-bde7-be95b89ab6d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:58:45 np0005466030 nova_compute[230518]: 2025-10-02 12:58:45.486 2 DEBUG nova.network.neutron [req-d4ef4afa-b180-4622-a453-a307919055d4 req-61ed140e-052e-4569-a6ac-fb89b6d99ac9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updating instance_info_cache with network_info: [{"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:58:45 np0005466030 nova_compute[230518]: 2025-10-02 12:58:45.505 2 DEBUG oslo_concurrency.lockutils [req-d4ef4afa-b180-4622-a453-a307919055d4 req-61ed140e-052e-4569-a6ac-fb89b6d99ac9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.510 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9e03dc3c-093b-4ae4-b0c9-7dc2cc697ef4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.511 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b820c79-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.511 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.512 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b820c79-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:45 np0005466030 kernel: tap2b820c79-70: entered promiscuous mode
Oct  2 08:58:45 np0005466030 nova_compute[230518]: 2025-10-02 12:58:45.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:45 np0005466030 NetworkManager[44960]: <info>  [1759409925.5153] manager: (tap2b820c79-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/314)
Oct  2 08:58:45 np0005466030 nova_compute[230518]: 2025-10-02 12:58:45.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.516 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b820c79-70, col_values=(('external_ids', {'iface-id': 'c363aa8d-5657-4504-a1d6-6861ffb1c6b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:58:45 np0005466030 nova_compute[230518]: 2025-10-02 12:58:45.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:45 np0005466030 ovn_controller[129257]: 2025-10-02T12:58:45Z|00673|binding|INFO|Releasing lport c363aa8d-5657-4504-a1d6-6861ffb1c6b4 from this chassis (sb_readonly=0)
Oct  2 08:58:45 np0005466030 nova_compute[230518]: 2025-10-02 12:58:45.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.536 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b820c79-77a7-4936-8c6e-9c38d383ad1b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b820c79-77a7-4936-8c6e-9c38d383ad1b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.537 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[07a6d72a-4764-444b-a6dd-7cc65c69ef1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.537 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-2b820c79-77a7-4936-8c6e-9c38d383ad1b
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/2b820c79-77a7-4936-8c6e-9c38d383ad1b.pid.haproxy
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 2b820c79-77a7-4936-8c6e-9c38d383ad1b
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:58:45 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.538 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'env', 'PROCESS_TAG=haproxy-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b820c79-77a7-4936-8c6e-9c38d383ad1b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:58:45 np0005466030 nova_compute[230518]: 2025-10-02 12:58:45.657 2 DEBUG nova.compute.manager [req-c446fcf1-8fa0-4abd-94f6-2faa1fa49fa0 req-f0698a5b-2109-44af-9a07-ed71ed746ebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:45 np0005466030 nova_compute[230518]: 2025-10-02 12:58:45.658 2 DEBUG oslo_concurrency.lockutils [req-c446fcf1-8fa0-4abd-94f6-2faa1fa49fa0 req-f0698a5b-2109-44af-9a07-ed71ed746ebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:45 np0005466030 nova_compute[230518]: 2025-10-02 12:58:45.659 2 DEBUG oslo_concurrency.lockutils [req-c446fcf1-8fa0-4abd-94f6-2faa1fa49fa0 req-f0698a5b-2109-44af-9a07-ed71ed746ebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:45 np0005466030 nova_compute[230518]: 2025-10-02 12:58:45.659 2 DEBUG oslo_concurrency.lockutils [req-c446fcf1-8fa0-4abd-94f6-2faa1fa49fa0 req-f0698a5b-2109-44af-9a07-ed71ed746ebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:45 np0005466030 nova_compute[230518]: 2025-10-02 12:58:45.659 2 DEBUG nova.compute.manager [req-c446fcf1-8fa0-4abd-94f6-2faa1fa49fa0 req-f0698a5b-2109-44af-9a07-ed71ed746ebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Processing event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 08:58:45 np0005466030 podman[293424]: 2025-10-02 12:58:45.90518841 +0000 UTC m=+0.042507492 container create 371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 08:58:45 np0005466030 systemd[1]: Started libpod-conmon-371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29.scope.
Oct  2 08:58:45 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:58:45 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5537c82eea9d782306a87a5e6ec1b0062bd7bca80ec083c7e105f886bdbc601b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:58:45 np0005466030 podman[293424]: 2025-10-02 12:58:45.880807353 +0000 UTC m=+0.018126455 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:58:45 np0005466030 podman[293424]: 2025-10-02 12:58:45.993377531 +0000 UTC m=+0.130696613 container init 371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 08:58:45 np0005466030 podman[293424]: 2025-10-02 12:58:45.998909692 +0000 UTC m=+0.136228774 container start 371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 08:58:46 np0005466030 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[293440]: [NOTICE]   (293444) : New worker (293446) forked
Oct  2 08:58:46 np0005466030 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[293440]: [NOTICE]   (293444) : Loading success.
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.190 2 DEBUG nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.192 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409926.1914358, c8cc2f8f-7f89-4304-b071-1849f76cfda8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.192 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] VM Started (Lifecycle Event)#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.196 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.200 2 INFO nova.virt.libvirt.driver [-] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Instance spawned successfully.#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.200 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.228 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.235 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.239 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.239 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.239 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.240 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.241 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.241 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.268 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.268 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409926.19153, c8cc2f8f-7f89-4304-b071-1849f76cfda8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.269 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] VM Paused (Lifecycle Event)#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.298 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.302 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409926.1951358, c8cc2f8f-7f89-4304-b071-1849f76cfda8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.303 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.311 2 INFO nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Took 8.64 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.311 2 DEBUG nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.322 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.325 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.355 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.386 2 INFO nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Took 9.63 seconds to build instance.#033[00m
Oct  2 08:58:46 np0005466030 nova_compute[230518]: 2025-10-02 12:58:46.406 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:46.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:47.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:47 np0005466030 nova_compute[230518]: 2025-10-02 12:58:47.756 2 DEBUG nova.compute.manager [req-0a0dc34b-97e5-41e8-bc3f-99288e93c5ba req-37332556-3845-420d-b92d-af63f27532e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:47 np0005466030 nova_compute[230518]: 2025-10-02 12:58:47.756 2 DEBUG oslo_concurrency.lockutils [req-0a0dc34b-97e5-41e8-bc3f-99288e93c5ba req-37332556-3845-420d-b92d-af63f27532e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:58:47 np0005466030 nova_compute[230518]: 2025-10-02 12:58:47.757 2 DEBUG oslo_concurrency.lockutils [req-0a0dc34b-97e5-41e8-bc3f-99288e93c5ba req-37332556-3845-420d-b92d-af63f27532e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:58:47 np0005466030 nova_compute[230518]: 2025-10-02 12:58:47.757 2 DEBUG oslo_concurrency.lockutils [req-0a0dc34b-97e5-41e8-bc3f-99288e93c5ba req-37332556-3845-420d-b92d-af63f27532e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:58:47 np0005466030 nova_compute[230518]: 2025-10-02 12:58:47.757 2 DEBUG nova.compute.manager [req-0a0dc34b-97e5-41e8-bc3f-99288e93c5ba req-37332556-3845-420d-b92d-af63f27532e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] No waiting events found dispatching network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:58:47 np0005466030 nova_compute[230518]: 2025-10-02 12:58:47.757 2 WARNING nova.compute.manager [req-0a0dc34b-97e5-41e8-bc3f-99288e93c5ba req-37332556-3845-420d-b92d-af63f27532e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received unexpected event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:58:48 np0005466030 nova_compute[230518]: 2025-10-02 12:58:48.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:48.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:49.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:49 np0005466030 nova_compute[230518]: 2025-10-02 12:58:49.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:50.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:58:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:51.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:58:51 np0005466030 nova_compute[230518]: 2025-10-02 12:58:51.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:51 np0005466030 NetworkManager[44960]: <info>  [1759409931.5830] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Oct  2 08:58:51 np0005466030 NetworkManager[44960]: <info>  [1759409931.5846] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/316)
Oct  2 08:58:51 np0005466030 nova_compute[230518]: 2025-10-02 12:58:51.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:51 np0005466030 ovn_controller[129257]: 2025-10-02T12:58:51Z|00674|binding|INFO|Releasing lport c363aa8d-5657-4504-a1d6-6861ffb1c6b4 from this chassis (sb_readonly=0)
Oct  2 08:58:51 np0005466030 nova_compute[230518]: 2025-10-02 12:58:51.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:52 np0005466030 nova_compute[230518]: 2025-10-02 12:58:52.011 2 DEBUG nova.compute.manager [req-dd065442-74cf-4037-b6bd-5cd3b66192d7 req-408db3a3-e546-4aca-a909-1e28d0ba045e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-changed-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:58:52 np0005466030 nova_compute[230518]: 2025-10-02 12:58:52.011 2 DEBUG nova.compute.manager [req-dd065442-74cf-4037-b6bd-5cd3b66192d7 req-408db3a3-e546-4aca-a909-1e28d0ba045e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Refreshing instance network info cache due to event network-changed-bddb6509-7221-4ef0-bde7-be95b89ab6d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:58:52 np0005466030 nova_compute[230518]: 2025-10-02 12:58:52.011 2 DEBUG oslo_concurrency.lockutils [req-dd065442-74cf-4037-b6bd-5cd3b66192d7 req-408db3a3-e546-4aca-a909-1e28d0ba045e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:58:52 np0005466030 nova_compute[230518]: 2025-10-02 12:58:52.012 2 DEBUG oslo_concurrency.lockutils [req-dd065442-74cf-4037-b6bd-5cd3b66192d7 req-408db3a3-e546-4aca-a909-1e28d0ba045e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:58:52 np0005466030 nova_compute[230518]: 2025-10-02 12:58:52.012 2 DEBUG nova.network.neutron [req-dd065442-74cf-4037-b6bd-5cd3b66192d7 req-408db3a3-e546-4aca-a909-1e28d0ba045e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Refreshing network info cache for port bddb6509-7221-4ef0-bde7-be95b89ab6d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:58:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:52.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:53.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:53 np0005466030 nova_compute[230518]: 2025-10-02 12:58:53.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:54 np0005466030 nova_compute[230518]: 2025-10-02 12:58:54.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:54 np0005466030 nova_compute[230518]: 2025-10-02 12:58:54.692 2 DEBUG nova.network.neutron [req-dd065442-74cf-4037-b6bd-5cd3b66192d7 req-408db3a3-e546-4aca-a909-1e28d0ba045e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updated VIF entry in instance network info cache for port bddb6509-7221-4ef0-bde7-be95b89ab6d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:58:54 np0005466030 nova_compute[230518]: 2025-10-02 12:58:54.693 2 DEBUG nova.network.neutron [req-dd065442-74cf-4037-b6bd-5cd3b66192d7 req-408db3a3-e546-4aca-a909-1e28d0ba045e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updating instance_info_cache with network_info: [{"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:58:54 np0005466030 nova_compute[230518]: 2025-10-02 12:58:54.739 2 DEBUG oslo_concurrency.lockutils [req-dd065442-74cf-4037-b6bd-5cd3b66192d7 req-408db3a3-e546-4aca-a909-1e28d0ba045e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:58:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:54.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:55.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:56.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:57.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:58 np0005466030 nova_compute[230518]: 2025-10-02 12:58:58.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:58:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:58.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:58:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:58:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:58:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:59.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:58:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:59.311 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:58:59 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:58:59.312 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 08:58:59 np0005466030 nova_compute[230518]: 2025-10-02 12:58:59.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:58:59 np0005466030 ovn_controller[129257]: 2025-10-02T12:58:59Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:49:02:8c 10.100.0.11
Oct  2 08:58:59 np0005466030 ovn_controller[129257]: 2025-10-02T12:58:59Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:02:8c 10.100.0.11
Oct  2 08:58:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:58:59 np0005466030 nova_compute[230518]: 2025-10-02 12:58:59.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:00.315 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:00.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:01.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:02.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:03.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:03 np0005466030 nova_compute[230518]: 2025-10-02 12:59:03.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:59:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:59:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:59:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:59:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 08:59:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:59:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 08:59:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:04 np0005466030 nova_compute[230518]: 2025-10-02 12:59:04.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:59:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:04.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:59:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:05.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:05 np0005466030 podman[293708]: 2025-10-02 12:59:05.807624471 +0000 UTC m=+0.057812877 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 08:59:05 np0005466030 podman[293707]: 2025-10-02 12:59:05.873086016 +0000 UTC m=+0.124842111 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:59:06 np0005466030 nova_compute[230518]: 2025-10-02 12:59:06.312 2 INFO nova.compute.manager [None req-187533de-d867-4c1d-94a7-99d8ae82b7b5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Get console output#033[00m
Oct  2 08:59:06 np0005466030 nova_compute[230518]: 2025-10-02 12:59:06.317 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 08:59:06 np0005466030 nova_compute[230518]: 2025-10-02 12:59:06.637 2 DEBUG oslo_concurrency.lockutils [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:06 np0005466030 nova_compute[230518]: 2025-10-02 12:59:06.638 2 DEBUG oslo_concurrency.lockutils [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:06 np0005466030 nova_compute[230518]: 2025-10-02 12:59:06.638 2 INFO nova.compute.manager [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Rebooting instance#033[00m
Oct  2 08:59:06 np0005466030 nova_compute[230518]: 2025-10-02 12:59:06.662 2 DEBUG oslo_concurrency.lockutils [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:59:06 np0005466030 nova_compute[230518]: 2025-10-02 12:59:06.663 2 DEBUG oslo_concurrency.lockutils [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:59:06 np0005466030 nova_compute[230518]: 2025-10-02 12:59:06.664 2 DEBUG nova.network.neutron [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 08:59:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 08:59:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:06.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 08:59:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:07.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:08 np0005466030 nova_compute[230518]: 2025-10-02 12:59:08.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:59:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:08.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:59:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:09.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:09 np0005466030 nova_compute[230518]: 2025-10-02 12:59:09.176 2 DEBUG nova.network.neutron [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updating instance_info_cache with network_info: [{"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:59:09 np0005466030 nova_compute[230518]: 2025-10-02 12:59:09.211 2 DEBUG oslo_concurrency.lockutils [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:59:09 np0005466030 nova_compute[230518]: 2025-10-02 12:59:09.212 2 DEBUG nova.compute.manager [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:59:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:09 np0005466030 nova_compute[230518]: 2025-10-02 12:59:09.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:10.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:59:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:11.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:59:12 np0005466030 nova_compute[230518]: 2025-10-02 12:59:12.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:12 np0005466030 nova_compute[230518]: 2025-10-02 12:59:12.114 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:12 np0005466030 nova_compute[230518]: 2025-10-02 12:59:12.114 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:12 np0005466030 nova_compute[230518]: 2025-10-02 12:59:12.115 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:12 np0005466030 nova_compute[230518]: 2025-10-02 12:59:12.115 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 08:59:12 np0005466030 nova_compute[230518]: 2025-10-02 12:59:12.115 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:59:12 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3992647069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:59:12 np0005466030 nova_compute[230518]: 2025-10-02 12:59:12.572 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:12 np0005466030 nova_compute[230518]: 2025-10-02 12:59:12.686 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:59:12 np0005466030 nova_compute[230518]: 2025-10-02 12:59:12.687 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 08:59:12 np0005466030 nova_compute[230518]: 2025-10-02 12:59:12.870 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 08:59:12 np0005466030 nova_compute[230518]: 2025-10-02 12:59:12.871 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4103MB free_disk=20.784870147705078GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 08:59:12 np0005466030 kernel: tapbddb6509-72 (unregistering): left promiscuous mode
Oct  2 08:59:12 np0005466030 nova_compute[230518]: 2025-10-02 12:59:12.872 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:12 np0005466030 nova_compute[230518]: 2025-10-02 12:59:12.872 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:12 np0005466030 NetworkManager[44960]: <info>  [1759409952.8768] device (tapbddb6509-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:59:12 np0005466030 ovn_controller[129257]: 2025-10-02T12:59:12Z|00675|binding|INFO|Releasing lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 from this chassis (sb_readonly=0)
Oct  2 08:59:12 np0005466030 ovn_controller[129257]: 2025-10-02T12:59:12Z|00676|binding|INFO|Setting lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 down in Southbound
Oct  2 08:59:12 np0005466030 ovn_controller[129257]: 2025-10-02T12:59:12Z|00677|binding|INFO|Removing iface tapbddb6509-72 ovn-installed in OVS
Oct  2 08:59:12 np0005466030 nova_compute[230518]: 2025-10-02 12:59:12.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:12 np0005466030 nova_compute[230518]: 2025-10-02 12:59:12.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:12 np0005466030 nova_compute[230518]: 2025-10-02 12:59:12.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:12 np0005466030 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000a2.scope: Deactivated successfully.
Oct  2 08:59:12 np0005466030 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000a2.scope: Consumed 13.810s CPU time.
Oct  2 08:59:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:12.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:12 np0005466030 systemd-machined[188247]: Machine qemu-78-instance-000000a2 terminated.
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.001 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:02:8c 10.100.0.11'], port_security=['fa:16:3e:49:02:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c8cc2f8f-7f89-4304-b071-1849f76cfda8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'baa35b76-b47d-4782-b3a5-4738baaa63f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=306c2335-d591-443e-b0c0-e2b4293c6e94, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bddb6509-7221-4ef0-bde7-be95b89ab6d8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.003 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bddb6509-7221-4ef0-bde7-be95b89ab6d8 in datapath 2b820c79-77a7-4936-8c6e-9c38d383ad1b unbound from our chassis#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.005 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b820c79-77a7-4936-8c6e-9c38d383ad1b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.005 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c9643469-45d9-4df8-bcc5-4df6db2a4808]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.007 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b namespace which is not needed anymore#033[00m
Oct  2 08:59:13 np0005466030 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[293440]: [NOTICE]   (293444) : haproxy version is 2.8.14-c23fe91
Oct  2 08:59:13 np0005466030 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[293440]: [NOTICE]   (293444) : path to executable is /usr/sbin/haproxy
Oct  2 08:59:13 np0005466030 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[293440]: [WARNING]  (293444) : Exiting Master process...
Oct  2 08:59:13 np0005466030 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[293440]: [WARNING]  (293444) : Exiting Master process...
Oct  2 08:59:13 np0005466030 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[293440]: [ALERT]    (293444) : Current worker (293446) exited with code 143 (Terminated)
Oct  2 08:59:13 np0005466030 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[293440]: [WARNING]  (293444) : All workers exited. Exiting... (0)
Oct  2 08:59:13 np0005466030 systemd[1]: libpod-371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29.scope: Deactivated successfully.
Oct  2 08:59:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:59:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:13.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:59:13 np0005466030 podman[293795]: 2025-10-02 12:59:13.124471834 +0000 UTC m=+0.040770998 container died 371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.129 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance c8cc2f8f-7f89-4304-b071-1849f76cfda8 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.129 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.130 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:13 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29-userdata-shm.mount: Deactivated successfully.
Oct  2 08:59:13 np0005466030 systemd[1]: var-lib-containers-storage-overlay-5537c82eea9d782306a87a5e6ec1b0062bd7bca80ec083c7e105f886bdbc601b-merged.mount: Deactivated successfully.
Oct  2 08:59:13 np0005466030 podman[293795]: 2025-10-02 12:59:13.169749161 +0000 UTC m=+0.086048325 container cleanup 371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 08:59:13 np0005466030 systemd[1]: libpod-conmon-371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29.scope: Deactivated successfully.
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.235 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:13 np0005466030 podman[293835]: 2025-10-02 12:59:13.237560097 +0000 UTC m=+0.045334469 container remove 371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.243 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6b95c9f4-82f3-48cc-ac8d-9e9c125293ac]: (4, ('Thu Oct  2 12:59:13 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b (371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29)\n371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29\nThu Oct  2 12:59:13 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b (371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29)\n371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.245 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[97224cb4-f147-45aa-8dfb-bddf83a59047]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.246 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b820c79-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:13 np0005466030 kernel: tap2b820c79-70: left promiscuous mode
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.267 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[59e6ba71-0143-440a-bc1d-db13e4e0a11a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.301 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[724677f0-7c17-4ce8-a612-35b026dca304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.302 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2cf049-e3bf-4e4b-9f31-be7ad7071d6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.318 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f8eca367-90d6-4044-b0bd-192918799ac0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778882, 'reachable_time': 30508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293852, 'error': None, 'target': 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 systemd[1]: run-netns-ovnmeta\x2d2b820c79\x2d77a7\x2d4936\x2d8c6e\x2d9c38d383ad1b.mount: Deactivated successfully.
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.324 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.324 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[91857656-a42c-49b3-ac98-0f96bfdd07b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.375 2 INFO nova.virt.libvirt.driver [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Instance shutdown successfully.#033[00m
Oct  2 08:59:13 np0005466030 kernel: tapbddb6509-72: entered promiscuous mode
Oct  2 08:59:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:59:13Z|00678|binding|INFO|Claiming lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 for this chassis.
Oct  2 08:59:13 np0005466030 NetworkManager[44960]: <info>  [1759409953.4475] manager: (tapbddb6509-72): new Tun device (/org/freedesktop/NetworkManager/Devices/317)
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:59:13Z|00679|binding|INFO|bddb6509-7221-4ef0-bde7-be95b89ab6d8: Claiming fa:16:3e:49:02:8c 10.100.0.11
Oct  2 08:59:13 np0005466030 systemd-udevd[293774]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 08:59:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:59:13Z|00680|binding|INFO|Setting lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 ovn-installed in OVS
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:13 np0005466030 NetworkManager[44960]: <info>  [1759409953.4732] device (tapbddb6509-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 08:59:13 np0005466030 NetworkManager[44960]: <info>  [1759409953.4758] device (tapbddb6509-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 08:59:13 np0005466030 systemd-machined[188247]: New machine qemu-79-instance-000000a2.
Oct  2 08:59:13 np0005466030 systemd[1]: Started Virtual Machine qemu-79-instance-000000a2.
Oct  2 08:59:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:59:13Z|00681|binding|INFO|Setting lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 up in Southbound
Oct  2 08:59:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 08:59:13 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1424863242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.590 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:02:8c 10.100.0.11'], port_security=['fa:16:3e:49:02:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c8cc2f8f-7f89-4304-b071-1849f76cfda8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'baa35b76-b47d-4782-b3a5-4738baaa63f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=306c2335-d591-443e-b0c0-e2b4293c6e94, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bddb6509-7221-4ef0-bde7-be95b89ab6d8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.592 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bddb6509-7221-4ef0-bde7-be95b89ab6d8 in datapath 2b820c79-77a7-4936-8c6e-9c38d383ad1b bound to our chassis#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.596 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b820c79-77a7-4936-8c6e-9c38d383ad1b#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.609 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ba255965-eb54-42c2-a6af-4c3b0f4b2aad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.611 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b820c79-71 in ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.614 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b820c79-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.614 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a2356538-7b6a-40ec-9039-981163b18c1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.616 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cd8dfa02-d9cb-4ae9-ae6d-7b0c86c6ed09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.639 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1effdb09-40fb-4756-b0d9-82ee7323e08d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:59:13 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/779617749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.677 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[37cb2966-f7d7-4d12-977c-8eaca638a677]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.684 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.691 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.709 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ea3403e3-c3b1-43d6-9fa4-94d024b0290f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.715 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d574dd1d-b7c3-47d0-832e-677afaa79609]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 NetworkManager[44960]: <info>  [1759409953.7164] manager: (tap2b820c79-70): new Veth device (/org/freedesktop/NetworkManager/Devices/318)
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.720 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.762 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4979d423-5c29-4123-a04d-c44c124ea7b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.768 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6c19caea-9035-46d6-9123-6e22592215b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 NetworkManager[44960]: <info>  [1759409953.7939] device (tap2b820c79-70): carrier: link connected
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.799 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d8daa9-e187-4b5f-ab03-38ec57257b65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.820 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2d387bda-e7f7-4d8f-8cdc-79259e914b95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b820c79-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:69:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 209], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 781735, 'reachable_time': 33153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293920, 'error': None, 'target': 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.837 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[626c8597-92f9-4e87-9a7c-fd68679bfef9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:6920'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 781735, 'tstamp': 781735}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293921, 'error': None, 'target': 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.857 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ea659def-99a0-4c12-8446-5a73873a9908]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b820c79-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:69:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 209], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 781735, 'reachable_time': 33153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293922, 'error': None, 'target': 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.864 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.864 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.887 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b0279072-96ef-4e6c-912b-5f945c930fbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.956 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2ba565d7-522f-4d96-8a9f-71895e201435]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.958 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b820c79-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.958 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.959 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b820c79-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:13 np0005466030 kernel: tap2b820c79-70: entered promiscuous mode
Oct  2 08:59:13 np0005466030 NetworkManager[44960]: <info>  [1759409953.9625] manager: (tap2b820c79-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/319)
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.972 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b820c79-70, col_values=(('external_ids', {'iface-id': 'c363aa8d-5657-4504-a1d6-6861ffb1c6b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:13 np0005466030 ovn_controller[129257]: 2025-10-02T12:59:13Z|00682|binding|INFO|Releasing lport c363aa8d-5657-4504-a1d6-6861ffb1c6b4 from this chassis (sb_readonly=0)
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.977 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b820c79-77a7-4936-8c6e-9c38d383ad1b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b820c79-77a7-4936-8c6e-9c38d383ad1b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.978 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0e22b84c-f55b-4e04-a2fd-ccc4677f745b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.979 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-2b820c79-77a7-4936-8c6e-9c38d383ad1b
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/2b820c79-77a7-4936-8c6e-9c38d383ad1b.pid.haproxy
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 2b820c79-77a7-4936-8c6e-9c38d383ad1b
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 08:59:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.981 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'env', 'PROCESS_TAG=haproxy-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b820c79-77a7-4936-8c6e-9c38d383ad1b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 08:59:13 np0005466030 nova_compute[230518]: 2025-10-02 12:59:13.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:14 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:59:14 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 08:59:14 np0005466030 podman[294004]: 2025-10-02 12:59:14.390957774 +0000 UTC m=+0.061205802 container create 6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:59:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:14 np0005466030 nova_compute[230518]: 2025-10-02 12:59:14.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:14 np0005466030 systemd[1]: Started libpod-conmon-6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0.scope.
Oct  2 08:59:14 np0005466030 nova_compute[230518]: 2025-10-02 12:59:14.448 2 DEBUG nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-unplugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:14 np0005466030 nova_compute[230518]: 2025-10-02 12:59:14.449 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:14 np0005466030 nova_compute[230518]: 2025-10-02 12:59:14.449 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:14 np0005466030 nova_compute[230518]: 2025-10-02 12:59:14.450 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:14 np0005466030 nova_compute[230518]: 2025-10-02 12:59:14.450 2 DEBUG nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] No waiting events found dispatching network-vif-unplugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:59:14 np0005466030 nova_compute[230518]: 2025-10-02 12:59:14.450 2 WARNING nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received unexpected event network-vif-unplugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 for instance with vm_state active and task_state reboot_started.#033[00m
Oct  2 08:59:14 np0005466030 nova_compute[230518]: 2025-10-02 12:59:14.450 2 DEBUG nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:14 np0005466030 nova_compute[230518]: 2025-10-02 12:59:14.451 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:14 np0005466030 nova_compute[230518]: 2025-10-02 12:59:14.451 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:14 np0005466030 nova_compute[230518]: 2025-10-02 12:59:14.451 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:14 np0005466030 nova_compute[230518]: 2025-10-02 12:59:14.451 2 DEBUG nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] No waiting events found dispatching network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:59:14 np0005466030 podman[294004]: 2025-10-02 12:59:14.360207499 +0000 UTC m=+0.030455597 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 08:59:14 np0005466030 nova_compute[230518]: 2025-10-02 12:59:14.452 2 WARNING nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received unexpected event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 for instance with vm_state active and task_state reboot_started.#033[00m
Oct  2 08:59:14 np0005466030 nova_compute[230518]: 2025-10-02 12:59:14.452 2 DEBUG nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:14 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 08:59:14 np0005466030 nova_compute[230518]: 2025-10-02 12:59:14.452 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:14 np0005466030 nova_compute[230518]: 2025-10-02 12:59:14.452 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:14 np0005466030 nova_compute[230518]: 2025-10-02 12:59:14.453 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:14 np0005466030 nova_compute[230518]: 2025-10-02 12:59:14.453 2 DEBUG nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] No waiting events found dispatching network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:59:14 np0005466030 nova_compute[230518]: 2025-10-02 12:59:14.453 2 WARNING nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received unexpected event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 for instance with vm_state active and task_state reboot_started.#033[00m
Oct  2 08:59:14 np0005466030 systemd[1]: Started libcrun container.
Oct  2 08:59:14 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2b1c7c3db0c4c0309707ba6b2e6ca3059d313e0d06a5d2858650b6979b1b343/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 08:59:14 np0005466030 podman[294004]: 2025-10-02 12:59:14.488210066 +0000 UTC m=+0.158458124 container init 6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  2 08:59:14 np0005466030 podman[294004]: 2025-10-02 12:59:14.496131203 +0000 UTC m=+0.166379221 container start 6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 08:59:14 np0005466030 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[294019]: [NOTICE]   (294024) : New worker (294026) forked
Oct  2 08:59:14 np0005466030 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[294019]: [NOTICE]   (294024) : Loading success.
Oct  2 08:59:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:14.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:15.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:15 np0005466030 nova_compute[230518]: 2025-10-02 12:59:15.143 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for c8cc2f8f-7f89-4304-b071-1849f76cfda8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 08:59:15 np0005466030 nova_compute[230518]: 2025-10-02 12:59:15.144 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409955.1423707, c8cc2f8f-7f89-4304-b071-1849f76cfda8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:59:15 np0005466030 nova_compute[230518]: 2025-10-02 12:59:15.144 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] VM Resumed (Lifecycle Event)#033[00m
Oct  2 08:59:15 np0005466030 nova_compute[230518]: 2025-10-02 12:59:15.152 2 INFO nova.virt.libvirt.driver [-] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Instance running successfully.#033[00m
Oct  2 08:59:15 np0005466030 nova_compute[230518]: 2025-10-02 12:59:15.153 2 INFO nova.virt.libvirt.driver [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Instance soft rebooted successfully.#033[00m
Oct  2 08:59:15 np0005466030 nova_compute[230518]: 2025-10-02 12:59:15.153 2 DEBUG nova.compute.manager [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:59:15 np0005466030 nova_compute[230518]: 2025-10-02 12:59:15.524 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:59:15 np0005466030 nova_compute[230518]: 2025-10-02 12:59:15.528 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:59:15 np0005466030 nova_compute[230518]: 2025-10-02 12:59:15.577 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] During sync_power_state the instance has a pending task (reboot_started). Skip.#033[00m
Oct  2 08:59:15 np0005466030 nova_compute[230518]: 2025-10-02 12:59:15.577 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409955.1425006, c8cc2f8f-7f89-4304-b071-1849f76cfda8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:59:15 np0005466030 nova_compute[230518]: 2025-10-02 12:59:15.578 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] VM Started (Lifecycle Event)#033[00m
Oct  2 08:59:15 np0005466030 nova_compute[230518]: 2025-10-02 12:59:15.613 2 DEBUG oslo_concurrency.lockutils [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 8.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:15 np0005466030 nova_compute[230518]: 2025-10-02 12:59:15.624 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:59:15 np0005466030 nova_compute[230518]: 2025-10-02 12:59:15.629 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 08:59:15 np0005466030 podman[294077]: 2025-10-02 12:59:15.809246643 +0000 UTC m=+0.057345734 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 08:59:15 np0005466030 podman[294078]: 2025-10-02 12:59:15.81466739 +0000 UTC m=+0.062900244 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:59:15 np0005466030 nova_compute[230518]: 2025-10-02 12:59:15.859 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:16 np0005466030 nova_compute[230518]: 2025-10-02 12:59:16.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:16 np0005466030 nova_compute[230518]: 2025-10-02 12:59:16.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:16 np0005466030 nova_compute[230518]: 2025-10-02 12:59:16.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 08:59:16 np0005466030 nova_compute[230518]: 2025-10-02 12:59:16.612 2 DEBUG nova.compute.manager [req-34e6aed4-f079-4132-9e19-16e9612c6d4f req-5704a8a4-4a17-4eb8-addd-f7ff8d2176dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:16 np0005466030 nova_compute[230518]: 2025-10-02 12:59:16.613 2 DEBUG oslo_concurrency.lockutils [req-34e6aed4-f079-4132-9e19-16e9612c6d4f req-5704a8a4-4a17-4eb8-addd-f7ff8d2176dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:16 np0005466030 nova_compute[230518]: 2025-10-02 12:59:16.613 2 DEBUG oslo_concurrency.lockutils [req-34e6aed4-f079-4132-9e19-16e9612c6d4f req-5704a8a4-4a17-4eb8-addd-f7ff8d2176dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:16 np0005466030 nova_compute[230518]: 2025-10-02 12:59:16.614 2 DEBUG oslo_concurrency.lockutils [req-34e6aed4-f079-4132-9e19-16e9612c6d4f req-5704a8a4-4a17-4eb8-addd-f7ff8d2176dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:16 np0005466030 nova_compute[230518]: 2025-10-02 12:59:16.614 2 DEBUG nova.compute.manager [req-34e6aed4-f079-4132-9e19-16e9612c6d4f req-5704a8a4-4a17-4eb8-addd-f7ff8d2176dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] No waiting events found dispatching network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:59:16 np0005466030 nova_compute[230518]: 2025-10-02 12:59:16.614 2 WARNING nova.compute.manager [req-34e6aed4-f079-4132-9e19-16e9612c6d4f req-5704a8a4-4a17-4eb8-addd-f7ff8d2176dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received unexpected event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 for instance with vm_state active and task_state None.#033[00m
Oct  2 08:59:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 08:59:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:16.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 08:59:17 np0005466030 nova_compute[230518]: 2025-10-02 12:59:17.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:17.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:18 np0005466030 nova_compute[230518]: 2025-10-02 12:59:18.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 08:59:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:18.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 08:59:19 np0005466030 nova_compute[230518]: 2025-10-02 12:59:19.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:19 np0005466030 nova_compute[230518]: 2025-10-02 12:59:19.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:19.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:19 np0005466030 nova_compute[230518]: 2025-10-02 12:59:19.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:20.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:21.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:22 np0005466030 nova_compute[230518]: 2025-10-02 12:59:22.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:22.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:59:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:23.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:59:23 np0005466030 nova_compute[230518]: 2025-10-02 12:59:23.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:24 np0005466030 nova_compute[230518]: 2025-10-02 12:59:24.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:24.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:25 np0005466030 nova_compute[230518]: 2025-10-02 12:59:25.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 08:59:25 np0005466030 nova_compute[230518]: 2025-10-02 12:59:25.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 08:59:25 np0005466030 nova_compute[230518]: 2025-10-02 12:59:25.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 08:59:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 08:59:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:25.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 08:59:25 np0005466030 nova_compute[230518]: 2025-10-02 12:59:25.804 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:59:25 np0005466030 nova_compute[230518]: 2025-10-02 12:59:25.805 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:59:25 np0005466030 nova_compute[230518]: 2025-10-02 12:59:25.805 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 08:59:25 np0005466030 nova_compute[230518]: 2025-10-02 12:59:25.806 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c8cc2f8f-7f89-4304-b071-1849f76cfda8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:59:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:25.955 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:25.956 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:25.958 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:26.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:59:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:27.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:59:27 np0005466030 ovn_controller[129257]: 2025-10-02T12:59:27Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:02:8c 10.100.0.11
Oct  2 08:59:28 np0005466030 nova_compute[230518]: 2025-10-02 12:59:28.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:28 np0005466030 nova_compute[230518]: 2025-10-02 12:59:28.891 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updating instance_info_cache with network_info: [{"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:59:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:28.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:29.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:29 np0005466030 nova_compute[230518]: 2025-10-02 12:59:29.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:29 np0005466030 nova_compute[230518]: 2025-10-02 12:59:29.825 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:59:29 np0005466030 nova_compute[230518]: 2025-10-02 12:59:29.826 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 08:59:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:59:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:30.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:59:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:59:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:31.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:59:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:32.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:59:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:33.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:59:33 np0005466030 nova_compute[230518]: 2025-10-02 12:59:33.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:33 np0005466030 nova_compute[230518]: 2025-10-02 12:59:33.494 2 INFO nova.compute.manager [None req-323ade18-e55c-483d-bcf3-66c5226433e2 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Get console output#033[00m
Oct  2 08:59:33 np0005466030 nova_compute[230518]: 2025-10-02 12:59:33.502 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 08:59:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:34 np0005466030 nova_compute[230518]: 2025-10-02 12:59:34.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:34.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:35.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.231 2 DEBUG nova.compute.manager [req-7a5a49b3-f04c-4cc2-899a-e69ca414bbfa req-60807b65-865a-44ec-a62f-1b0102f3e2c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-changed-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.232 2 DEBUG nova.compute.manager [req-7a5a49b3-f04c-4cc2-899a-e69ca414bbfa req-60807b65-865a-44ec-a62f-1b0102f3e2c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Refreshing instance network info cache due to event network-changed-bddb6509-7221-4ef0-bde7-be95b89ab6d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.232 2 DEBUG oslo_concurrency.lockutils [req-7a5a49b3-f04c-4cc2-899a-e69ca414bbfa req-60807b65-865a-44ec-a62f-1b0102f3e2c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.233 2 DEBUG oslo_concurrency.lockutils [req-7a5a49b3-f04c-4cc2-899a-e69ca414bbfa req-60807b65-865a-44ec-a62f-1b0102f3e2c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.233 2 DEBUG nova.network.neutron [req-7a5a49b3-f04c-4cc2-899a-e69ca414bbfa req-60807b65-865a-44ec-a62f-1b0102f3e2c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Refreshing network info cache for port bddb6509-7221-4ef0-bde7-be95b89ab6d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.459 2 DEBUG oslo_concurrency.lockutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.460 2 DEBUG oslo_concurrency.lockutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.460 2 DEBUG oslo_concurrency.lockutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.460 2 DEBUG oslo_concurrency.lockutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.461 2 DEBUG oslo_concurrency.lockutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.462 2 INFO nova.compute.manager [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Terminating instance#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.464 2 DEBUG nova.compute.manager [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 08:59:35 np0005466030 kernel: tapbddb6509-72 (unregistering): left promiscuous mode
Oct  2 08:59:35 np0005466030 NetworkManager[44960]: <info>  [1759409975.6539] device (tapbddb6509-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:35 np0005466030 ovn_controller[129257]: 2025-10-02T12:59:35Z|00683|binding|INFO|Releasing lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 from this chassis (sb_readonly=0)
Oct  2 08:59:35 np0005466030 ovn_controller[129257]: 2025-10-02T12:59:35Z|00684|binding|INFO|Setting lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 down in Southbound
Oct  2 08:59:35 np0005466030 ovn_controller[129257]: 2025-10-02T12:59:35Z|00685|binding|INFO|Removing iface tapbddb6509-72 ovn-installed in OVS
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:35.676 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:02:8c 10.100.0.11'], port_security=['fa:16:3e:49:02:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c8cc2f8f-7f89-4304-b071-1849f76cfda8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'baa35b76-b47d-4782-b3a5-4738baaa63f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=306c2335-d591-443e-b0c0-e2b4293c6e94, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bddb6509-7221-4ef0-bde7-be95b89ab6d8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 08:59:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:35.678 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bddb6509-7221-4ef0-bde7-be95b89ab6d8 in datapath 2b820c79-77a7-4936-8c6e-9c38d383ad1b unbound from our chassis#033[00m
Oct  2 08:59:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:35.682 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b820c79-77a7-4936-8c6e-9c38d383ad1b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 08:59:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:35.683 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[88e440d0-29b2-4578-b8fe-40864e1b85d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:35.684 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b namespace which is not needed anymore#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:35 np0005466030 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000a2.scope: Deactivated successfully.
Oct  2 08:59:35 np0005466030 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000a2.scope: Consumed 13.552s CPU time.
Oct  2 08:59:35 np0005466030 systemd-machined[188247]: Machine qemu-79-instance-000000a2 terminated.
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.908 2 INFO nova.virt.libvirt.driver [-] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Instance destroyed successfully.#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.909 2 DEBUG nova.objects.instance [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'resources' on Instance uuid c8cc2f8f-7f89-4304-b071-1849f76cfda8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.931 2 DEBUG nova.virt.libvirt.vif [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:58:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-44339413',display_name='tempest-TestNetworkAdvancedServerOps-server-44339413',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-44339413',id=162,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEsh1zp1STzuMIXDnbbRAXZcbmmzIocYDU4MIRfUpLuSUtHJodm49lJQYIod0ZNL2zezyn78o0X/6+GzIk9NqxEaJ1JvcNDOKeRMzQvHVSgS3twK5fXwCqcCv0gGhQyYWw==',key_name='tempest-TestNetworkAdvancedServerOps-862557079',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:58:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-6p95bfrs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:59:15Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=c8cc2f8f-7f89-4304-b071-1849f76cfda8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.931 2 DEBUG nova.network.os_vif_util [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.932 2 DEBUG nova.network.os_vif_util [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:02:8c,bridge_name='br-int',has_traffic_filtering=True,id=bddb6509-7221-4ef0-bde7-be95b89ab6d8,network=Network(2b820c79-77a7-4936-8c6e-9c38d383ad1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbddb6509-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.932 2 DEBUG os_vif [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:02:8c,bridge_name='br-int',has_traffic_filtering=True,id=bddb6509-7221-4ef0-bde7-be95b89ab6d8,network=Network(2b820c79-77a7-4936-8c6e-9c38d383ad1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbddb6509-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.934 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbddb6509-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:35 np0005466030 nova_compute[230518]: 2025-10-02 12:59:35.942 2 INFO os_vif [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:02:8c,bridge_name='br-int',has_traffic_filtering=True,id=bddb6509-7221-4ef0-bde7-be95b89ab6d8,network=Network(2b820c79-77a7-4936-8c6e-9c38d383ad1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbddb6509-72')#033[00m
Oct  2 08:59:36 np0005466030 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[294019]: [NOTICE]   (294024) : haproxy version is 2.8.14-c23fe91
Oct  2 08:59:36 np0005466030 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[294019]: [NOTICE]   (294024) : path to executable is /usr/sbin/haproxy
Oct  2 08:59:36 np0005466030 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[294019]: [WARNING]  (294024) : Exiting Master process...
Oct  2 08:59:36 np0005466030 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[294019]: [ALERT]    (294024) : Current worker (294026) exited with code 143 (Terminated)
Oct  2 08:59:36 np0005466030 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[294019]: [WARNING]  (294024) : All workers exited. Exiting... (0)
Oct  2 08:59:36 np0005466030 systemd[1]: libpod-6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0.scope: Deactivated successfully.
Oct  2 08:59:36 np0005466030 podman[294140]: 2025-10-02 12:59:36.217322666 +0000 UTC m=+0.412179338 container died 6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 08:59:36 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0-userdata-shm.mount: Deactivated successfully.
Oct  2 08:59:36 np0005466030 systemd[1]: var-lib-containers-storage-overlay-e2b1c7c3db0c4c0309707ba6b2e6ca3059d313e0d06a5d2858650b6979b1b343-merged.mount: Deactivated successfully.
Oct  2 08:59:36 np0005466030 nova_compute[230518]: 2025-10-02 12:59:36.853 2 DEBUG nova.compute.manager [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-unplugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:36 np0005466030 nova_compute[230518]: 2025-10-02 12:59:36.853 2 DEBUG oslo_concurrency.lockutils [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:36 np0005466030 nova_compute[230518]: 2025-10-02 12:59:36.853 2 DEBUG oslo_concurrency.lockutils [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:36 np0005466030 nova_compute[230518]: 2025-10-02 12:59:36.854 2 DEBUG oslo_concurrency.lockutils [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:36 np0005466030 nova_compute[230518]: 2025-10-02 12:59:36.854 2 DEBUG nova.compute.manager [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] No waiting events found dispatching network-vif-unplugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:59:36 np0005466030 nova_compute[230518]: 2025-10-02 12:59:36.854 2 DEBUG nova.compute.manager [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-unplugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 08:59:36 np0005466030 nova_compute[230518]: 2025-10-02 12:59:36.854 2 DEBUG nova.compute.manager [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:36 np0005466030 nova_compute[230518]: 2025-10-02 12:59:36.855 2 DEBUG oslo_concurrency.lockutils [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:36 np0005466030 nova_compute[230518]: 2025-10-02 12:59:36.855 2 DEBUG oslo_concurrency.lockutils [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:36 np0005466030 nova_compute[230518]: 2025-10-02 12:59:36.855 2 DEBUG oslo_concurrency.lockutils [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:36 np0005466030 nova_compute[230518]: 2025-10-02 12:59:36.855 2 DEBUG nova.compute.manager [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] No waiting events found dispatching network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 08:59:36 np0005466030 nova_compute[230518]: 2025-10-02 12:59:36.855 2 WARNING nova.compute.manager [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received unexpected event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 08:59:36 np0005466030 podman[294171]: 2025-10-02 12:59:36.870898874 +0000 UTC m=+0.891271685 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 08:59:36 np0005466030 podman[294163]: 2025-10-02 12:59:36.903122084 +0000 UTC m=+0.930751960 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  2 08:59:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:36.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:37 np0005466030 nova_compute[230518]: 2025-10-02 12:59:37.047 2 DEBUG nova.network.neutron [req-7a5a49b3-f04c-4cc2-899a-e69ca414bbfa req-60807b65-865a-44ec-a62f-1b0102f3e2c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updated VIF entry in instance network info cache for port bddb6509-7221-4ef0-bde7-be95b89ab6d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 08:59:37 np0005466030 nova_compute[230518]: 2025-10-02 12:59:37.048 2 DEBUG nova.network.neutron [req-7a5a49b3-f04c-4cc2-899a-e69ca414bbfa req-60807b65-865a-44ec-a62f-1b0102f3e2c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updating instance_info_cache with network_info: [{"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:59:37 np0005466030 nova_compute[230518]: 2025-10-02 12:59:37.091 2 DEBUG oslo_concurrency.lockutils [req-7a5a49b3-f04c-4cc2-899a-e69ca414bbfa req-60807b65-865a-44ec-a62f-1b0102f3e2c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 08:59:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:37.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:37 np0005466030 podman[294140]: 2025-10-02 12:59:37.468980358 +0000 UTC m=+1.663837010 container cleanup 6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 08:59:37 np0005466030 systemd[1]: libpod-conmon-6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0.scope: Deactivated successfully.
Oct  2 08:59:37 np0005466030 podman[294243]: 2025-10-02 12:59:37.914928813 +0000 UTC m=+0.426880614 container remove 6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 08:59:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:37.921 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9a502a36-79a0-4f1a-a952-05d65f6f6548]: (4, ('Thu Oct  2 12:59:35 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b (6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0)\n6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0\nThu Oct  2 12:59:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b (6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0)\n6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:37.922 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1f43f0f9-4cca-461d-bca9-496bda1f9034]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:37.923 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b820c79-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 08:59:37 np0005466030 kernel: tap2b820c79-70: left promiscuous mode
Oct  2 08:59:37 np0005466030 nova_compute[230518]: 2025-10-02 12:59:37.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:37 np0005466030 nova_compute[230518]: 2025-10-02 12:59:37.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:37.942 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5d32dca6-ce35-4dea-9d03-4d384000b512]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:37.973 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[72a73ca8-dd36-44dd-81d4-dd0ab514a54a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:37.974 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fa63e5df-e3c0-4c25-bcba-655407db4529]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:37.987 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[746e33e8-d232-4b98-9c63-c95c4ac74c33]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 781726, 'reachable_time': 35140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294261, 'error': None, 'target': 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:37.990 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 08:59:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 12:59:37.990 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[980ba492-04a9-48f4-be33-58039ce453bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 08:59:37 np0005466030 systemd[1]: run-netns-ovnmeta\x2d2b820c79\x2d77a7\x2d4936\x2d8c6e\x2d9c38d383ad1b.mount: Deactivated successfully.
Oct  2 08:59:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:38.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:39.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:39 np0005466030 nova_compute[230518]: 2025-10-02 12:59:39.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:39 np0005466030 nova_compute[230518]: 2025-10-02 12:59:39.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:39 np0005466030 nova_compute[230518]: 2025-10-02 12:59:39.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:40 np0005466030 nova_compute[230518]: 2025-10-02 12:59:40.106 2 INFO nova.virt.libvirt.driver [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Deleting instance files /var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8_del#033[00m
Oct  2 08:59:40 np0005466030 nova_compute[230518]: 2025-10-02 12:59:40.107 2 INFO nova.virt.libvirt.driver [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Deletion of /var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8_del complete#033[00m
Oct  2 08:59:40 np0005466030 nova_compute[230518]: 2025-10-02 12:59:40.206 2 INFO nova.compute.manager [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Took 4.74 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 08:59:40 np0005466030 nova_compute[230518]: 2025-10-02 12:59:40.206 2 DEBUG oslo.service.loopingcall [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 08:59:40 np0005466030 nova_compute[230518]: 2025-10-02 12:59:40.206 2 DEBUG nova.compute.manager [-] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 08:59:40 np0005466030 nova_compute[230518]: 2025-10-02 12:59:40.206 2 DEBUG nova.network.neutron [-] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 08:59:40 np0005466030 nova_compute[230518]: 2025-10-02 12:59:40.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:40.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:41.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:42 np0005466030 nova_compute[230518]: 2025-10-02 12:59:42.329 2 DEBUG nova.network.neutron [-] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 08:59:42 np0005466030 nova_compute[230518]: 2025-10-02 12:59:42.369 2 INFO nova.compute.manager [-] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Took 2.16 seconds to deallocate network for instance.#033[00m
Oct  2 08:59:42 np0005466030 nova_compute[230518]: 2025-10-02 12:59:42.435 2 DEBUG oslo_concurrency.lockutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 08:59:42 np0005466030 nova_compute[230518]: 2025-10-02 12:59:42.435 2 DEBUG oslo_concurrency.lockutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 08:59:42 np0005466030 nova_compute[230518]: 2025-10-02 12:59:42.488 2 DEBUG nova.compute.manager [req-fadd172d-00c5-4efd-a04c-34a02ce2acce req-31e59711-0789-49d9-a930-86ec4d4d5f1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-deleted-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 08:59:42 np0005466030 nova_compute[230518]: 2025-10-02 12:59:42.500 2 DEBUG oslo_concurrency.processutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 08:59:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:42.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 08:59:43 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3512397272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 08:59:43 np0005466030 nova_compute[230518]: 2025-10-02 12:59:43.032 2 DEBUG oslo_concurrency.processutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 08:59:43 np0005466030 nova_compute[230518]: 2025-10-02 12:59:43.041 2 DEBUG nova.compute.provider_tree [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 08:59:43 np0005466030 nova_compute[230518]: 2025-10-02 12:59:43.067 2 DEBUG nova.scheduler.client.report [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 08:59:43 np0005466030 nova_compute[230518]: 2025-10-02 12:59:43.129 2 DEBUG oslo_concurrency.lockutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:59:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:43.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:59:43 np0005466030 nova_compute[230518]: 2025-10-02 12:59:43.185 2 INFO nova.scheduler.client.report [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Deleted allocations for instance c8cc2f8f-7f89-4304-b071-1849f76cfda8#033[00m
Oct  2 08:59:43 np0005466030 nova_compute[230518]: 2025-10-02 12:59:43.366 2 DEBUG oslo_concurrency.lockutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 08:59:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:44 np0005466030 nova_compute[230518]: 2025-10-02 12:59:44.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:59:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:44.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:59:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:45.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:45 np0005466030 nova_compute[230518]: 2025-10-02 12:59:45.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:46 np0005466030 podman[294286]: 2025-10-02 12:59:46.836832066 +0000 UTC m=+0.086497279 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct  2 08:59:46 np0005466030 podman[294287]: 2025-10-02 12:59:46.843592856 +0000 UTC m=+0.088071357 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 08:59:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:46.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:47.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:48.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:49.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:49 np0005466030 nova_compute[230518]: 2025-10-02 12:59:49.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:50 np0005466030 nova_compute[230518]: 2025-10-02 12:59:50.907 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409975.9059489, c8cc2f8f-7f89-4304-b071-1849f76cfda8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 08:59:50 np0005466030 nova_compute[230518]: 2025-10-02 12:59:50.907 2 INFO nova.compute.manager [-] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] VM Stopped (Lifecycle Event)#033[00m
Oct  2 08:59:50 np0005466030 nova_compute[230518]: 2025-10-02 12:59:50.931 2 DEBUG nova.compute.manager [None req-734be1ab-a321-43c2-8556-1c34453f4bf0 - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 08:59:50 np0005466030 nova_compute[230518]: 2025-10-02 12:59:50.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:51.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:51.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:53.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:53.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:54 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Oct  2 08:59:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:54 np0005466030 nova_compute[230518]: 2025-10-02 12:59:54.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:59:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:55.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:59:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 08:59:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:55.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 08:59:55 np0005466030 nova_compute[230518]: 2025-10-02 12:59:55.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 08:59:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:57.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:57.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:59.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 08:59:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 08:59:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:59.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 08:59:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 08:59:59 np0005466030 nova_compute[230518]: 2025-10-02 12:59:59.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:00 np0005466030 ceph-mon[80926]: overall HEALTH_OK
Oct  2 09:00:00 np0005466030 nova_compute[230518]: 2025-10-02 13:00:00.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:00:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:01.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:00:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:01.107 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:00:01 np0005466030 nova_compute[230518]: 2025-10-02 13:00:01.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:01.109 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:00:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:01.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:03.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:00:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:03.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:00:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:04.112 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:04 np0005466030 nova_compute[230518]: 2025-10-02 13:00:04.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:00:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:05.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:00:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:05.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:05 np0005466030 nova_compute[230518]: 2025-10-02 13:00:05.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:06 np0005466030 nova_compute[230518]: 2025-10-02 13:00:06.316 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:06 np0005466030 nova_compute[230518]: 2025-10-02 13:00:06.317 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:06 np0005466030 nova_compute[230518]: 2025-10-02 13:00:06.347 2 DEBUG nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:00:06 np0005466030 nova_compute[230518]: 2025-10-02 13:00:06.425 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:06 np0005466030 nova_compute[230518]: 2025-10-02 13:00:06.426 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:06 np0005466030 nova_compute[230518]: 2025-10-02 13:00:06.433 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:00:06 np0005466030 nova_compute[230518]: 2025-10-02 13:00:06.433 2 INFO nova.compute.claims [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:00:06 np0005466030 nova_compute[230518]: 2025-10-02 13:00:06.619 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:00:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:07.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:00:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:00:07 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1204448448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.154 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.162 2 DEBUG nova.compute.provider_tree [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.176 2 DEBUG nova.scheduler.client.report [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.201 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.202 2 DEBUG nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:00:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:07.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.249 2 DEBUG nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.249 2 DEBUG nova.network.neutron [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.267 2 INFO nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.284 2 DEBUG nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.370 2 DEBUG nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.371 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.372 2 INFO nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Creating image(s)#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.402 2 DEBUG nova.storage.rbd_utils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.439 2 DEBUG nova.storage.rbd_utils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.471 2 DEBUG nova.storage.rbd_utils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.476 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.549 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.551 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.552 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.552 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.575 2 DEBUG nova.storage.rbd_utils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.578 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:07 np0005466030 podman[294444]: 2025-10-02 13:00:07.824334854 +0000 UTC m=+0.066880459 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Oct  2 09:00:07 np0005466030 podman[294443]: 2025-10-02 13:00:07.850646282 +0000 UTC m=+0.096251952 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  2 09:00:07 np0005466030 nova_compute[230518]: 2025-10-02 13:00:07.998 2 DEBUG nova.policy [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:00:08 np0005466030 nova_compute[230518]: 2025-10-02 13:00:08.110 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:08 np0005466030 nova_compute[230518]: 2025-10-02 13:00:08.221 2 DEBUG nova.storage.rbd_utils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:00:08 np0005466030 nova_compute[230518]: 2025-10-02 13:00:08.490 2 DEBUG nova.objects.instance [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid 11d532af-2778-4065-8cf4-f2f53d3dbb1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:00:08 np0005466030 nova_compute[230518]: 2025-10-02 13:00:08.508 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:00:08 np0005466030 nova_compute[230518]: 2025-10-02 13:00:08.508 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Ensure instance console log exists: /var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:00:08 np0005466030 nova_compute[230518]: 2025-10-02 13:00:08.509 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:08 np0005466030 nova_compute[230518]: 2025-10-02 13:00:08.509 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:08 np0005466030 nova_compute[230518]: 2025-10-02 13:00:08.509 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:09.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:09.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:09 np0005466030 nova_compute[230518]: 2025-10-02 13:00:09.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:10 np0005466030 nova_compute[230518]: 2025-10-02 13:00:10.031 2 DEBUG nova.network.neutron [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Successfully updated port: 6b57c5d9-dd16-427a-85c2-c02dedb41e29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:00:10 np0005466030 nova_compute[230518]: 2025-10-02 13:00:10.046 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-11d532af-2778-4065-8cf4-f2f53d3dbb1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:00:10 np0005466030 nova_compute[230518]: 2025-10-02 13:00:10.046 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-11d532af-2778-4065-8cf4-f2f53d3dbb1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:00:10 np0005466030 nova_compute[230518]: 2025-10-02 13:00:10.046 2 DEBUG nova.network.neutron [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:00:10 np0005466030 nova_compute[230518]: 2025-10-02 13:00:10.162 2 DEBUG nova.compute.manager [req-939663b8-6f70-43ce-bb73-99778feed713 req-ff620870-9fee-448f-a35c-3786a5bf5a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Received event network-changed-6b57c5d9-dd16-427a-85c2-c02dedb41e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:10 np0005466030 nova_compute[230518]: 2025-10-02 13:00:10.163 2 DEBUG nova.compute.manager [req-939663b8-6f70-43ce-bb73-99778feed713 req-ff620870-9fee-448f-a35c-3786a5bf5a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Refreshing instance network info cache due to event network-changed-6b57c5d9-dd16-427a-85c2-c02dedb41e29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:00:10 np0005466030 nova_compute[230518]: 2025-10-02 13:00:10.163 2 DEBUG oslo_concurrency.lockutils [req-939663b8-6f70-43ce-bb73-99778feed713 req-ff620870-9fee-448f-a35c-3786a5bf5a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-11d532af-2778-4065-8cf4-f2f53d3dbb1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:00:10 np0005466030 nova_compute[230518]: 2025-10-02 13:00:10.582 2 DEBUG nova.network.neutron [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:00:10 np0005466030 nova_compute[230518]: 2025-10-02 13:00:10.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:11.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:11.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.634 2 DEBUG nova.network.neutron [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Updating instance_info_cache with network_info: [{"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.670 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-11d532af-2778-4065-8cf4-f2f53d3dbb1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.671 2 DEBUG nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Instance network_info: |[{"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.671 2 DEBUG oslo_concurrency.lockutils [req-939663b8-6f70-43ce-bb73-99778feed713 req-ff620870-9fee-448f-a35c-3786a5bf5a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-11d532af-2778-4065-8cf4-f2f53d3dbb1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.671 2 DEBUG nova.network.neutron [req-939663b8-6f70-43ce-bb73-99778feed713 req-ff620870-9fee-448f-a35c-3786a5bf5a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Refreshing network info cache for port 6b57c5d9-dd16-427a-85c2-c02dedb41e29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.674 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Start _get_guest_xml network_info=[{"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.679 2 WARNING nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.683 2 DEBUG nova.virt.libvirt.host [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.684 2 DEBUG nova.virt.libvirt.host [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.688 2 DEBUG nova.virt.libvirt.host [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.688 2 DEBUG nova.virt.libvirt.host [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.690 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.691 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.691 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.692 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.692 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.693 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.693 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.694 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.694 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.695 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.695 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.695 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:00:11 np0005466030 nova_compute[230518]: 2025-10-02 13:00:11.700 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.083 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.083 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.084 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.084 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.084 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:00:12 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3162448280' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.130 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.166 2 DEBUG nova.storage.rbd_utils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.170 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:00:12 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/376091757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:00:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:00:12 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/604262924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.579 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.582 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.583 2 DEBUG nova.virt.libvirt.vif [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:00:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-109200331',display_name='tempest-TestNetworkBasicOps-server-109200331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-109200331',id=164,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOyVsgu/K1Be3T6lU80AA5key24FWBwYCD9vm40G5BNpftGNMWHfF5cy51qUgLzMoZT/j2dR3TucUmm1S5UEzlRAZFpDfO//FNaDZljlZXVXY30xYKCpB4GbuFcySY9mIg==',key_name='tempest-TestNetworkBasicOps-433983614',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-6r3anh17',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:00:07Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=11d532af-2778-4065-8cf4-f2f53d3dbb1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.584 2 DEBUG nova.network.os_vif_util [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.584 2 DEBUG nova.network.os_vif_util [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.585 2 DEBUG nova.objects.instance [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid 11d532af-2778-4065-8cf4-f2f53d3dbb1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.647 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:00:12 np0005466030 nova_compute[230518]:  <uuid>11d532af-2778-4065-8cf4-f2f53d3dbb1c</uuid>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:  <name>instance-000000a4</name>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <nova:name>tempest-TestNetworkBasicOps-server-109200331</nova:name>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:00:11</nova:creationTime>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:00:12 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:        <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:        <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:        <nova:port uuid="6b57c5d9-dd16-427a-85c2-c02dedb41e29">
Oct  2 09:00:12 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <entry name="serial">11d532af-2778-4065-8cf4-f2f53d3dbb1c</entry>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <entry name="uuid">11d532af-2778-4065-8cf4-f2f53d3dbb1c</entry>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk">
Oct  2 09:00:12 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:00:12 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk.config">
Oct  2 09:00:12 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:00:12 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:6b:44:86"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <target dev="tap6b57c5d9-dd"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c/console.log" append="off"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:00:12 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:00:12 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:00:12 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:00:12 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.648 2 DEBUG nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Preparing to wait for external event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.649 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.649 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.650 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.651 2 DEBUG nova.virt.libvirt.vif [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:00:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-109200331',display_name='tempest-TestNetworkBasicOps-server-109200331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-109200331',id=164,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOyVsgu/K1Be3T6lU80AA5key24FWBwYCD9vm40G5BNpftGNMWHfF5cy51qUgLzMoZT/j2dR3TucUmm1S5UEzlRAZFpDfO//FNaDZljlZXVXY30xYKCpB4GbuFcySY9mIg==',key_name='tempest-TestNetworkBasicOps-433983614',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-6r3anh17',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:00:07Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=11d532af-2778-4065-8cf4-f2f53d3dbb1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.652 2 DEBUG nova.network.os_vif_util [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.653 2 DEBUG nova.network.os_vif_util [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.654 2 DEBUG os_vif [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.656 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.657 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.663 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b57c5d9-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.664 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b57c5d9-dd, col_values=(('external_ids', {'iface-id': '6b57c5d9-dd16-427a-85c2-c02dedb41e29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:44:86', 'vm-uuid': '11d532af-2778-4065-8cf4-f2f53d3dbb1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:12 np0005466030 NetworkManager[44960]: <info>  [1759410012.6674] manager: (tap6b57c5d9-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/320)
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.673 2 INFO os_vif [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd')#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.730 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.730 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.731 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:6b:44:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.732 2 INFO nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Using config drive#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.766 2 DEBUG nova.storage.rbd_utils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.857 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.858 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4291MB free_disk=20.921802520751953GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.858 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.859 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.941 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 11d532af-2778-4065-8cf4-f2f53d3dbb1c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.942 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.942 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:00:12 np0005466030 nova_compute[230518]: 2025-10-02 13:00:12.989 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:00:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:13.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:00:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:13.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:00:13 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2870876322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:00:13 np0005466030 nova_compute[230518]: 2025-10-02 13:00:13.439 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:13 np0005466030 nova_compute[230518]: 2025-10-02 13:00:13.448 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:00:13 np0005466030 nova_compute[230518]: 2025-10-02 13:00:13.487 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:00:13 np0005466030 nova_compute[230518]: 2025-10-02 13:00:13.529 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:00:13 np0005466030 nova_compute[230518]: 2025-10-02 13:00:13.530 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:13 np0005466030 nova_compute[230518]: 2025-10-02 13:00:13.535 2 DEBUG nova.network.neutron [req-939663b8-6f70-43ce-bb73-99778feed713 req-ff620870-9fee-448f-a35c-3786a5bf5a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Updated VIF entry in instance network info cache for port 6b57c5d9-dd16-427a-85c2-c02dedb41e29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:00:13 np0005466030 nova_compute[230518]: 2025-10-02 13:00:13.536 2 DEBUG nova.network.neutron [req-939663b8-6f70-43ce-bb73-99778feed713 req-ff620870-9fee-448f-a35c-3786a5bf5a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Updating instance_info_cache with network_info: [{"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:00:13 np0005466030 nova_compute[230518]: 2025-10-02 13:00:13.569 2 INFO nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Creating config drive at /var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c/disk.config#033[00m
Oct  2 09:00:13 np0005466030 nova_compute[230518]: 2025-10-02 13:00:13.578 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpauotot11 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:13 np0005466030 nova_compute[230518]: 2025-10-02 13:00:13.628 2 DEBUG oslo_concurrency.lockutils [req-939663b8-6f70-43ce-bb73-99778feed713 req-ff620870-9fee-448f-a35c-3786a5bf5a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-11d532af-2778-4065-8cf4-f2f53d3dbb1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:00:13 np0005466030 nova_compute[230518]: 2025-10-02 13:00:13.741 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpauotot11" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:13 np0005466030 nova_compute[230518]: 2025-10-02 13:00:13.790 2 DEBUG nova.storage.rbd_utils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:13 np0005466030 nova_compute[230518]: 2025-10-02 13:00:13.797 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c/disk.config 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:14 np0005466030 nova_compute[230518]: 2025-10-02 13:00:14.009 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c/disk.config 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:14 np0005466030 nova_compute[230518]: 2025-10-02 13:00:14.010 2 INFO nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Deleting local config drive /var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c/disk.config because it was imported into RBD.#033[00m
Oct  2 09:00:14 np0005466030 kernel: tap6b57c5d9-dd: entered promiscuous mode
Oct  2 09:00:14 np0005466030 NetworkManager[44960]: <info>  [1759410014.0799] manager: (tap6b57c5d9-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/321)
Oct  2 09:00:14 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:14Z|00686|binding|INFO|Claiming lport 6b57c5d9-dd16-427a-85c2-c02dedb41e29 for this chassis.
Oct  2 09:00:14 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:14Z|00687|binding|INFO|6b57c5d9-dd16-427a-85c2-c02dedb41e29: Claiming fa:16:3e:6b:44:86 10.100.0.10
Oct  2 09:00:14 np0005466030 nova_compute[230518]: 2025-10-02 13:00:14.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.093 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:44:86 10.100.0.10'], port_security=['fa:16:3e:6b:44:86 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1074515355', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '11d532af-2778-4065-8cf4-f2f53d3dbb1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1074515355', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f9e51548-d675-4462-aaf0-72519e827667', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a001cef-b85b-4c88-a329-8db2a6ee024d, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6b57c5d9-dd16-427a-85c2-c02dedb41e29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.095 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6b57c5d9-dd16-427a-85c2-c02dedb41e29 in datapath 2dacd3c2-a76f-4896-a922-fdbbab78ce12 bound to our chassis#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.097 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2dacd3c2-a76f-4896-a922-fdbbab78ce12#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.110 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c347a301-81b5-4e7a-b74b-03f2c0d7ff55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.111 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2dacd3c2-a1 in ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.113 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2dacd3c2-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.113 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2d7149-b253-4148-9904-c6d6a0aff839]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.114 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[df3f7742-d1d2-41a9-bdeb-f9e1824c2ec1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:14 np0005466030 systemd-udevd[294740]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:00:14 np0005466030 systemd-machined[188247]: New machine qemu-80-instance-000000a4.
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.129 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[0b619768-0c5a-4bbd-81ee-f52801d69c5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:14 np0005466030 NetworkManager[44960]: <info>  [1759410014.1339] device (tap6b57c5d9-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:00:14 np0005466030 NetworkManager[44960]: <info>  [1759410014.1353] device (tap6b57c5d9-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.155 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9c5192-3040-41f9-9652-6389089ce003]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:14 np0005466030 nova_compute[230518]: 2025-10-02 13:00:14.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:14 np0005466030 systemd[1]: Started Virtual Machine qemu-80-instance-000000a4.
Oct  2 09:00:14 np0005466030 nova_compute[230518]: 2025-10-02 13:00:14.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:14 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:14Z|00688|binding|INFO|Setting lport 6b57c5d9-dd16-427a-85c2-c02dedb41e29 ovn-installed in OVS
Oct  2 09:00:14 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:14Z|00689|binding|INFO|Setting lport 6b57c5d9-dd16-427a-85c2-c02dedb41e29 up in Southbound
Oct  2 09:00:14 np0005466030 nova_compute[230518]: 2025-10-02 13:00:14.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.189 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b31bd5f2-5b03-4fe2-9c71-899345e6c45f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.193 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[671f5cb4-0fa0-47d7-b8a2-36f9fe7d453c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:14 np0005466030 NetworkManager[44960]: <info>  [1759410014.1946] manager: (tap2dacd3c2-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/322)
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.229 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[972df981-21de-4f3f-943a-23428519ac44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.233 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[24ec1675-4e2c-42e8-b2a7-2ce84f33451c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:14 np0005466030 NetworkManager[44960]: <info>  [1759410014.2561] device (tap2dacd3c2-a0): carrier: link connected
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.263 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[79101ddc-ab4a-4124-a22e-a6bea87a305c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.286 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ae213d0f-0a69-4f59-8071-6e391dd8dfa4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2dacd3c2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:7e:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 212], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787781, 'reachable_time': 34970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294773, 'error': None, 'target': 'ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.306 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ec790238-84de-4969-8796-42a9a751b525]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe66:7e5f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 787781, 'tstamp': 787781}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294774, 'error': None, 'target': 'ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.323 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[faec8ba3-afd1-41be-ae15-afefda879ad5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2dacd3c2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:7e:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 212], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787781, 'reachable_time': 34970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294775, 'error': None, 'target': 'ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.349 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[56b88836-cb03-41b4-9cb0-63702c3ea5e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.422 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9ed406c8-011c-4527-9eab-8d979c295695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.424 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2dacd3c2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.424 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.425 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2dacd3c2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:14 np0005466030 kernel: tap2dacd3c2-a0: entered promiscuous mode
Oct  2 09:00:14 np0005466030 nova_compute[230518]: 2025-10-02 13:00:14.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:14 np0005466030 NetworkManager[44960]: <info>  [1759410014.4275] manager: (tap2dacd3c2-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/323)
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.429 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2dacd3c2-a0, col_values=(('external_ids', {'iface-id': '563b4b62-2487-404e-81e1-f7d5b24fae89'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:14 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:14Z|00690|binding|INFO|Releasing lport 563b4b62-2487-404e-81e1-f7d5b24fae89 from this chassis (sb_readonly=0)
Oct  2 09:00:14 np0005466030 nova_compute[230518]: 2025-10-02 13:00:14.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:14 np0005466030 nova_compute[230518]: 2025-10-02 13:00:14.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.434 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2dacd3c2-a76f-4896-a922-fdbbab78ce12.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2dacd3c2-a76f-4896-a922-fdbbab78ce12.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.435 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[192e09a6-3ecf-4204-807e-d75e8bf23015]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.435 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-2dacd3c2-a76f-4896-a922-fdbbab78ce12
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/2dacd3c2-a76f-4896-a922-fdbbab78ce12.pid.haproxy
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 2dacd3c2-a76f-4896-a922-fdbbab78ce12
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:00:14 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.436 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'env', 'PROCESS_TAG=haproxy-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2dacd3c2-a76f-4896-a922-fdbbab78ce12.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:00:14 np0005466030 nova_compute[230518]: 2025-10-02 13:00:14.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:14 np0005466030 nova_compute[230518]: 2025-10-02 13:00:14.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:14 np0005466030 podman[294959]: 2025-10-02 13:00:14.807756988 +0000 UTC m=+0.044542176 container create fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  2 09:00:14 np0005466030 systemd[1]: Started libpod-conmon-fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a.scope.
Oct  2 09:00:14 np0005466030 systemd[1]: Started libcrun container.
Oct  2 09:00:14 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd525238fd6b0eee5e5f3a7ce21043fc7136395e02b913b6624af579e8cb88e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:00:14 np0005466030 podman[294959]: 2025-10-02 13:00:14.782891645 +0000 UTC m=+0.019676853 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:00:14 np0005466030 podman[294959]: 2025-10-02 13:00:14.891383455 +0000 UTC m=+0.128168673 container init fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 09:00:14 np0005466030 nova_compute[230518]: 2025-10-02 13:00:14.894 2 DEBUG nova.compute.manager [req-93df1f47-7a78-4424-aebc-6378ac510cf0 req-83d9e8e8-09f1-497a-bba6-af7844cfa262 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Received event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:14 np0005466030 nova_compute[230518]: 2025-10-02 13:00:14.894 2 DEBUG oslo_concurrency.lockutils [req-93df1f47-7a78-4424-aebc-6378ac510cf0 req-83d9e8e8-09f1-497a-bba6-af7844cfa262 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:14 np0005466030 nova_compute[230518]: 2025-10-02 13:00:14.895 2 DEBUG oslo_concurrency.lockutils [req-93df1f47-7a78-4424-aebc-6378ac510cf0 req-83d9e8e8-09f1-497a-bba6-af7844cfa262 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:14 np0005466030 nova_compute[230518]: 2025-10-02 13:00:14.895 2 DEBUG oslo_concurrency.lockutils [req-93df1f47-7a78-4424-aebc-6378ac510cf0 req-83d9e8e8-09f1-497a-bba6-af7844cfa262 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:14 np0005466030 nova_compute[230518]: 2025-10-02 13:00:14.896 2 DEBUG nova.compute.manager [req-93df1f47-7a78-4424-aebc-6378ac510cf0 req-83d9e8e8-09f1-497a-bba6-af7844cfa262 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Processing event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:00:14 np0005466030 podman[294959]: 2025-10-02 13:00:14.897612559 +0000 UTC m=+0.134397747 container start fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 09:00:14 np0005466030 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[294976]: [NOTICE]   (294982) : New worker (294986) forked
Oct  2 09:00:14 np0005466030 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[294976]: [NOTICE]   (294982) : Loading success.
Oct  2 09:00:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:15.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:15.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.233 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410015.2335742, 11d532af-2778-4065-8cf4-f2f53d3dbb1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.234 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] VM Started (Lifecycle Event)#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.236 2 DEBUG nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.240 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.243 2 INFO nova.virt.libvirt.driver [-] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Instance spawned successfully.#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.243 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.258 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.263 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.266 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.266 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.267 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.267 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.268 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.268 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.290 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.290 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410015.2336886, 11d532af-2778-4065-8cf4-f2f53d3dbb1c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.290 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.314 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.317 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410015.238941, 11d532af-2778-4065-8cf4-f2f53d3dbb1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.317 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.327 2 INFO nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Took 7.96 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.328 2 DEBUG nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.336 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.340 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.370 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.391 2 INFO nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Took 8.99 seconds to build instance.#033[00m
Oct  2 09:00:15 np0005466030 nova_compute[230518]: 2025-10-02 13:00:15.411 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:15 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:00:15 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 09:00:15 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:00:15 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 09:00:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 09:00:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:00:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:00:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:00:17 np0005466030 nova_compute[230518]: 2025-10-02 13:00:17.028 2 DEBUG nova.compute.manager [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Received event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:17 np0005466030 nova_compute[230518]: 2025-10-02 13:00:17.029 2 DEBUG oslo_concurrency.lockutils [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:17 np0005466030 nova_compute[230518]: 2025-10-02 13:00:17.029 2 DEBUG oslo_concurrency.lockutils [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:17 np0005466030 nova_compute[230518]: 2025-10-02 13:00:17.030 2 DEBUG oslo_concurrency.lockutils [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:17 np0005466030 nova_compute[230518]: 2025-10-02 13:00:17.030 2 DEBUG nova.compute.manager [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] No waiting events found dispatching network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:00:17 np0005466030 nova_compute[230518]: 2025-10-02 13:00:17.030 2 WARNING nova.compute.manager [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Received unexpected event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:00:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:17.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:00:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:17.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:00:17 np0005466030 nova_compute[230518]: 2025-10-02 13:00:17.526 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:17 np0005466030 nova_compute[230518]: 2025-10-02 13:00:17.527 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:17 np0005466030 nova_compute[230518]: 2025-10-02 13:00:17.527 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:17 np0005466030 nova_compute[230518]: 2025-10-02 13:00:17.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:17 np0005466030 podman[295010]: 2025-10-02 13:00:17.818980499 +0000 UTC m=+0.063508354 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:00:17 np0005466030 podman[295009]: 2025-10-02 13:00:17.819155674 +0000 UTC m=+0.063795142 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid)
Oct  2 09:00:18 np0005466030 nova_compute[230518]: 2025-10-02 13:00:18.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:18 np0005466030 nova_compute[230518]: 2025-10-02 13:00:18.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:00:18 np0005466030 nova_compute[230518]: 2025-10-02 13:00:18.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:18 np0005466030 NetworkManager[44960]: <info>  [1759410018.5541] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Oct  2 09:00:18 np0005466030 NetworkManager[44960]: <info>  [1759410018.5554] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/325)
Oct  2 09:00:18 np0005466030 nova_compute[230518]: 2025-10-02 13:00:18.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:18 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:18Z|00691|binding|INFO|Releasing lport 563b4b62-2487-404e-81e1-f7d5b24fae89 from this chassis (sb_readonly=0)
Oct  2 09:00:18 np0005466030 nova_compute[230518]: 2025-10-02 13:00:18.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.025 2 DEBUG nova.compute.manager [req-85259286-a608-44ca-be7c-eb5dec9e8469 req-20d97250-7f46-4d2b-b5ae-363ed1135ade 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Received event network-changed-6b57c5d9-dd16-427a-85c2-c02dedb41e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.025 2 DEBUG nova.compute.manager [req-85259286-a608-44ca-be7c-eb5dec9e8469 req-20d97250-7f46-4d2b-b5ae-363ed1135ade 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Refreshing instance network info cache due to event network-changed-6b57c5d9-dd16-427a-85c2-c02dedb41e29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.026 2 DEBUG oslo_concurrency.lockutils [req-85259286-a608-44ca-be7c-eb5dec9e8469 req-20d97250-7f46-4d2b-b5ae-363ed1135ade 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-11d532af-2778-4065-8cf4-f2f53d3dbb1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.026 2 DEBUG oslo_concurrency.lockutils [req-85259286-a608-44ca-be7c-eb5dec9e8469 req-20d97250-7f46-4d2b-b5ae-363ed1135ade 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-11d532af-2778-4065-8cf4-f2f53d3dbb1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.026 2 DEBUG nova.network.neutron [req-85259286-a608-44ca-be7c-eb5dec9e8469 req-20d97250-7f46-4d2b-b5ae-363ed1135ade 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Refreshing network info cache for port 6b57c5d9-dd16-427a-85c2-c02dedb41e29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:00:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:19.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.196 2 DEBUG oslo_concurrency.lockutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.197 2 DEBUG oslo_concurrency.lockutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.197 2 DEBUG oslo_concurrency.lockutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.199 2 DEBUG oslo_concurrency.lockutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.200 2 DEBUG oslo_concurrency.lockutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.201 2 INFO nova.compute.manager [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Terminating instance#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.202 2 DEBUG nova.compute.manager [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:00:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:19.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:19 np0005466030 kernel: tap6b57c5d9-dd (unregistering): left promiscuous mode
Oct  2 09:00:19 np0005466030 NetworkManager[44960]: <info>  [1759410019.3983] device (tap6b57c5d9-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:00:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:19 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:19Z|00692|binding|INFO|Releasing lport 6b57c5d9-dd16-427a-85c2-c02dedb41e29 from this chassis (sb_readonly=0)
Oct  2 09:00:19 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:19Z|00693|binding|INFO|Setting lport 6b57c5d9-dd16-427a-85c2-c02dedb41e29 down in Southbound
Oct  2 09:00:19 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:19Z|00694|binding|INFO|Removing iface tap6b57c5d9-dd ovn-installed in OVS
Oct  2 09:00:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.415 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:44:86 10.100.0.10'], port_security=['fa:16:3e:6b:44:86 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1074515355', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '11d532af-2778-4065-8cf4-f2f53d3dbb1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1074515355', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f9e51548-d675-4462-aaf0-72519e827667', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a001cef-b85b-4c88-a329-8db2a6ee024d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6b57c5d9-dd16-427a-85c2-c02dedb41e29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:00:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.416 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6b57c5d9-dd16-427a-85c2-c02dedb41e29 in datapath 2dacd3c2-a76f-4896-a922-fdbbab78ce12 unbound from our chassis#033[00m
Oct  2 09:00:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.417 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2dacd3c2-a76f-4896-a922-fdbbab78ce12, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:00:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.419 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[99218535-59bf-48bf-a99d-beff3500d7d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.423 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12 namespace which is not needed anymore#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:19 np0005466030 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000a4.scope: Deactivated successfully.
Oct  2 09:00:19 np0005466030 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000a4.scope: Consumed 5.082s CPU time.
Oct  2 09:00:19 np0005466030 systemd-machined[188247]: Machine qemu-80-instance-000000a4 terminated.
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:19 np0005466030 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[294976]: [NOTICE]   (294982) : haproxy version is 2.8.14-c23fe91
Oct  2 09:00:19 np0005466030 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[294976]: [NOTICE]   (294982) : path to executable is /usr/sbin/haproxy
Oct  2 09:00:19 np0005466030 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[294976]: [WARNING]  (294982) : Exiting Master process...
Oct  2 09:00:19 np0005466030 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[294976]: [WARNING]  (294982) : Exiting Master process...
Oct  2 09:00:19 np0005466030 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[294976]: [ALERT]    (294982) : Current worker (294986) exited with code 143 (Terminated)
Oct  2 09:00:19 np0005466030 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[294976]: [WARNING]  (294982) : All workers exited. Exiting... (0)
Oct  2 09:00:19 np0005466030 systemd[1]: libpod-fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a.scope: Deactivated successfully.
Oct  2 09:00:19 np0005466030 podman[295071]: 2025-10-02 13:00:19.56822333 +0000 UTC m=+0.045756973 container died fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:00:19 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a-userdata-shm.mount: Deactivated successfully.
Oct  2 09:00:19 np0005466030 systemd[1]: var-lib-containers-storage-overlay-7cd525238fd6b0eee5e5f3a7ce21043fc7136395e02b913b6624af579e8cb88e-merged.mount: Deactivated successfully.
Oct  2 09:00:19 np0005466030 podman[295071]: 2025-10-02 13:00:19.615380305 +0000 UTC m=+0.092913948 container cleanup fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:00:19 np0005466030 systemd[1]: libpod-conmon-fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a.scope: Deactivated successfully.
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.639 2 INFO nova.virt.libvirt.driver [-] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Instance destroyed successfully.#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.639 2 DEBUG nova.objects.instance [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid 11d532af-2778-4065-8cf4-f2f53d3dbb1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.671 2 DEBUG nova.virt.libvirt.vif [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:00:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-109200331',display_name='tempest-TestNetworkBasicOps-server-109200331',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-109200331',id=164,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOyVsgu/K1Be3T6lU80AA5key24FWBwYCD9vm40G5BNpftGNMWHfF5cy51qUgLzMoZT/j2dR3TucUmm1S5UEzlRAZFpDfO//FNaDZljlZXVXY30xYKCpB4GbuFcySY9mIg==',key_name='tempest-TestNetworkBasicOps-433983614',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:00:15Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-6r3anh17',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:00:15Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=11d532af-2778-4065-8cf4-f2f53d3dbb1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.672 2 DEBUG nova.network.os_vif_util [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.673 2 DEBUG nova.network.os_vif_util [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.673 2 DEBUG os_vif [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.675 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b57c5d9-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.681 2 INFO os_vif [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd')#033[00m
Oct  2 09:00:19 np0005466030 podman[295105]: 2025-10-02 13:00:19.694258596 +0000 UTC m=+0.050603933 container remove fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 09:00:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.700 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fe1e6646-ad89-4351-93c0-f2c76cc42a3b]: (4, ('Thu Oct  2 01:00:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12 (fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a)\nfe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a\nThu Oct  2 01:00:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12 (fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a)\nfe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.703 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[33a5d916-a8c9-4014-af75-4d6219da5692]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.704 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2dacd3c2-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:19 np0005466030 kernel: tap2dacd3c2-a0: left promiscuous mode
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.710 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7cc1dc6e-2f80-4a1a-b9ad-5a6a6c20b9b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.732 2 DEBUG nova.compute.manager [req-82c19cae-e986-4a0e-98ce-719a2f4f22c7 req-20a7b595-4c1f-44b7-b523-c6b1e58a3aad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Received event network-vif-unplugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.733 2 DEBUG oslo_concurrency.lockutils [req-82c19cae-e986-4a0e-98ce-719a2f4f22c7 req-20a7b595-4c1f-44b7-b523-c6b1e58a3aad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.733 2 DEBUG oslo_concurrency.lockutils [req-82c19cae-e986-4a0e-98ce-719a2f4f22c7 req-20a7b595-4c1f-44b7-b523-c6b1e58a3aad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.733 2 DEBUG oslo_concurrency.lockutils [req-82c19cae-e986-4a0e-98ce-719a2f4f22c7 req-20a7b595-4c1f-44b7-b523-c6b1e58a3aad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.734 2 DEBUG nova.compute.manager [req-82c19cae-e986-4a0e-98ce-719a2f4f22c7 req-20a7b595-4c1f-44b7-b523-c6b1e58a3aad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] No waiting events found dispatching network-vif-unplugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.734 2 DEBUG nova.compute.manager [req-82c19cae-e986-4a0e-98ce-719a2f4f22c7 req-20a7b595-4c1f-44b7-b523-c6b1e58a3aad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Received event network-vif-unplugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:00:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.740 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ac79de9c-a937-48d7-b730-cfccdba7a517]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.742 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6696b250-fc5b-4743-9fbd-faf6094be684]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.756 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f179cc-b06d-44f2-84b5-81c3c27272cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787774, 'reachable_time': 31381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295143, 'error': None, 'target': 'ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:19 np0005466030 systemd[1]: run-netns-ovnmeta\x2d2dacd3c2\x2da76f\x2d4896\x2da922\x2dfdbbab78ce12.mount: Deactivated successfully.
Oct  2 09:00:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.760 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:00:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.760 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[02553841-8632-4db0-9055-bbff2a94d41b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:19 np0005466030 nova_compute[230518]: 2025-10-02 13:00:19.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:20 np0005466030 nova_compute[230518]: 2025-10-02 13:00:20.401 2 DEBUG nova.network.neutron [req-85259286-a608-44ca-be7c-eb5dec9e8469 req-20d97250-7f46-4d2b-b5ae-363ed1135ade 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Updated VIF entry in instance network info cache for port 6b57c5d9-dd16-427a-85c2-c02dedb41e29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:00:20 np0005466030 nova_compute[230518]: 2025-10-02 13:00:20.402 2 DEBUG nova.network.neutron [req-85259286-a608-44ca-be7c-eb5dec9e8469 req-20d97250-7f46-4d2b-b5ae-363ed1135ade 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Updating instance_info_cache with network_info: [{"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:00:20 np0005466030 nova_compute[230518]: 2025-10-02 13:00:20.425 2 DEBUG oslo_concurrency.lockutils [req-85259286-a608-44ca-be7c-eb5dec9e8469 req-20d97250-7f46-4d2b-b5ae-363ed1135ade 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-11d532af-2778-4065-8cf4-f2f53d3dbb1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:00:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:21.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:21 np0005466030 nova_compute[230518]: 2025-10-02 13:00:21.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:21 np0005466030 nova_compute[230518]: 2025-10-02 13:00:21.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:21 np0005466030 nova_compute[230518]: 2025-10-02 13:00:21.097 2 INFO nova.virt.libvirt.driver [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Deleting instance files /var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c_del#033[00m
Oct  2 09:00:21 np0005466030 nova_compute[230518]: 2025-10-02 13:00:21.098 2 INFO nova.virt.libvirt.driver [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Deletion of /var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c_del complete#033[00m
Oct  2 09:00:21 np0005466030 nova_compute[230518]: 2025-10-02 13:00:21.148 2 INFO nova.compute.manager [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Took 1.95 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:00:21 np0005466030 nova_compute[230518]: 2025-10-02 13:00:21.148 2 DEBUG oslo.service.loopingcall [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:00:21 np0005466030 nova_compute[230518]: 2025-10-02 13:00:21.149 2 DEBUG nova.compute.manager [-] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:00:21 np0005466030 nova_compute[230518]: 2025-10-02 13:00:21.149 2 DEBUG nova.network.neutron [-] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:00:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:00:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:21.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:00:21 np0005466030 nova_compute[230518]: 2025-10-02 13:00:21.860 2 DEBUG nova.compute.manager [req-e5edb6ed-b8bd-46cc-a815-99c3f7eec63e req-80ca8cef-a2c1-4ea3-86cd-d831412837d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Received event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:21 np0005466030 nova_compute[230518]: 2025-10-02 13:00:21.860 2 DEBUG oslo_concurrency.lockutils [req-e5edb6ed-b8bd-46cc-a815-99c3f7eec63e req-80ca8cef-a2c1-4ea3-86cd-d831412837d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:21 np0005466030 nova_compute[230518]: 2025-10-02 13:00:21.860 2 DEBUG oslo_concurrency.lockutils [req-e5edb6ed-b8bd-46cc-a815-99c3f7eec63e req-80ca8cef-a2c1-4ea3-86cd-d831412837d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:21 np0005466030 nova_compute[230518]: 2025-10-02 13:00:21.861 2 DEBUG oslo_concurrency.lockutils [req-e5edb6ed-b8bd-46cc-a815-99c3f7eec63e req-80ca8cef-a2c1-4ea3-86cd-d831412837d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:21 np0005466030 nova_compute[230518]: 2025-10-02 13:00:21.861 2 DEBUG nova.compute.manager [req-e5edb6ed-b8bd-46cc-a815-99c3f7eec63e req-80ca8cef-a2c1-4ea3-86cd-d831412837d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] No waiting events found dispatching network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:00:21 np0005466030 nova_compute[230518]: 2025-10-02 13:00:21.861 2 WARNING nova.compute.manager [req-e5edb6ed-b8bd-46cc-a815-99c3f7eec63e req-80ca8cef-a2c1-4ea3-86cd-d831412837d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Received unexpected event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:00:22 np0005466030 nova_compute[230518]: 2025-10-02 13:00:22.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:22 np0005466030 nova_compute[230518]: 2025-10-02 13:00:22.270 2 DEBUG nova.network.neutron [-] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:00:22 np0005466030 nova_compute[230518]: 2025-10-02 13:00:22.312 2 INFO nova.compute.manager [-] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Took 1.16 seconds to deallocate network for instance.#033[00m
Oct  2 09:00:22 np0005466030 nova_compute[230518]: 2025-10-02 13:00:22.369 2 DEBUG oslo_concurrency.lockutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:22 np0005466030 nova_compute[230518]: 2025-10-02 13:00:22.370 2 DEBUG oslo_concurrency.lockutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:22 np0005466030 nova_compute[230518]: 2025-10-02 13:00:22.421 2 DEBUG oslo_concurrency.processutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:00:22 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3281194222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:00:22 np0005466030 nova_compute[230518]: 2025-10-02 13:00:22.828 2 DEBUG oslo_concurrency.processutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:22 np0005466030 nova_compute[230518]: 2025-10-02 13:00:22.834 2 DEBUG nova.compute.provider_tree [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:00:22 np0005466030 nova_compute[230518]: 2025-10-02 13:00:22.861 2 DEBUG nova.scheduler.client.report [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:00:22 np0005466030 nova_compute[230518]: 2025-10-02 13:00:22.879 2 DEBUG oslo_concurrency.lockutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:22 np0005466030 nova_compute[230518]: 2025-10-02 13:00:22.901 2 INFO nova.scheduler.client.report [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance 11d532af-2778-4065-8cf4-f2f53d3dbb1c#033[00m
Oct  2 09:00:22 np0005466030 nova_compute[230518]: 2025-10-02 13:00:22.954 2 DEBUG oslo_concurrency.lockutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:23.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:23.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #121. Immutable memtables: 0.
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.295347) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 121
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023295393, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 2401, "num_deletes": 253, "total_data_size": 5663441, "memory_usage": 5738160, "flush_reason": "Manual Compaction"}
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #122: started
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023326089, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 122, "file_size": 3704781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60440, "largest_seqno": 62836, "table_properties": {"data_size": 3695028, "index_size": 6119, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20943, "raw_average_key_size": 20, "raw_value_size": 3675280, "raw_average_value_size": 3638, "num_data_blocks": 265, "num_entries": 1010, "num_filter_entries": 1010, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409820, "oldest_key_time": 1759409820, "file_creation_time": 1759410023, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 30813 microseconds, and 13208 cpu microseconds.
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.326157) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #122: 3704781 bytes OK
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.326186) [db/memtable_list.cc:519] [default] Level-0 commit table #122 started
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.328669) [db/memtable_list.cc:722] [default] Level-0 commit table #122: memtable #1 done
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.328701) EVENT_LOG_v1 {"time_micros": 1759410023328690, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.328729) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 5652693, prev total WAL file size 5652693, number of live WAL files 2.
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000118.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.331376) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [122(3617KB)], [120(11MB)]
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023331434, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [122], "files_L6": [120], "score": -1, "input_data_size": 15997508, "oldest_snapshot_seqno": -1}
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #123: 8881 keys, 14031804 bytes, temperature: kUnknown
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023445976, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 123, "file_size": 14031804, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13970908, "index_size": 37616, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 229685, "raw_average_key_size": 25, "raw_value_size": 13811657, "raw_average_value_size": 1555, "num_data_blocks": 1474, "num_entries": 8881, "num_filter_entries": 8881, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410023, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 123, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.446426) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 14031804 bytes
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.449930) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.5 rd, 122.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 11.7 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(8.1) write-amplify(3.8) OK, records in: 9407, records dropped: 526 output_compression: NoCompression
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.449968) EVENT_LOG_v1 {"time_micros": 1759410023449952, "job": 76, "event": "compaction_finished", "compaction_time_micros": 114685, "compaction_time_cpu_micros": 57473, "output_level": 6, "num_output_files": 1, "total_output_size": 14031804, "num_input_records": 9407, "num_output_records": 8881, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023451594, "job": 76, "event": "table_file_deletion", "file_number": 122}
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000120.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023457372, "job": 76, "event": "table_file_deletion", "file_number": 120}
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.331184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.457455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.457464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.457469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.457506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.457516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:23 np0005466030 nova_compute[230518]: 2025-10-02 13:00:23.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:00:23 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:00:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:24 np0005466030 nova_compute[230518]: 2025-10-02 13:00:24.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:24 np0005466030 nova_compute[230518]: 2025-10-02 13:00:24.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:25.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:25.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:25.956 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:25.957 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:25.957 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:26 np0005466030 nova_compute[230518]: 2025-10-02 13:00:26.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:26 np0005466030 nova_compute[230518]: 2025-10-02 13:00:26.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:00:26 np0005466030 nova_compute[230518]: 2025-10-02 13:00:26.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:00:26 np0005466030 nova_compute[230518]: 2025-10-02 13:00:26.067 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:00:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:27.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:27.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #124. Immutable memtables: 0.
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.672041) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 124
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028672112, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 308, "num_deletes": 251, "total_data_size": 172085, "memory_usage": 178936, "flush_reason": "Manual Compaction"}
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #125: started
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028675426, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 125, "file_size": 112598, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62841, "largest_seqno": 63144, "table_properties": {"data_size": 110623, "index_size": 203, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5610, "raw_average_key_size": 20, "raw_value_size": 106707, "raw_average_value_size": 385, "num_data_blocks": 9, "num_entries": 277, "num_filter_entries": 277, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410023, "oldest_key_time": 1759410023, "file_creation_time": 1759410028, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 3459 microseconds, and 1479 cpu microseconds.
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.675507) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #125: 112598 bytes OK
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.675551) [db/memtable_list.cc:519] [default] Level-0 commit table #125 started
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.677394) [db/memtable_list.cc:722] [default] Level-0 commit table #125: memtable #1 done
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.677422) EVENT_LOG_v1 {"time_micros": 1759410028677414, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.677442) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 169846, prev total WAL file size 169846, number of live WAL files 2.
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000121.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.678208) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303036' seq:72057594037927935, type:22 .. '6D6772737461740032323538' seq:0, type:0; will stop at (end)
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [125(109KB)], [123(13MB)]
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028678247, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [125], "files_L6": [123], "score": -1, "input_data_size": 14144402, "oldest_snapshot_seqno": -1}
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #126: 8649 keys, 10298843 bytes, temperature: kUnknown
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028769869, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 126, "file_size": 10298843, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10244291, "index_size": 31848, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21637, "raw_key_size": 225070, "raw_average_key_size": 26, "raw_value_size": 10093856, "raw_average_value_size": 1167, "num_data_blocks": 1232, "num_entries": 8649, "num_filter_entries": 8649, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410028, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 126, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.770207) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10298843 bytes
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.771650) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.2 rd, 112.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 13.4 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(217.1) write-amplify(91.5) OK, records in: 9158, records dropped: 509 output_compression: NoCompression
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.771680) EVENT_LOG_v1 {"time_micros": 1759410028771667, "job": 78, "event": "compaction_finished", "compaction_time_micros": 91737, "compaction_time_cpu_micros": 25722, "output_level": 6, "num_output_files": 1, "total_output_size": 10298843, "num_input_records": 9158, "num_output_records": 8649, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028771871, "job": 78, "event": "table_file_deletion", "file_number": 125}
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000123.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028776471, "job": 78, "event": "table_file_deletion", "file_number": 123}
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.678111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.776598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.776604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.776606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.776608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:28 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.776610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:00:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:29.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:29 np0005466030 nova_compute[230518]: 2025-10-02 13:00:29.061 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:00:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:29.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:29 np0005466030 nova_compute[230518]: 2025-10-02 13:00:29.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:29 np0005466030 nova_compute[230518]: 2025-10-02 13:00:29.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:30 np0005466030 nova_compute[230518]: 2025-10-02 13:00:30.352 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:30 np0005466030 nova_compute[230518]: 2025-10-02 13:00:30.353 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:30 np0005466030 nova_compute[230518]: 2025-10-02 13:00:30.390 2 DEBUG nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:00:30 np0005466030 nova_compute[230518]: 2025-10-02 13:00:30.484 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:30 np0005466030 nova_compute[230518]: 2025-10-02 13:00:30.484 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:30 np0005466030 nova_compute[230518]: 2025-10-02 13:00:30.491 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:00:30 np0005466030 nova_compute[230518]: 2025-10-02 13:00:30.492 2 INFO nova.compute.claims [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:00:30 np0005466030 nova_compute[230518]: 2025-10-02 13:00:30.664 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:31.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:00:31 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4223990827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.197 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.205 2 DEBUG nova.compute.provider_tree [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.233 2 DEBUG nova.scheduler.client.report [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:00:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:31.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.255 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.256 2 DEBUG nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.306 2 DEBUG nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.306 2 DEBUG nova.network.neutron [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.325 2 INFO nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.348 2 DEBUG nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.447 2 DEBUG nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.450 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.451 2 INFO nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Creating image(s)#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.486 2 DEBUG nova.storage.rbd_utils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.516 2 DEBUG nova.storage.rbd_utils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.544 2 DEBUG nova.storage.rbd_utils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.550 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.618 2 DEBUG nova.policy [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b04159d5bffe4259876ce57aec09716e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.621 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.622 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.622 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.623 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.648 2 DEBUG nova.storage.rbd_utils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:31 np0005466030 nova_compute[230518]: 2025-10-02 13:00:31.651 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:32 np0005466030 nova_compute[230518]: 2025-10-02 13:00:32.152 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:32 np0005466030 nova_compute[230518]: 2025-10-02 13:00:32.231 2 DEBUG nova.storage.rbd_utils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] resizing rbd image 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:00:32 np0005466030 nova_compute[230518]: 2025-10-02 13:00:32.418 2 DEBUG nova.objects.instance [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'migration_context' on Instance uuid 7c31bb0f-22b5-42a4-9b38-8ad3daac689f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:00:32 np0005466030 nova_compute[230518]: 2025-10-02 13:00:32.435 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:00:32 np0005466030 nova_compute[230518]: 2025-10-02 13:00:32.436 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Ensure instance console log exists: /var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:00:32 np0005466030 nova_compute[230518]: 2025-10-02 13:00:32.437 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:32 np0005466030 nova_compute[230518]: 2025-10-02 13:00:32.437 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:32 np0005466030 nova_compute[230518]: 2025-10-02 13:00:32.438 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:32 np0005466030 nova_compute[230518]: 2025-10-02 13:00:32.656 2 DEBUG nova.network.neutron [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Successfully created port: b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:00:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:00:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:33.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:00:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:00:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:33.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:00:34 np0005466030 nova_compute[230518]: 2025-10-02 13:00:34.154 2 DEBUG nova.network.neutron [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Successfully updated port: b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:00:34 np0005466030 nova_compute[230518]: 2025-10-02 13:00:34.198 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:00:34 np0005466030 nova_compute[230518]: 2025-10-02 13:00:34.199 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquired lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:00:34 np0005466030 nova_compute[230518]: 2025-10-02 13:00:34.199 2 DEBUG nova.network.neutron [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:00:34 np0005466030 nova_compute[230518]: 2025-10-02 13:00:34.286 2 DEBUG nova.compute.manager [req-91226429-57a6-4fb1-8e3d-c56219d07925 req-6b7535ac-22fe-4791-9955-494b295d592c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Received event network-changed-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:34 np0005466030 nova_compute[230518]: 2025-10-02 13:00:34.287 2 DEBUG nova.compute.manager [req-91226429-57a6-4fb1-8e3d-c56219d07925 req-6b7535ac-22fe-4791-9955-494b295d592c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Refreshing instance network info cache due to event network-changed-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:00:34 np0005466030 nova_compute[230518]: 2025-10-02 13:00:34.287 2 DEBUG oslo_concurrency.lockutils [req-91226429-57a6-4fb1-8e3d-c56219d07925 req-6b7535ac-22fe-4791-9955-494b295d592c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:00:34 np0005466030 nova_compute[230518]: 2025-10-02 13:00:34.387 2 DEBUG nova.network.neutron [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:00:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:34 np0005466030 nova_compute[230518]: 2025-10-02 13:00:34.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:34 np0005466030 nova_compute[230518]: 2025-10-02 13:00:34.636 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410019.636069, 11d532af-2778-4065-8cf4-f2f53d3dbb1c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:00:34 np0005466030 nova_compute[230518]: 2025-10-02 13:00:34.637 2 INFO nova.compute.manager [-] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:00:34 np0005466030 nova_compute[230518]: 2025-10-02 13:00:34.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:34 np0005466030 nova_compute[230518]: 2025-10-02 13:00:34.689 2 DEBUG nova.compute.manager [None req-6007e259-0fb7-4417-91fe-435e8c6f0ef2 - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:35.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.148 2 DEBUG nova.network.neutron [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Updating instance_info_cache with network_info: [{"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.185 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Releasing lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.185 2 DEBUG nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Instance network_info: |[{"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.185 2 DEBUG oslo_concurrency.lockutils [req-91226429-57a6-4fb1-8e3d-c56219d07925 req-6b7535ac-22fe-4791-9955-494b295d592c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.185 2 DEBUG nova.network.neutron [req-91226429-57a6-4fb1-8e3d-c56219d07925 req-6b7535ac-22fe-4791-9955-494b295d592c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Refreshing network info cache for port b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.188 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Start _get_guest_xml network_info=[{"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.196 2 WARNING nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.202 2 DEBUG nova.virt.libvirt.host [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.203 2 DEBUG nova.virt.libvirt.host [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.206 2 DEBUG nova.virt.libvirt.host [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.207 2 DEBUG nova.virt.libvirt.host [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.208 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.209 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.209 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.210 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.210 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.211 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.211 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.211 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.212 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.212 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.212 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.213 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.217 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:35.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:00:35 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4257415503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.654 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.689 2 DEBUG nova.storage.rbd_utils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:35 np0005466030 nova_compute[230518]: 2025-10-02 13:00:35.694 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:00:36 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4036831251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.175 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.178 2 DEBUG nova.virt.libvirt.vif [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:00:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-202037004',display_name='tempest-₡-202037004',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest--202037004',id=166,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-cvelnv0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None
,updated_at=2025-10-02T13:00:31Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=7c31bb0f-22b5-42a4-9b38-8ad3daac689f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.179 2 DEBUG nova.network.os_vif_util [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.181 2 DEBUG nova.network.os_vif_util [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:a1:f4,bridge_name='br-int',has_traffic_filtering=True,id=b0acc3a3-80b3-4ec7-97e7-2e5813eb8790,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0acc3a3-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.183 2 DEBUG nova.objects.instance [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'pci_devices' on Instance uuid 7c31bb0f-22b5-42a4-9b38-8ad3daac689f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.208 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:00:36 np0005466030 nova_compute[230518]:  <uuid>7c31bb0f-22b5-42a4-9b38-8ad3daac689f</uuid>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:  <name>instance-000000a6</name>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <nova:name>tempest-₡-202037004</nova:name>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:00:35</nova:creationTime>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:00:36 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:        <nova:user uuid="b04159d5bffe4259876ce57aec09716e">tempest-ServersTestJSON-146860306-project-member</nova:user>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:        <nova:project uuid="a6be0e77fb5b4355b4f2276c9e57d2bd">tempest-ServersTestJSON-146860306</nova:project>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:        <nova:port uuid="b0acc3a3-80b3-4ec7-97e7-2e5813eb8790">
Oct  2 09:00:36 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <entry name="serial">7c31bb0f-22b5-42a4-9b38-8ad3daac689f</entry>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <entry name="uuid">7c31bb0f-22b5-42a4-9b38-8ad3daac689f</entry>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk">
Oct  2 09:00:36 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:00:36 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk.config">
Oct  2 09:00:36 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:00:36 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:d4:a1:f4"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <target dev="tapb0acc3a3-80"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f/console.log" append="off"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:00:36 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:00:36 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:00:36 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:00:36 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.210 2 DEBUG nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Preparing to wait for external event network-vif-plugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.210 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.210 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.211 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.212 2 DEBUG nova.virt.libvirt.vif [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:00:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-202037004',display_name='tempest-₡-202037004',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest--202037004',id=166,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-cvelnv0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_
certs=None,updated_at=2025-10-02T13:00:31Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=7c31bb0f-22b5-42a4-9b38-8ad3daac689f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.212 2 DEBUG nova.network.os_vif_util [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.213 2 DEBUG nova.network.os_vif_util [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:a1:f4,bridge_name='br-int',has_traffic_filtering=True,id=b0acc3a3-80b3-4ec7-97e7-2e5813eb8790,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0acc3a3-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.213 2 DEBUG os_vif [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:a1:f4,bridge_name='br-int',has_traffic_filtering=True,id=b0acc3a3-80b3-4ec7-97e7-2e5813eb8790,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0acc3a3-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.214 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.215 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.219 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0acc3a3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.220 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb0acc3a3-80, col_values=(('external_ids', {'iface-id': 'b0acc3a3-80b3-4ec7-97e7-2e5813eb8790', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:a1:f4', 'vm-uuid': '7c31bb0f-22b5-42a4-9b38-8ad3daac689f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:36 np0005466030 NetworkManager[44960]: <info>  [1759410036.2232] manager: (tapb0acc3a3-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/326)
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.231 2 INFO os_vif [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:a1:f4,bridge_name='br-int',has_traffic_filtering=True,id=b0acc3a3-80b3-4ec7-97e7-2e5813eb8790,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0acc3a3-80')#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.332 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.332 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.333 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No VIF found with MAC fa:16:3e:d4:a1:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.333 2 INFO nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Using config drive#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.462 2 DEBUG nova.storage.rbd_utils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.748 2 INFO nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Creating config drive at /var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f/disk.config#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.758 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpofuyrm2o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.915 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpofuyrm2o" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.944 2 DEBUG nova.storage.rbd_utils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:36 np0005466030 nova_compute[230518]: 2025-10-02 13:00:36.947 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f/disk.config 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:37.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:37.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:37 np0005466030 nova_compute[230518]: 2025-10-02 13:00:37.308 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f/disk.config 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:37 np0005466030 nova_compute[230518]: 2025-10-02 13:00:37.310 2 INFO nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Deleting local config drive /var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f/disk.config because it was imported into RBD.#033[00m
Oct  2 09:00:37 np0005466030 kernel: tapb0acc3a3-80: entered promiscuous mode
Oct  2 09:00:37 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:37Z|00695|binding|INFO|Claiming lport b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 for this chassis.
Oct  2 09:00:37 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:37Z|00696|binding|INFO|b0acc3a3-80b3-4ec7-97e7-2e5813eb8790: Claiming fa:16:3e:d4:a1:f4 10.100.0.4
Oct  2 09:00:37 np0005466030 NetworkManager[44960]: <info>  [1759410037.3777] manager: (tapb0acc3a3-80): new Tun device (/org/freedesktop/NetworkManager/Devices/327)
Oct  2 09:00:37 np0005466030 nova_compute[230518]: 2025-10-02 13:00:37.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:37 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:37Z|00697|binding|INFO|Setting lport b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 ovn-installed in OVS
Oct  2 09:00:37 np0005466030 nova_compute[230518]: 2025-10-02 13:00:37.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:37 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:37Z|00698|binding|INFO|Setting lport b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 up in Southbound
Oct  2 09:00:37 np0005466030 nova_compute[230518]: 2025-10-02 13:00:37.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.394 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:a1:f4 10.100.0.4'], port_security=['fa:16:3e:d4:a1:f4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7c31bb0f-22b5-42a4-9b38-8ad3daac689f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=b0acc3a3-80b3-4ec7-97e7-2e5813eb8790) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.395 138374 INFO neutron.agent.ovn.metadata.agent [-] Port b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 in datapath 052f341a-0628-4183-a5e0-76312bc986c6 bound to our chassis#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.396 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 052f341a-0628-4183-a5e0-76312bc986c6#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.410 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1a63eb38-71ec-4d5c-9652-b76bad61e588]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.412 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap052f341a-01 in ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.415 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap052f341a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.415 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[61bc6c14-6115-4dcf-9478-f0f1f5c31d26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.416 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e95402fb-104d-470b-a62b-7b0484898a2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:37 np0005466030 systemd-machined[188247]: New machine qemu-81-instance-000000a6.
Oct  2 09:00:37 np0005466030 systemd[1]: Started Virtual Machine qemu-81-instance-000000a6.
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.436 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[67fd4c74-555a-4e2f-bc33-8899d9958bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:37 np0005466030 systemd-udevd[295543]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:00:37 np0005466030 NetworkManager[44960]: <info>  [1759410037.4603] device (tapb0acc3a3-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:00:37 np0005466030 NetworkManager[44960]: <info>  [1759410037.4616] device (tapb0acc3a3-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.463 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b64e454e-1906-433e-bfeb-640c8ad9d5dc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.504 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9289c3-b576-442a-8381-decff9ad62b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:37 np0005466030 nova_compute[230518]: 2025-10-02 13:00:37.508 2 DEBUG nova.network.neutron [req-91226429-57a6-4fb1-8e3d-c56219d07925 req-6b7535ac-22fe-4791-9955-494b295d592c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Updated VIF entry in instance network info cache for port b0acc3a3-80b3-4ec7-97e7-2e5813eb8790. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:00:37 np0005466030 nova_compute[230518]: 2025-10-02 13:00:37.509 2 DEBUG nova.network.neutron [req-91226429-57a6-4fb1-8e3d-c56219d07925 req-6b7535ac-22fe-4791-9955-494b295d592c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Updating instance_info_cache with network_info: [{"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:00:37 np0005466030 systemd-udevd[295546]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.512 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0830cf38-81f0-4315-a6b2-17f0bf7d6384]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:37 np0005466030 NetworkManager[44960]: <info>  [1759410037.5139] manager: (tap052f341a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/328)
Oct  2 09:00:37 np0005466030 nova_compute[230518]: 2025-10-02 13:00:37.539 2 DEBUG oslo_concurrency.lockutils [req-91226429-57a6-4fb1-8e3d-c56219d07925 req-6b7535ac-22fe-4791-9955-494b295d592c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.552 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a7eb2eb5-8100-4dc8-97e3-948170feffca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.557 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e2892c3a-eaae-430e-8000-71a06c3ca0fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:37 np0005466030 NetworkManager[44960]: <info>  [1759410037.5799] device (tap052f341a-00): carrier: link connected
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.587 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[df8390fc-1cb8-4be9-9aa1-f0963f2f70ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.624 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8d8d4984-776a-4793-9824-5b5977bacbc6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052f341a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:0a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 790113, 'reachable_time': 28247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295574, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.645 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b427cef8-622b-482a-93b9-4cc04bfca074]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:a55'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 790113, 'tstamp': 790113}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295575, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.678 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b8b5fd-babc-433d-b31f-110128d1e2ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052f341a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:0a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 790113, 'reachable_time': 28247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295576, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:37 np0005466030 nova_compute[230518]: 2025-10-02 13:00:37.684 2 DEBUG nova.compute.manager [req-b6a32515-6464-4721-a05c-4a3f5867d7c3 req-ccb58e5c-fa0d-4b64-a0d7-7456b1118c22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Received event network-vif-plugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:37 np0005466030 nova_compute[230518]: 2025-10-02 13:00:37.684 2 DEBUG oslo_concurrency.lockutils [req-b6a32515-6464-4721-a05c-4a3f5867d7c3 req-ccb58e5c-fa0d-4b64-a0d7-7456b1118c22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:37 np0005466030 nova_compute[230518]: 2025-10-02 13:00:37.684 2 DEBUG oslo_concurrency.lockutils [req-b6a32515-6464-4721-a05c-4a3f5867d7c3 req-ccb58e5c-fa0d-4b64-a0d7-7456b1118c22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:37 np0005466030 nova_compute[230518]: 2025-10-02 13:00:37.685 2 DEBUG oslo_concurrency.lockutils [req-b6a32515-6464-4721-a05c-4a3f5867d7c3 req-ccb58e5c-fa0d-4b64-a0d7-7456b1118c22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:37 np0005466030 nova_compute[230518]: 2025-10-02 13:00:37.685 2 DEBUG nova.compute.manager [req-b6a32515-6464-4721-a05c-4a3f5867d7c3 req-ccb58e5c-fa0d-4b64-a0d7-7456b1118c22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Processing event network-vif-plugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.723 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4e453310-93cc-4c58-8d23-1b949dec2532]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.791 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3f0359ec-d8fe-4817-9b2d-b2f6e80b20fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.793 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052f341a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.793 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.794 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap052f341a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:37 np0005466030 kernel: tap052f341a-00: entered promiscuous mode
Oct  2 09:00:37 np0005466030 nova_compute[230518]: 2025-10-02 13:00:37.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:37 np0005466030 NetworkManager[44960]: <info>  [1759410037.7965] manager: (tap052f341a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/329)
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.799 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap052f341a-00, col_values=(('external_ids', {'iface-id': '61e15bc4-7cff-4f2c-a6c4-d987859313b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:37 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:37Z|00699|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.801 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/052f341a-0628-4183-a5e0-76312bc986c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/052f341a-0628-4183-a5e0-76312bc986c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.802 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[32a0f8ac-b8b2-455a-86a7-200243637917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.803 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-052f341a-0628-4183-a5e0-76312bc986c6
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/052f341a-0628-4183-a5e0-76312bc986c6.pid.haproxy
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 052f341a-0628-4183-a5e0-76312bc986c6
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:00:37 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.804 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'env', 'PROCESS_TAG=haproxy-052f341a-0628-4183-a5e0-76312bc986c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/052f341a-0628-4183-a5e0-76312bc986c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:00:37 np0005466030 nova_compute[230518]: 2025-10-02 13:00:37.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:38 np0005466030 podman[295650]: 2025-10-02 13:00:38.194499159 +0000 UTC m=+0.076598531 container create 559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:00:38 np0005466030 podman[295650]: 2025-10-02 13:00:38.140538242 +0000 UTC m=+0.022637624 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:00:38 np0005466030 systemd[1]: Started libpod-conmon-559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613.scope.
Oct  2 09:00:38 np0005466030 systemd[1]: Started libcrun container.
Oct  2 09:00:38 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e09e09a9c189154a28ffa2be38e9a5e659937380e00a8af389eda1472e8aeea1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:00:38 np0005466030 podman[295650]: 2025-10-02 13:00:38.332073733 +0000 UTC m=+0.214173105 container init 559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  2 09:00:38 np0005466030 podman[295667]: 2025-10-02 13:00:38.332199777 +0000 UTC m=+0.081503613 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  2 09:00:38 np0005466030 podman[295650]: 2025-10-02 13:00:38.338467482 +0000 UTC m=+0.220566824 container start 559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:00:38 np0005466030 podman[295664]: 2025-10-02 13:00:38.351106994 +0000 UTC m=+0.101456572 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible)
Oct  2 09:00:38 np0005466030 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[295686]: [NOTICE]   (295710) : New worker (295714) forked
Oct  2 09:00:38 np0005466030 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[295686]: [NOTICE]   (295710) : Loading success.
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.461 2 DEBUG nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.463 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410038.4612422, 7c31bb0f-22b5-42a4-9b38-8ad3daac689f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.463 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] VM Started (Lifecycle Event)#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.468 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.472 2 INFO nova.virt.libvirt.driver [-] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Instance spawned successfully.#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.473 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.498 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.512 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.517 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.518 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.519 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.519 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.520 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.520 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.546 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.546 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410038.4624336, 7c31bb0f-22b5-42a4-9b38-8ad3daac689f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.547 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.576 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.580 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410038.4696395, 7c31bb0f-22b5-42a4-9b38-8ad3daac689f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.580 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.591 2 INFO nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Took 7.14 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.592 2 DEBUG nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.602 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.605 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.630 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.656 2 INFO nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Took 8.21 seconds to build instance.#033[00m
Oct  2 09:00:38 np0005466030 nova_compute[230518]: 2025-10-02 13:00:38.673 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:00:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:39.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:00:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:39.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:39 np0005466030 nova_compute[230518]: 2025-10-02 13:00:39.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:39 np0005466030 nova_compute[230518]: 2025-10-02 13:00:39.767 2 DEBUG nova.compute.manager [req-f1fc28b3-9993-4af2-a2e3-84279cc7d239 req-62a1f163-f404-43b2-92a3-fa20983278b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Received event network-vif-plugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:39 np0005466030 nova_compute[230518]: 2025-10-02 13:00:39.767 2 DEBUG oslo_concurrency.lockutils [req-f1fc28b3-9993-4af2-a2e3-84279cc7d239 req-62a1f163-f404-43b2-92a3-fa20983278b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:39 np0005466030 nova_compute[230518]: 2025-10-02 13:00:39.767 2 DEBUG oslo_concurrency.lockutils [req-f1fc28b3-9993-4af2-a2e3-84279cc7d239 req-62a1f163-f404-43b2-92a3-fa20983278b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:39 np0005466030 nova_compute[230518]: 2025-10-02 13:00:39.768 2 DEBUG oslo_concurrency.lockutils [req-f1fc28b3-9993-4af2-a2e3-84279cc7d239 req-62a1f163-f404-43b2-92a3-fa20983278b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:39 np0005466030 nova_compute[230518]: 2025-10-02 13:00:39.768 2 DEBUG nova.compute.manager [req-f1fc28b3-9993-4af2-a2e3-84279cc7d239 req-62a1f163-f404-43b2-92a3-fa20983278b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] No waiting events found dispatching network-vif-plugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:00:39 np0005466030 nova_compute[230518]: 2025-10-02 13:00:39.768 2 WARNING nova.compute.manager [req-f1fc28b3-9993-4af2-a2e3-84279cc7d239 req-62a1f163-f404-43b2-92a3-fa20983278b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Received unexpected event network-vif-plugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:00:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:41.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:41 np0005466030 nova_compute[230518]: 2025-10-02 13:00:41.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:00:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:41.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:00:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:43.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:00:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:00:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:43.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:00:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:44 np0005466030 nova_compute[230518]: 2025-10-02 13:00:44.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:00:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:45.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:00:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:45.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:00:45 np0005466030 nova_compute[230518]: 2025-10-02 13:00:45.893 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "85538bf5-69f3-4c92-baf5-a998835df357" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:00:45 np0005466030 nova_compute[230518]: 2025-10-02 13:00:45.894 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:00:45 np0005466030 nova_compute[230518]: 2025-10-02 13:00:45.920 2 DEBUG nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 09:00:46 np0005466030 nova_compute[230518]: 2025-10-02 13:00:46.034 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:00:46 np0005466030 nova_compute[230518]: 2025-10-02 13:00:46.034 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:00:46 np0005466030 nova_compute[230518]: 2025-10-02 13:00:46.044 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 09:00:46 np0005466030 nova_compute[230518]: 2025-10-02 13:00:46.045 2 INFO nova.compute.claims [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Claim successful on node compute-1.ctlplane.example.com
Oct  2 09:00:46 np0005466030 nova_compute[230518]: 2025-10-02 13:00:46.180 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:00:46 np0005466030 nova_compute[230518]: 2025-10-02 13:00:46.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:00:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:00:46 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4136340579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:00:46 np0005466030 nova_compute[230518]: 2025-10-02 13:00:46.683 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:00:46 np0005466030 nova_compute[230518]: 2025-10-02 13:00:46.689 2 DEBUG nova.compute.provider_tree [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:00:46 np0005466030 nova_compute[230518]: 2025-10-02 13:00:46.713 2 DEBUG nova.scheduler.client.report [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:00:46 np0005466030 nova_compute[230518]: 2025-10-02 13:00:46.758 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:00:46 np0005466030 nova_compute[230518]: 2025-10-02 13:00:46.759 2 DEBUG nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 09:00:46 np0005466030 nova_compute[230518]: 2025-10-02 13:00:46.825 2 DEBUG nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 09:00:46 np0005466030 nova_compute[230518]: 2025-10-02 13:00:46.826 2 DEBUG nova.network.neutron [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 09:00:46 np0005466030 nova_compute[230518]: 2025-10-02 13:00:46.857 2 INFO nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 09:00:46 np0005466030 nova_compute[230518]: 2025-10-02 13:00:46.880 2 DEBUG nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 09:00:46 np0005466030 nova_compute[230518]: 2025-10-02 13:00:46.983 2 DEBUG nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 09:00:46 np0005466030 nova_compute[230518]: 2025-10-02 13:00:46.984 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 09:00:46 np0005466030 nova_compute[230518]: 2025-10-02 13:00:46.985 2 INFO nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Creating image(s)
Oct  2 09:00:47 np0005466030 nova_compute[230518]: 2025-10-02 13:00:47.013 2 DEBUG nova.storage.rbd_utils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 85538bf5-69f3-4c92-baf5-a998835df357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:00:47 np0005466030 nova_compute[230518]: 2025-10-02 13:00:47.045 2 DEBUG nova.storage.rbd_utils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 85538bf5-69f3-4c92-baf5-a998835df357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:00:47 np0005466030 nova_compute[230518]: 2025-10-02 13:00:47.086 2 DEBUG nova.storage.rbd_utils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 85538bf5-69f3-4c92-baf5-a998835df357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:00:47 np0005466030 nova_compute[230518]: 2025-10-02 13:00:47.093 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:00:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:47.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:47 np0005466030 nova_compute[230518]: 2025-10-02 13:00:47.159 2 DEBUG nova.policy [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b04159d5bffe4259876ce57aec09716e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 09:00:47 np0005466030 nova_compute[230518]: 2025-10-02 13:00:47.198 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:00:47 np0005466030 nova_compute[230518]: 2025-10-02 13:00:47.198 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:00:47 np0005466030 nova_compute[230518]: 2025-10-02 13:00:47.199 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:00:47 np0005466030 nova_compute[230518]: 2025-10-02 13:00:47.199 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:00:47 np0005466030 nova_compute[230518]: 2025-10-02 13:00:47.228 2 DEBUG nova.storage.rbd_utils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 85538bf5-69f3-4c92-baf5-a998835df357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:00:47 np0005466030 nova_compute[230518]: 2025-10-02 13:00:47.232 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 85538bf5-69f3-4c92-baf5-a998835df357_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:00:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:00:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:47.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:00:47 np0005466030 nova_compute[230518]: 2025-10-02 13:00:47.516 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 85538bf5-69f3-4c92-baf5-a998835df357_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:00:47 np0005466030 nova_compute[230518]: 2025-10-02 13:00:47.592 2 DEBUG nova.storage.rbd_utils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] resizing rbd image 85538bf5-69f3-4c92-baf5-a998835df357_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 09:00:47 np0005466030 nova_compute[230518]: 2025-10-02 13:00:47.936 2 DEBUG nova.objects.instance [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'migration_context' on Instance uuid 85538bf5-69f3-4c92-baf5-a998835df357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:00:47 np0005466030 nova_compute[230518]: 2025-10-02 13:00:47.960 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 09:00:47 np0005466030 nova_compute[230518]: 2025-10-02 13:00:47.961 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Ensure instance console log exists: /var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 09:00:47 np0005466030 nova_compute[230518]: 2025-10-02 13:00:47.961 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:00:47 np0005466030 nova_compute[230518]: 2025-10-02 13:00:47.961 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:00:47 np0005466030 nova_compute[230518]: 2025-10-02 13:00:47.962 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:00:48 np0005466030 nova_compute[230518]: 2025-10-02 13:00:48.185 2 DEBUG nova.network.neutron [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Successfully created port: 1f164634-caf5-4a9f-b0af-a47e5681a252 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 09:00:48 np0005466030 podman[295913]: 2025-10-02 13:00:48.821392738 +0000 UTC m=+0.069807320 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:00:48 np0005466030 podman[295912]: 2025-10-02 13:00:48.85042393 +0000 UTC m=+0.095043364 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:00:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:49.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:49 np0005466030 nova_compute[230518]: 2025-10-02 13:00:49.172 2 DEBUG nova.network.neutron [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Successfully updated port: 1f164634-caf5-4a9f-b0af-a47e5681a252 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 09:00:49 np0005466030 nova_compute[230518]: 2025-10-02 13:00:49.188 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "refresh_cache-85538bf5-69f3-4c92-baf5-a998835df357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:00:49 np0005466030 nova_compute[230518]: 2025-10-02 13:00:49.188 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquired lock "refresh_cache-85538bf5-69f3-4c92-baf5-a998835df357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:00:49 np0005466030 nova_compute[230518]: 2025-10-02 13:00:49.189 2 DEBUG nova.network.neutron [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 09:00:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:00:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:49.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:00:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:49 np0005466030 nova_compute[230518]: 2025-10-02 13:00:49.542 2 DEBUG nova.network.neutron [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 09:00:49 np0005466030 nova_compute[230518]: 2025-10-02 13:00:49.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.312 2 DEBUG nova.network.neutron [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Updating instance_info_cache with network_info: [{"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.339 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Releasing lock "refresh_cache-85538bf5-69f3-4c92-baf5-a998835df357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.339 2 DEBUG nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Instance network_info: |[{"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.342 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Start _get_guest_xml network_info=[{"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.347 2 WARNING nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.351 2 DEBUG nova.virt.libvirt.host [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.352 2 DEBUG nova.virt.libvirt.host [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.355 2 DEBUG nova.virt.libvirt.host [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.356 2 DEBUG nova.virt.libvirt.host [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.357 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.358 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.358 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.359 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.359 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.359 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.359 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.360 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.360 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.360 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.361 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.361 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.364 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:00:50 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3463278456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.865 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.890 2 DEBUG nova.storage.rbd_utils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 85538bf5-69f3-4c92-baf5-a998835df357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.894 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.923 2 DEBUG nova.compute.manager [req-f8d132c9-693d-495b-a02c-75a95329253a req-4d4dc02a-9cd4-4dbe-9a86-8af1bc8b7600 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Received event network-changed-1f164634-caf5-4a9f-b0af-a47e5681a252 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.924 2 DEBUG nova.compute.manager [req-f8d132c9-693d-495b-a02c-75a95329253a req-4d4dc02a-9cd4-4dbe-9a86-8af1bc8b7600 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Refreshing instance network info cache due to event network-changed-1f164634-caf5-4a9f-b0af-a47e5681a252. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.924 2 DEBUG oslo_concurrency.lockutils [req-f8d132c9-693d-495b-a02c-75a95329253a req-4d4dc02a-9cd4-4dbe-9a86-8af1bc8b7600 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-85538bf5-69f3-4c92-baf5-a998835df357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.924 2 DEBUG oslo_concurrency.lockutils [req-f8d132c9-693d-495b-a02c-75a95329253a req-4d4dc02a-9cd4-4dbe-9a86-8af1bc8b7600 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-85538bf5-69f3-4c92-baf5-a998835df357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:00:50 np0005466030 nova_compute[230518]: 2025-10-02 13:00:50.924 2 DEBUG nova.network.neutron [req-f8d132c9-693d-495b-a02c-75a95329253a req-4d4dc02a-9cd4-4dbe-9a86-8af1bc8b7600 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Refreshing network info cache for port 1f164634-caf5-4a9f-b0af-a47e5681a252 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:00:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:51.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:00:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:51.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:00:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:00:51 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1981855118' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.326 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.328 2 DEBUG nova.virt.libvirt.vif [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:00:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-350886286',display_name='tempest-ServersTestJSON-server-350886286',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-350886286',id=169,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-deu6rtq4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},
tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:00:46Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=85538bf5-69f3-4c92-baf5-a998835df357,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.328 2 DEBUG nova.network.os_vif_util [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.328 2 DEBUG nova.network.os_vif_util [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:b7:17,bridge_name='br-int',has_traffic_filtering=True,id=1f164634-caf5-4a9f-b0af-a47e5681a252,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f164634-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.329 2 DEBUG nova.objects.instance [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'pci_devices' on Instance uuid 85538bf5-69f3-4c92-baf5-a998835df357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.358 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:00:51 np0005466030 nova_compute[230518]:  <uuid>85538bf5-69f3-4c92-baf5-a998835df357</uuid>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:  <name>instance-000000a9</name>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServersTestJSON-server-350886286</nova:name>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:00:50</nova:creationTime>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:00:51 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:        <nova:user uuid="b04159d5bffe4259876ce57aec09716e">tempest-ServersTestJSON-146860306-project-member</nova:user>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:        <nova:project uuid="a6be0e77fb5b4355b4f2276c9e57d2bd">tempest-ServersTestJSON-146860306</nova:project>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:        <nova:port uuid="1f164634-caf5-4a9f-b0af-a47e5681a252">
Oct  2 09:00:51 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <entry name="serial">85538bf5-69f3-4c92-baf5-a998835df357</entry>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <entry name="uuid">85538bf5-69f3-4c92-baf5-a998835df357</entry>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/85538bf5-69f3-4c92-baf5-a998835df357_disk">
Oct  2 09:00:51 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:00:51 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/85538bf5-69f3-4c92-baf5-a998835df357_disk.config">
Oct  2 09:00:51 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:00:51 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:53:b7:17"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <target dev="tap1f164634-ca"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357/console.log" append="off"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:00:51 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:00:51 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:00:51 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:00:51 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.359 2 DEBUG nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Preparing to wait for external event network-vif-plugged-1f164634-caf5-4a9f-b0af-a47e5681a252 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.360 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "85538bf5-69f3-4c92-baf5-a998835df357-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.360 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.360 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.361 2 DEBUG nova.virt.libvirt.vif [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:00:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-350886286',display_name='tempest-ServersTestJSON-server-350886286',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-350886286',id=169,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-deu6rtq4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project
-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:00:46Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=85538bf5-69f3-4c92-baf5-a998835df357,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.361 2 DEBUG nova.network.os_vif_util [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.361 2 DEBUG nova.network.os_vif_util [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:b7:17,bridge_name='br-int',has_traffic_filtering=True,id=1f164634-caf5-4a9f-b0af-a47e5681a252,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f164634-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.362 2 DEBUG os_vif [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:b7:17,bridge_name='br-int',has_traffic_filtering=True,id=1f164634-caf5-4a9f-b0af-a47e5681a252,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f164634-ca') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.365 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.365 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.368 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f164634-ca, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.369 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1f164634-ca, col_values=(('external_ids', {'iface-id': '1f164634-caf5-4a9f-b0af-a47e5681a252', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:b7:17', 'vm-uuid': '85538bf5-69f3-4c92-baf5-a998835df357'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:51 np0005466030 NetworkManager[44960]: <info>  [1759410051.3719] manager: (tap1f164634-ca): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.377 2 INFO os_vif [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:b7:17,bridge_name='br-int',has_traffic_filtering=True,id=1f164634-caf5-4a9f-b0af-a47e5681a252,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f164634-ca')#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.490 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.491 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.492 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No VIF found with MAC fa:16:3e:53:b7:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.493 2 INFO nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Using config drive#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.528 2 DEBUG nova.storage.rbd_utils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 85538bf5-69f3-4c92-baf5-a998835df357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.946 2 INFO nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Creating config drive at /var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357/disk.config#033[00m
Oct  2 09:00:51 np0005466030 nova_compute[230518]: 2025-10-02 13:00:51.950 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7blrrk_v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:52 np0005466030 nova_compute[230518]: 2025-10-02 13:00:52.082 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7blrrk_v" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:52 np0005466030 nova_compute[230518]: 2025-10-02 13:00:52.118 2 DEBUG nova.storage.rbd_utils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 85538bf5-69f3-4c92-baf5-a998835df357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:00:52 np0005466030 nova_compute[230518]: 2025-10-02 13:00:52.122 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357/disk.config 85538bf5-69f3-4c92-baf5-a998835df357_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:00:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:52Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d4:a1:f4 10.100.0.4
Oct  2 09:00:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:52Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d4:a1:f4 10.100.0.4
Oct  2 09:00:52 np0005466030 nova_compute[230518]: 2025-10-02 13:00:52.456 2 DEBUG nova.network.neutron [req-f8d132c9-693d-495b-a02c-75a95329253a req-4d4dc02a-9cd4-4dbe-9a86-8af1bc8b7600 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Updated VIF entry in instance network info cache for port 1f164634-caf5-4a9f-b0af-a47e5681a252. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:00:52 np0005466030 nova_compute[230518]: 2025-10-02 13:00:52.458 2 DEBUG nova.network.neutron [req-f8d132c9-693d-495b-a02c-75a95329253a req-4d4dc02a-9cd4-4dbe-9a86-8af1bc8b7600 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Updating instance_info_cache with network_info: [{"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:00:52 np0005466030 nova_compute[230518]: 2025-10-02 13:00:52.529 2 DEBUG oslo_concurrency.lockutils [req-f8d132c9-693d-495b-a02c-75a95329253a req-4d4dc02a-9cd4-4dbe-9a86-8af1bc8b7600 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-85538bf5-69f3-4c92-baf5-a998835df357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:00:52 np0005466030 nova_compute[230518]: 2025-10-02 13:00:52.581 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357/disk.config 85538bf5-69f3-4c92-baf5-a998835df357_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:00:52 np0005466030 nova_compute[230518]: 2025-10-02 13:00:52.582 2 INFO nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Deleting local config drive /var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357/disk.config because it was imported into RBD.#033[00m
Oct  2 09:00:52 np0005466030 kernel: tap1f164634-ca: entered promiscuous mode
Oct  2 09:00:52 np0005466030 NetworkManager[44960]: <info>  [1759410052.6477] manager: (tap1f164634-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/331)
Oct  2 09:00:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:52Z|00700|binding|INFO|Claiming lport 1f164634-caf5-4a9f-b0af-a47e5681a252 for this chassis.
Oct  2 09:00:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:52Z|00701|binding|INFO|1f164634-caf5-4a9f-b0af-a47e5681a252: Claiming fa:16:3e:53:b7:17 10.100.0.5
Oct  2 09:00:52 np0005466030 nova_compute[230518]: 2025-10-02 13:00:52.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.670 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:b7:17 10.100.0.5'], port_security=['fa:16:3e:53:b7:17 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '85538bf5-69f3-4c92-baf5-a998835df357', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=1f164634-caf5-4a9f-b0af-a47e5681a252) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:00:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.673 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 1f164634-caf5-4a9f-b0af-a47e5681a252 in datapath 052f341a-0628-4183-a5e0-76312bc986c6 bound to our chassis#033[00m
Oct  2 09:00:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:52Z|00702|binding|INFO|Setting lport 1f164634-caf5-4a9f-b0af-a47e5681a252 ovn-installed in OVS
Oct  2 09:00:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:00:52Z|00703|binding|INFO|Setting lport 1f164634-caf5-4a9f-b0af-a47e5681a252 up in Southbound
Oct  2 09:00:52 np0005466030 nova_compute[230518]: 2025-10-02 13:00:52.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.677 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 052f341a-0628-4183-a5e0-76312bc986c6#033[00m
Oct  2 09:00:52 np0005466030 nova_compute[230518]: 2025-10-02 13:00:52.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:52 np0005466030 systemd-udevd[296086]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:00:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.700 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[72e1401d-9e4d-4313-b769-46f3c5c02f65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:52 np0005466030 systemd-machined[188247]: New machine qemu-82-instance-000000a9.
Oct  2 09:00:52 np0005466030 NetworkManager[44960]: <info>  [1759410052.7117] device (tap1f164634-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:00:52 np0005466030 NetworkManager[44960]: <info>  [1759410052.7145] device (tap1f164634-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:00:52 np0005466030 systemd[1]: Started Virtual Machine qemu-82-instance-000000a9.
Oct  2 09:00:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.759 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ed6afe6b-df95-4952-a11c-e9c35c61ad89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.767 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[db456c06-b926-43c4-b8bf-71e838e19f05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.803 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c2b5d6dd-26f9-4782-9344-56c13100533f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.832 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8862787b-a6e7-482e-8158-1ef1c5d6e60b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052f341a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:0a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 790113, 'reachable_time': 28247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296099, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.859 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6573aac8-ffb1-4dbb-8769-5da595758d93]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap052f341a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 790131, 'tstamp': 790131}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296101, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap052f341a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 790134, 'tstamp': 790134}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296101, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:00:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.862 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052f341a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:52 np0005466030 nova_compute[230518]: 2025-10-02 13:00:52.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.865 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap052f341a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.865 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:00:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.866 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap052f341a-00, col_values=(('external_ids', {'iface-id': '61e15bc4-7cff-4f2c-a6c4-d987859313b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:00:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.867 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:00:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:00:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:53.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:00:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:00:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:53.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:00:53 np0005466030 nova_compute[230518]: 2025-10-02 13:00:53.800 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410053.800076, 85538bf5-69f3-4c92-baf5-a998835df357 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:00:53 np0005466030 nova_compute[230518]: 2025-10-02 13:00:53.801 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] VM Started (Lifecycle Event)#033[00m
Oct  2 09:00:53 np0005466030 nova_compute[230518]: 2025-10-02 13:00:53.826 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:53 np0005466030 nova_compute[230518]: 2025-10-02 13:00:53.829 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410053.8001754, 85538bf5-69f3-4c92-baf5-a998835df357 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:00:53 np0005466030 nova_compute[230518]: 2025-10-02 13:00:53.830 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:00:53 np0005466030 nova_compute[230518]: 2025-10-02 13:00:53.847 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:53 np0005466030 nova_compute[230518]: 2025-10-02 13:00:53.850 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:00:53 np0005466030 nova_compute[230518]: 2025-10-02 13:00:53.878 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:00:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:54 np0005466030 nova_compute[230518]: 2025-10-02 13:00:54.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:55.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:55.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:56 np0005466030 nova_compute[230518]: 2025-10-02 13:00:56.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:00:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:57.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:00:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:57.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.775 2 DEBUG nova.compute.manager [req-1471ed84-181c-431d-a81e-c26c7b9ee9bb req-9d817f4e-f7f7-47c4-ad72-3d7e7662023d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Received event network-vif-plugged-1f164634-caf5-4a9f-b0af-a47e5681a252 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.775 2 DEBUG oslo_concurrency.lockutils [req-1471ed84-181c-431d-a81e-c26c7b9ee9bb req-9d817f4e-f7f7-47c4-ad72-3d7e7662023d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "85538bf5-69f3-4c92-baf5-a998835df357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.776 2 DEBUG oslo_concurrency.lockutils [req-1471ed84-181c-431d-a81e-c26c7b9ee9bb req-9d817f4e-f7f7-47c4-ad72-3d7e7662023d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.776 2 DEBUG oslo_concurrency.lockutils [req-1471ed84-181c-431d-a81e-c26c7b9ee9bb req-9d817f4e-f7f7-47c4-ad72-3d7e7662023d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.777 2 DEBUG nova.compute.manager [req-1471ed84-181c-431d-a81e-c26c7b9ee9bb req-9d817f4e-f7f7-47c4-ad72-3d7e7662023d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Processing event network-vif-plugged-1f164634-caf5-4a9f-b0af-a47e5681a252 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.778 2 DEBUG nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.784 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.785 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410057.7845087, 85538bf5-69f3-4c92-baf5-a998835df357 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.785 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.788 2 INFO nova.virt.libvirt.driver [-] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Instance spawned successfully.#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.789 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.816 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.822 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.827 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.827 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.828 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.829 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.829 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.830 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.864 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.905 2 INFO nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Took 10.92 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.906 2 DEBUG nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.973 2 INFO nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Took 12.01 seconds to build instance.#033[00m
Oct  2 09:00:57 np0005466030 nova_compute[230518]: 2025-10-02 13:00:57.992 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:00:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:59.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:00:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:00:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:59.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:00:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:00:59 np0005466030 nova_compute[230518]: 2025-10-02 13:00:59.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.216 2 DEBUG nova.compute.manager [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Received event network-vif-plugged-1f164634-caf5-4a9f-b0af-a47e5681a252 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.217 2 DEBUG oslo_concurrency.lockutils [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "85538bf5-69f3-4c92-baf5-a998835df357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.218 2 DEBUG oslo_concurrency.lockutils [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.219 2 DEBUG oslo_concurrency.lockutils [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.219 2 DEBUG nova.compute.manager [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] No waiting events found dispatching network-vif-plugged-1f164634-caf5-4a9f-b0af-a47e5681a252 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.220 2 WARNING nova.compute.manager [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Received unexpected event network-vif-plugged-1f164634-caf5-4a9f-b0af-a47e5681a252 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.705 2 DEBUG oslo_concurrency.lockutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "85538bf5-69f3-4c92-baf5-a998835df357" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.705 2 DEBUG oslo_concurrency.lockutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.706 2 DEBUG oslo_concurrency.lockutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "85538bf5-69f3-4c92-baf5-a998835df357-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.706 2 DEBUG oslo_concurrency.lockutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.706 2 DEBUG oslo_concurrency.lockutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.707 2 INFO nova.compute.manager [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Terminating instance#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.708 2 DEBUG nova.compute.manager [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:01:00 np0005466030 kernel: tap1f164634-ca (unregistering): left promiscuous mode
Oct  2 09:01:00 np0005466030 NetworkManager[44960]: <info>  [1759410060.7499] device (tap1f164634-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:00 np0005466030 ovn_controller[129257]: 2025-10-02T13:01:00Z|00704|binding|INFO|Releasing lport 1f164634-caf5-4a9f-b0af-a47e5681a252 from this chassis (sb_readonly=0)
Oct  2 09:01:00 np0005466030 ovn_controller[129257]: 2025-10-02T13:01:00Z|00705|binding|INFO|Setting lport 1f164634-caf5-4a9f-b0af-a47e5681a252 down in Southbound
Oct  2 09:01:00 np0005466030 ovn_controller[129257]: 2025-10-02T13:01:00Z|00706|binding|INFO|Removing iface tap1f164634-ca ovn-installed in OVS
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.770 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:b7:17 10.100.0.5'], port_security=['fa:16:3e:53:b7:17 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '85538bf5-69f3-4c92-baf5-a998835df357', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=1f164634-caf5-4a9f-b0af-a47e5681a252) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:01:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.771 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 1f164634-caf5-4a9f-b0af-a47e5681a252 in datapath 052f341a-0628-4183-a5e0-76312bc986c6 unbound from our chassis#033[00m
Oct  2 09:01:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.773 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 052f341a-0628-4183-a5e0-76312bc986c6#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.793 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[01e2f8e2-3966-4429-b4de-68dc975a47a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.823 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[145150bd-56b1-4ec0-9381-849cc7f64678]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:00 np0005466030 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000a9.scope: Deactivated successfully.
Oct  2 09:01:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.825 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f68dc402-2143-4802-a203-ced730c03032]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:00 np0005466030 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000a9.scope: Consumed 3.962s CPU time.
Oct  2 09:01:00 np0005466030 systemd-machined[188247]: Machine qemu-82-instance-000000a9 terminated.
Oct  2 09:01:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.855 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[10e667e3-bae2-4e5d-abf3-f4317c4312e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.872 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[972fb6ac-ab7b-4efb-87f1-a9d875934eb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052f341a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:0a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 790113, 'reachable_time': 28247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296155, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.887 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ca3ccfe9-0e43-4ea9-9be6-8ef3b69443d6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap052f341a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 790131, 'tstamp': 790131}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296156, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap052f341a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 790134, 'tstamp': 790134}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296156, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.889 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052f341a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.895 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap052f341a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.895 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:01:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.895 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap052f341a-00, col_values=(('external_ids', {'iface-id': '61e15bc4-7cff-4f2c-a6c4-d987859313b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.896 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.952 2 INFO nova.virt.libvirt.driver [-] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Instance destroyed successfully.#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.953 2 DEBUG nova.objects.instance [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'resources' on Instance uuid 85538bf5-69f3-4c92-baf5-a998835df357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.968 2 DEBUG nova.virt.libvirt.vif [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:00:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-350886286',display_name='tempest-ServersTestJSON-server-350886286',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-350886286',id=169,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:00:57Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-deu6rtq4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',imag
e_min_ram='0',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:00:57Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=85538bf5-69f3-4c92-baf5-a998835df357,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.968 2 DEBUG nova.network.os_vif_util [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.969 2 DEBUG nova.network.os_vif_util [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:b7:17,bridge_name='br-int',has_traffic_filtering=True,id=1f164634-caf5-4a9f-b0af-a47e5681a252,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f164634-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.969 2 DEBUG os_vif [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:b7:17,bridge_name='br-int',has_traffic_filtering=True,id=1f164634-caf5-4a9f-b0af-a47e5681a252,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f164634-ca') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.970 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f164634-ca, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:01:00 np0005466030 nova_compute[230518]: 2025-10-02 13:01:00.977 2 INFO os_vif [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:b7:17,bridge_name='br-int',has_traffic_filtering=True,id=1f164634-caf5-4a9f-b0af-a47e5681a252,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f164634-ca')#033[00m
Oct  2 09:01:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:01.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:01.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:01 np0005466030 nova_compute[230518]: 2025-10-02 13:01:01.580 2 INFO nova.virt.libvirt.driver [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Deleting instance files /var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357_del#033[00m
Oct  2 09:01:01 np0005466030 nova_compute[230518]: 2025-10-02 13:01:01.583 2 INFO nova.virt.libvirt.driver [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Deletion of /var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357_del complete#033[00m
Oct  2 09:01:01 np0005466030 nova_compute[230518]: 2025-10-02 13:01:01.649 2 INFO nova.compute.manager [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Took 0.94 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:01:01 np0005466030 nova_compute[230518]: 2025-10-02 13:01:01.650 2 DEBUG oslo.service.loopingcall [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:01:01 np0005466030 nova_compute[230518]: 2025-10-02 13:01:01.651 2 DEBUG nova.compute.manager [-] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:01:01 np0005466030 nova_compute[230518]: 2025-10-02 13:01:01.652 2 DEBUG nova.network.neutron [-] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:01:02 np0005466030 nova_compute[230518]: 2025-10-02 13:01:02.369 2 DEBUG nova.compute.manager [req-92ca2cc3-59cf-4763-a6b4-3ac2ab2643da req-f23f1b61-8414-4334-960d-72723c6e0086 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Received event network-vif-unplugged-1f164634-caf5-4a9f-b0af-a47e5681a252 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:01:02 np0005466030 nova_compute[230518]: 2025-10-02 13:01:02.371 2 DEBUG oslo_concurrency.lockutils [req-92ca2cc3-59cf-4763-a6b4-3ac2ab2643da req-f23f1b61-8414-4334-960d-72723c6e0086 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "85538bf5-69f3-4c92-baf5-a998835df357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:02 np0005466030 nova_compute[230518]: 2025-10-02 13:01:02.371 2 DEBUG oslo_concurrency.lockutils [req-92ca2cc3-59cf-4763-a6b4-3ac2ab2643da req-f23f1b61-8414-4334-960d-72723c6e0086 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:02 np0005466030 nova_compute[230518]: 2025-10-02 13:01:02.371 2 DEBUG oslo_concurrency.lockutils [req-92ca2cc3-59cf-4763-a6b4-3ac2ab2643da req-f23f1b61-8414-4334-960d-72723c6e0086 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:02 np0005466030 nova_compute[230518]: 2025-10-02 13:01:02.372 2 DEBUG nova.compute.manager [req-92ca2cc3-59cf-4763-a6b4-3ac2ab2643da req-f23f1b61-8414-4334-960d-72723c6e0086 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] No waiting events found dispatching network-vif-unplugged-1f164634-caf5-4a9f-b0af-a47e5681a252 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:01:02 np0005466030 nova_compute[230518]: 2025-10-02 13:01:02.372 2 DEBUG nova.compute.manager [req-92ca2cc3-59cf-4763-a6b4-3ac2ab2643da req-f23f1b61-8414-4334-960d-72723c6e0086 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Received event network-vif-unplugged-1f164634-caf5-4a9f-b0af-a47e5681a252 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:01:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:02.721 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:01:02 np0005466030 nova_compute[230518]: 2025-10-02 13:01:02.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:02.723 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:01:02 np0005466030 nova_compute[230518]: 2025-10-02 13:01:02.764 2 DEBUG nova.network.neutron [-] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:01:02 np0005466030 nova_compute[230518]: 2025-10-02 13:01:02.791 2 INFO nova.compute.manager [-] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Took 1.14 seconds to deallocate network for instance.#033[00m
Oct  2 09:01:02 np0005466030 nova_compute[230518]: 2025-10-02 13:01:02.853 2 DEBUG oslo_concurrency.lockutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:02 np0005466030 nova_compute[230518]: 2025-10-02 13:01:02.853 2 DEBUG oslo_concurrency.lockutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:02 np0005466030 nova_compute[230518]: 2025-10-02 13:01:02.887 2 DEBUG nova.compute.manager [req-8b9deb92-5493-41f6-bdb7-ebe8b99630b2 req-d4c2d0b2-7b09-4428-94eb-e8c34bbf7386 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Received event network-vif-deleted-1f164634-caf5-4a9f-b0af-a47e5681a252 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:01:02 np0005466030 nova_compute[230518]: 2025-10-02 13:01:02.949 2 DEBUG oslo_concurrency.processutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:03.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:03 np0005466030 ovn_controller[129257]: 2025-10-02T13:01:03Z|00707|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct  2 09:01:03 np0005466030 nova_compute[230518]: 2025-10-02 13:01:03.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:01:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:03.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:01:03 np0005466030 ovn_controller[129257]: 2025-10-02T13:01:03Z|00708|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct  2 09:01:03 np0005466030 nova_compute[230518]: 2025-10-02 13:01:03.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:01:03 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2464258050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:01:03 np0005466030 nova_compute[230518]: 2025-10-02 13:01:03.363 2 DEBUG oslo_concurrency.processutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:03 np0005466030 nova_compute[230518]: 2025-10-02 13:01:03.369 2 DEBUG nova.compute.provider_tree [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:01:03 np0005466030 nova_compute[230518]: 2025-10-02 13:01:03.386 2 DEBUG nova.scheduler.client.report [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:01:03 np0005466030 nova_compute[230518]: 2025-10-02 13:01:03.417 2 DEBUG oslo_concurrency.lockutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:03 np0005466030 nova_compute[230518]: 2025-10-02 13:01:03.449 2 INFO nova.scheduler.client.report [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Deleted allocations for instance 85538bf5-69f3-4c92-baf5-a998835df357#033[00m
Oct  2 09:01:03 np0005466030 nova_compute[230518]: 2025-10-02 13:01:03.512 2 DEBUG oslo_concurrency.lockutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:04 np0005466030 nova_compute[230518]: 2025-10-02 13:01:04.493 2 DEBUG nova.compute.manager [req-0b22dc4e-83f5-4edd-a0e2-386124f52005 req-a4fea588-37df-4c71-8fc2-f9bd11e0c922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Received event network-vif-plugged-1f164634-caf5-4a9f-b0af-a47e5681a252 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:01:04 np0005466030 nova_compute[230518]: 2025-10-02 13:01:04.494 2 DEBUG oslo_concurrency.lockutils [req-0b22dc4e-83f5-4edd-a0e2-386124f52005 req-a4fea588-37df-4c71-8fc2-f9bd11e0c922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "85538bf5-69f3-4c92-baf5-a998835df357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:04 np0005466030 nova_compute[230518]: 2025-10-02 13:01:04.494 2 DEBUG oslo_concurrency.lockutils [req-0b22dc4e-83f5-4edd-a0e2-386124f52005 req-a4fea588-37df-4c71-8fc2-f9bd11e0c922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:04 np0005466030 nova_compute[230518]: 2025-10-02 13:01:04.494 2 DEBUG oslo_concurrency.lockutils [req-0b22dc4e-83f5-4edd-a0e2-386124f52005 req-a4fea588-37df-4c71-8fc2-f9bd11e0c922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:04 np0005466030 nova_compute[230518]: 2025-10-02 13:01:04.494 2 DEBUG nova.compute.manager [req-0b22dc4e-83f5-4edd-a0e2-386124f52005 req-a4fea588-37df-4c71-8fc2-f9bd11e0c922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] No waiting events found dispatching network-vif-plugged-1f164634-caf5-4a9f-b0af-a47e5681a252 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:01:04 np0005466030 nova_compute[230518]: 2025-10-02 13:01:04.495 2 WARNING nova.compute.manager [req-0b22dc4e-83f5-4edd-a0e2-386124f52005 req-a4fea588-37df-4c71-8fc2-f9bd11e0c922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Received unexpected event network-vif-plugged-1f164634-caf5-4a9f-b0af-a47e5681a252 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 09:01:04 np0005466030 nova_compute[230518]: 2025-10-02 13:01:04.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:05.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:05.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
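The recurring `beast:` lines are radosgw's frontend access log for the load-balancer health probes (anonymous `HEAD /` every two seconds from 192.168.122.100/.102). Their layout here is request pointer, client IP, user, timestamp, request line, HTTP status, byte count, and latency. A small parser for this layout (the regex is inferred from the lines above, not from an official radosgw format specification):

```python
import re

# Field layout assumed from the observed log lines above.
BEAST_RE = re.compile(
    r'beast: (?P<ptr>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)

def parse_beast(line):
    """Return a dict of fields from a radosgw beast access-log line, or None."""
    m = BEAST_RE.search(line)
    return m.groupdict() if m else None

sample = ('beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous '
          '[02/Oct/2025:13:01:05.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
          'latency=0.000000000s')
print(parse_beast(sample)['ip'])  # -> 192.168.122.102
```

Grouping parsed lines by `ip` and `status` quickly separates health probes from real client traffic when skimming a capture like this one.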
Oct  2 09:01:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:01:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2695771455' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:01:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:01:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2695771455' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:01:05 np0005466030 nova_compute[230518]: 2025-10-02 13:01:05.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:07.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:07.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:08 np0005466030 podman[296222]: 2025-10-02 13:01:08.866685538 +0000 UTC m=+0.111581049 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:01:08 np0005466030 podman[296221]: 2025-10-02 13:01:08.898220837 +0000 UTC m=+0.136592674 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:01:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:01:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:09.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:01:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:09.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:09 np0005466030 nova_compute[230518]: 2025-10-02 13:01:09.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:10 np0005466030 nova_compute[230518]: 2025-10-02 13:01:10.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:11.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:11.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:11.726 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:12 np0005466030 nova_compute[230518]: 2025-10-02 13:01:12.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:13 np0005466030 nova_compute[230518]: 2025-10-02 13:01:13.066 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:13 np0005466030 nova_compute[230518]: 2025-10-02 13:01:13.107 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:13 np0005466030 nova_compute[230518]: 2025-10-02 13:01:13.108 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:13 np0005466030 nova_compute[230518]: 2025-10-02 13:01:13.109 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:13 np0005466030 nova_compute[230518]: 2025-10-02 13:01:13.109 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:01:13 np0005466030 nova_compute[230518]: 2025-10-02 13:01:13.110 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:13.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:13.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:01:13 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2969665086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:01:13 np0005466030 nova_compute[230518]: 2025-10-02 13:01:13.599 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:13 np0005466030 nova_compute[230518]: 2025-10-02 13:01:13.684 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:01:13 np0005466030 nova_compute[230518]: 2025-10-02 13:01:13.684 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:01:13 np0005466030 nova_compute[230518]: 2025-10-02 13:01:13.944 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:01:13 np0005466030 nova_compute[230518]: 2025-10-02 13:01:13.946 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4088MB free_disk=20.841327667236328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m

Oct  2 09:01:13 np0005466030 nova_compute[230518]: 2025-10-02 13:01:13.946 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:13 np0005466030 nova_compute[230518]: 2025-10-02 13:01:13.946 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:14 np0005466030 nova_compute[230518]: 2025-10-02 13:01:14.092 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7c31bb0f-22b5-42a4-9b38-8ad3daac689f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:01:14 np0005466030 nova_compute[230518]: 2025-10-02 13:01:14.093 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:01:14 np0005466030 nova_compute[230518]: 2025-10-02 13:01:14.093 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:01:14 np0005466030 nova_compute[230518]: 2025-10-02 13:01:14.169 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:01:14 np0005466030 nova_compute[230518]: 2025-10-02 13:01:14.224 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:01:14 np0005466030 nova_compute[230518]: 2025-10-02 13:01:14.225 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
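The inventory dict being pushed to the provider tree carries `total`, `reserved`, and `allocation_ratio` per resource class; the schedulable capacity placement works from is effectively `(total - reserved) * allocation_ratio`. A quick check of the numbers in the line above (the formula and rounding behavior are my reading, not lifted from the placement source):

```python
# Inventory values copied from the ProviderTree update in the log above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
}

def capacity(inv):
    """Effective schedulable capacity per resource class (assumed formula)."""
    return {rc: (v['total'] - v['reserved']) * v['allocation_ratio']
            for rc, v in inv.items()}

print(capacity(inventory))
# Roughly: VCPU 32, MEMORY_MB 7167, DISK_GB ~17.1 under the assumed formula.
```

That explains how a host with 8 physical vCPUs and one busy instance can still show 7 free: overcommit (`allocation_ratio: 4.0`) applies in placement, while the "free_vcpus=7" hypervisor view counts physical cores.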
Oct  2 09:01:14 np0005466030 nova_compute[230518]: 2025-10-02 13:01:14.254 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:01:14 np0005466030 nova_compute[230518]: 2025-10-02 13:01:14.287 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:01:14 np0005466030 nova_compute[230518]: 2025-10-02 13:01:14.321 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:14 np0005466030 nova_compute[230518]: 2025-10-02 13:01:14.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:01:14 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/507289924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:01:14 np0005466030 nova_compute[230518]: 2025-10-02 13:01:14.739 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:14 np0005466030 nova_compute[230518]: 2025-10-02 13:01:14.744 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:01:14 np0005466030 nova_compute[230518]: 2025-10-02 13:01:14.767 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:01:14 np0005466030 nova_compute[230518]: 2025-10-02 13:01:14.798 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:01:14 np0005466030 nova_compute[230518]: 2025-10-02 13:01:14.798 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:15.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:15.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:15 np0005466030 nova_compute[230518]: 2025-10-02 13:01:15.952 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410060.9510663, 85538bf5-69f3-4c92-baf5-a998835df357 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:01:15 np0005466030 nova_compute[230518]: 2025-10-02 13:01:15.952 2 INFO nova.compute.manager [-] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:01:15 np0005466030 nova_compute[230518]: 2025-10-02 13:01:15.973 2 DEBUG nova.compute.manager [None req-ff0c47df-082b-420b-b08b-26a39df055f6 - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:01:15 np0005466030 nova_compute[230518]: 2025-10-02 13:01:15.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:01:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:17.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:01:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:01:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:17.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:01:18 np0005466030 nova_compute[230518]: 2025-10-02 13:01:18.785 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:18 np0005466030 nova_compute[230518]: 2025-10-02 13:01:18.786 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:18 np0005466030 nova_compute[230518]: 2025-10-02 13:01:18.786 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:18 np0005466030 nova_compute[230518]: 2025-10-02 13:01:18.787 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:18 np0005466030 nova_compute[230518]: 2025-10-02 13:01:18.787 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
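The burst of "Running periodic task ComputeManager._*" lines comes from oslo.service's periodic-task machinery iterating the manager's registered tasks; `_reclaim_queued_deletes` then short-circuits because `reclaim_instance_interval <= 0`. A toy version of that dispatch-and-skip pattern (class and method names are illustrative stand-ins):

```python
class PeriodicRunner:
    """Toy periodic-task dispatcher: run each registered task once per cycle,
    letting a task skip its own work based on configuration, as
    _reclaim_queued_deletes does when reclaim_instance_interval <= 0."""

    def __init__(self, reclaim_instance_interval=0):
        self.reclaim_instance_interval = reclaim_instance_interval
        self.ran = []  # record of what each cycle actually did

    def _poll_rebooting_instances(self):
        self.ran.append('poll_rebooting')

    def _reclaim_queued_deletes(self):
        if self.reclaim_instance_interval <= 0:
            # Mirrors "CONF.reclaim_instance_interval <= 0, skipping..."
            self.ran.append('reclaim_skipped')
            return
        self.ran.append('reclaim')

    def run_periodic_tasks(self):
        for task in (self._poll_rebooting_instances,
                     self._reclaim_queued_deletes):
            print(f"Running periodic task ComputeManager.{task.__name__}")
            task()
```

With the default interval of 0 (as on this node), soft-deleted instances are never reclaimed by the periodic task; deletes take effect immediately instead.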
Oct  2 09:01:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:19.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:01:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:19.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:01:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:19 np0005466030 nova_compute[230518]: 2025-10-02 13:01:19.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:19 np0005466030 podman[296313]: 2025-10-02 13:01:19.803534869 +0000 UTC m=+0.053111962 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:01:19 np0005466030 podman[296314]: 2025-10-02 13:01:19.825656765 +0000 UTC m=+0.064338059 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  2 09:01:20 np0005466030 nova_compute[230518]: 2025-10-02 13:01:20.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:21 np0005466030 nova_compute[230518]: 2025-10-02 13:01:21.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:21 np0005466030 nova_compute[230518]: 2025-10-02 13:01:21.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:01:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:01:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:21.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:01:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:21.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:21 np0005466030 nova_compute[230518]: 2025-10-02 13:01:21.672 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:21 np0005466030 nova_compute[230518]: 2025-10-02 13:01:21.672 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:21 np0005466030 nova_compute[230518]: 2025-10-02 13:01:21.730 2 DEBUG nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:01:21 np0005466030 nova_compute[230518]: 2025-10-02 13:01:21.977 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:21 np0005466030 nova_compute[230518]: 2025-10-02 13:01:21.978 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:21 np0005466030 nova_compute[230518]: 2025-10-02 13:01:21.989 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:01:21 np0005466030 nova_compute[230518]: 2025-10-02 13:01:21.990 2 INFO nova.compute.claims [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:01:22 np0005466030 nova_compute[230518]: 2025-10-02 13:01:22.141 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:22 np0005466030 nova_compute[230518]: 2025-10-02 13:01:22.309 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:01:22 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2188608974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:01:22 np0005466030 nova_compute[230518]: 2025-10-02 13:01:22.784 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:22 np0005466030 nova_compute[230518]: 2025-10-02 13:01:22.789 2 DEBUG nova.compute.provider_tree [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:01:22 np0005466030 nova_compute[230518]: 2025-10-02 13:01:22.849 2 DEBUG nova.scheduler.client.report [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:01:22 np0005466030 nova_compute[230518]: 2025-10-02 13:01:22.890 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:22 np0005466030 nova_compute[230518]: 2025-10-02 13:01:22.891 2 DEBUG nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:01:22 np0005466030 nova_compute[230518]: 2025-10-02 13:01:22.965 2 DEBUG nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:01:22 np0005466030 nova_compute[230518]: 2025-10-02 13:01:22.965 2 DEBUG nova.network.neutron [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:01:22 np0005466030 nova_compute[230518]: 2025-10-02 13:01:22.986 2 INFO nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:01:23 np0005466030 nova_compute[230518]: 2025-10-02 13:01:23.023 2 DEBUG nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:01:23 np0005466030 nova_compute[230518]: 2025-10-02 13:01:23.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:23 np0005466030 nova_compute[230518]: 2025-10-02 13:01:23.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:23.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:23.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:23 np0005466030 nova_compute[230518]: 2025-10-02 13:01:23.351 2 DEBUG nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:01:23 np0005466030 nova_compute[230518]: 2025-10-02 13:01:23.353 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:01:23 np0005466030 nova_compute[230518]: 2025-10-02 13:01:23.354 2 INFO nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Creating image(s)#033[00m
Oct  2 09:01:23 np0005466030 nova_compute[230518]: 2025-10-02 13:01:23.396 2 DEBUG nova.storage.rbd_utils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:01:23 np0005466030 nova_compute[230518]: 2025-10-02 13:01:23.440 2 DEBUG nova.storage.rbd_utils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:01:23 np0005466030 nova_compute[230518]: 2025-10-02 13:01:23.485 2 DEBUG nova.storage.rbd_utils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:01:23 np0005466030 nova_compute[230518]: 2025-10-02 13:01:23.492 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:23 np0005466030 nova_compute[230518]: 2025-10-02 13:01:23.596 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:23 np0005466030 nova_compute[230518]: 2025-10-02 13:01:23.598 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:23 np0005466030 nova_compute[230518]: 2025-10-02 13:01:23.599 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:23 np0005466030 nova_compute[230518]: 2025-10-02 13:01:23.600 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:23 np0005466030 nova_compute[230518]: 2025-10-02 13:01:23.639 2 DEBUG nova.storage.rbd_utils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:01:23 np0005466030 nova_compute[230518]: 2025-10-02 13:01:23.643 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:23 np0005466030 nova_compute[230518]: 2025-10-02 13:01:23.672 2 DEBUG nova.policy [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:01:24 np0005466030 nova_compute[230518]: 2025-10-02 13:01:24.098 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:24 np0005466030 nova_compute[230518]: 2025-10-02 13:01:24.172 2 DEBUG nova.storage.rbd_utils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:01:24 np0005466030 nova_compute[230518]: 2025-10-02 13:01:24.278 2 DEBUG nova.objects.instance [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid 0386b301-1cd5-430d-8fc1-691b6bc3ad47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:01:24 np0005466030 nova_compute[230518]: 2025-10-02 13:01:24.318 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:01:24 np0005466030 nova_compute[230518]: 2025-10-02 13:01:24.319 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Ensure instance console log exists: /var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:01:24 np0005466030 nova_compute[230518]: 2025-10-02 13:01:24.319 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:24 np0005466030 nova_compute[230518]: 2025-10-02 13:01:24.319 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:24 np0005466030 nova_compute[230518]: 2025-10-02 13:01:24.320 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:24 np0005466030 nova_compute[230518]: 2025-10-02 13:01:24.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:01:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:25.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:01:25 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:01:25 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:01:25 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:01:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:01:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:25.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:01:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:25.957 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:25.958 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:25.959 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:25 np0005466030 nova_compute[230518]: 2025-10-02 13:01:25.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:26 np0005466030 nova_compute[230518]: 2025-10-02 13:01:26.058 2 DEBUG nova.network.neutron [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Successfully created port: 11701be6-55ae-458a-a3a0-55a0c467ef46 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:01:27 np0005466030 nova_compute[230518]: 2025-10-02 13:01:27.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:01:27 np0005466030 nova_compute[230518]: 2025-10-02 13:01:27.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:01:27 np0005466030 nova_compute[230518]: 2025-10-02 13:01:27.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:01:27 np0005466030 nova_compute[230518]: 2025-10-02 13:01:27.118 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 09:01:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:01:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:27.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:01:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:01:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:27.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:01:27 np0005466030 nova_compute[230518]: 2025-10-02 13:01:27.638 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:01:27 np0005466030 nova_compute[230518]: 2025-10-02 13:01:27.639 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:01:27 np0005466030 nova_compute[230518]: 2025-10-02 13:01:27.639 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:01:27 np0005466030 nova_compute[230518]: 2025-10-02 13:01:27.639 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7c31bb0f-22b5-42a4-9b38-8ad3daac689f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:01:28 np0005466030 nova_compute[230518]: 2025-10-02 13:01:28.111 2 DEBUG nova.network.neutron [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Successfully updated port: 11701be6-55ae-458a-a3a0-55a0c467ef46 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:01:28 np0005466030 nova_compute[230518]: 2025-10-02 13:01:28.135 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:01:28 np0005466030 nova_compute[230518]: 2025-10-02 13:01:28.135 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:01:28 np0005466030 nova_compute[230518]: 2025-10-02 13:01:28.135 2 DEBUG nova.network.neutron [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:01:28 np0005466030 nova_compute[230518]: 2025-10-02 13:01:28.592 2 DEBUG nova.compute.manager [req-9bcba9df-e0e1-4e34-9950-e65702ba2259 req-b08f0e07-f875-4b96-9197-fa7dc323a575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received event network-changed-11701be6-55ae-458a-a3a0-55a0c467ef46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:01:28 np0005466030 nova_compute[230518]: 2025-10-02 13:01:28.593 2 DEBUG nova.compute.manager [req-9bcba9df-e0e1-4e34-9950-e65702ba2259 req-b08f0e07-f875-4b96-9197-fa7dc323a575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Refreshing instance network info cache due to event network-changed-11701be6-55ae-458a-a3a0-55a0c467ef46. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:01:28 np0005466030 nova_compute[230518]: 2025-10-02 13:01:28.594 2 DEBUG oslo_concurrency.lockutils [req-9bcba9df-e0e1-4e34-9950-e65702ba2259 req-b08f0e07-f875-4b96-9197-fa7dc323a575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:01:28 np0005466030 nova_compute[230518]: 2025-10-02 13:01:28.766 2 DEBUG nova.network.neutron [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:01:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:29.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:29.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:29 np0005466030 nova_compute[230518]: 2025-10-02 13:01:29.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:30 np0005466030 nova_compute[230518]: 2025-10-02 13:01:30.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:31.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:01:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:31.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.423 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Updating instance_info_cache with network_info: [{"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.474 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.475 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.538 2 DEBUG nova.network.neutron [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Updating instance_info_cache with network_info: [{"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.574 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.575 2 DEBUG nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Instance network_info: |[{"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.576 2 DEBUG oslo_concurrency.lockutils [req-9bcba9df-e0e1-4e34-9950-e65702ba2259 req-b08f0e07-f875-4b96-9197-fa7dc323a575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.576 2 DEBUG nova.network.neutron [req-9bcba9df-e0e1-4e34-9950-e65702ba2259 req-b08f0e07-f875-4b96-9197-fa7dc323a575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Refreshing network info cache for port 11701be6-55ae-458a-a3a0-55a0c467ef46 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.582 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Start _get_guest_xml network_info=[{"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.589 2 WARNING nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.595 2 DEBUG nova.virt.libvirt.host [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.597 2 DEBUG nova.virt.libvirt.host [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.609 2 DEBUG nova.virt.libvirt.host [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.610 2 DEBUG nova.virt.libvirt.host [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.612 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.613 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.614 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.614 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.615 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.615 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.616 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.616 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.617 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.617 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.618 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.618 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:01:31 np0005466030 nova_compute[230518]: 2025-10-02 13:01:31.623 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:01:32 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2312900658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:01:32 np0005466030 nova_compute[230518]: 2025-10-02 13:01:32.127 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:32 np0005466030 nova_compute[230518]: 2025-10-02 13:01:32.156 2 DEBUG nova.storage.rbd_utils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:01:32 np0005466030 nova_compute[230518]: 2025-10-02 13:01:32.160 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:01:32 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/204003872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:01:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:01:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:33.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.173 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.176 2 DEBUG nova.virt.libvirt.vif [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:01:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-597885113',display_name='tempest-TestNetworkBasicOps-server-597885113',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-597885113',id=171,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG3FuXpBgVbHgGBq4egCNtz2a9jD/YtFdw9deNrsDMTopNMA3CO5MDdqo+hovI4wpSsKj6W1YZIokPp0dIAbUjy55TePr3Cxog4gFZ9e9nz0xlxf36KAvzLNH0NRnqtk4g==',key_name='tempest-TestNetworkBasicOps-216720718',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-c0lf99pp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:01:23Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=0386b301-1cd5-430d-8fc1-691b6bc3ad47,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.176 2 DEBUG nova.network.os_vif_util [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.177 2 DEBUG nova.network.os_vif_util [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:f2:3d,bridge_name='br-int',has_traffic_filtering=True,id=11701be6-55ae-458a-a3a0-55a0c467ef46,network=Network(1c43df3b-870e-48e7-aa17-655e3c34fe90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11701be6-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.179 2 DEBUG nova.objects.instance [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0386b301-1cd5-430d-8fc1-691b6bc3ad47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.196 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:01:33 np0005466030 nova_compute[230518]:  <uuid>0386b301-1cd5-430d-8fc1-691b6bc3ad47</uuid>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:  <name>instance-000000ab</name>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <nova:name>tempest-TestNetworkBasicOps-server-597885113</nova:name>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:01:31</nova:creationTime>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:01:33 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:        <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:        <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:        <nova:port uuid="11701be6-55ae-458a-a3a0-55a0c467ef46">
Oct  2 09:01:33 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <entry name="serial">0386b301-1cd5-430d-8fc1-691b6bc3ad47</entry>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <entry name="uuid">0386b301-1cd5-430d-8fc1-691b6bc3ad47</entry>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk">
Oct  2 09:01:33 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:01:33 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk.config">
Oct  2 09:01:33 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:01:33 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:55:f2:3d"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <target dev="tap11701be6-55"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47/console.log" append="off"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:01:33 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:01:33 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:01:33 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:01:33 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.198 2 DEBUG nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Preparing to wait for external event network-vif-plugged-11701be6-55ae-458a-a3a0-55a0c467ef46 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.198 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.199 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.199 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.200 2 DEBUG nova.virt.libvirt.vif [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:01:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-597885113',display_name='tempest-TestNetworkBasicOps-server-597885113',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-597885113',id=171,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG3FuXpBgVbHgGBq4egCNtz2a9jD/YtFdw9deNrsDMTopNMA3CO5MDdqo+hovI4wpSsKj6W1YZIokPp0dIAbUjy55TePr3Cxog4gFZ9e9nz0xlxf36KAvzLNH0NRnqtk4g==',key_name='tempest-TestNetworkBasicOps-216720718',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-c0lf99pp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:01:23Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=0386b301-1cd5-430d-8fc1-691b6bc3ad47,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.200 2 DEBUG nova.network.os_vif_util [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.201 2 DEBUG nova.network.os_vif_util [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:f2:3d,bridge_name='br-int',has_traffic_filtering=True,id=11701be6-55ae-458a-a3a0-55a0c467ef46,network=Network(1c43df3b-870e-48e7-aa17-655e3c34fe90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11701be6-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.201 2 DEBUG os_vif [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:f2:3d,bridge_name='br-int',has_traffic_filtering=True,id=11701be6-55ae-458a-a3a0-55a0c467ef46,network=Network(1c43df3b-870e-48e7-aa17-655e3c34fe90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11701be6-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.203 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.203 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.208 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11701be6-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.209 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap11701be6-55, col_values=(('external_ids', {'iface-id': '11701be6-55ae-458a-a3a0-55a0c467ef46', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:55:f2:3d', 'vm-uuid': '0386b301-1cd5-430d-8fc1-691b6bc3ad47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:33 np0005466030 NetworkManager[44960]: <info>  [1759410093.2125] manager: (tap11701be6-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.219 2 INFO os_vif [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:f2:3d,bridge_name='br-int',has_traffic_filtering=True,id=11701be6-55ae-458a-a3a0-55a0c467ef46,network=Network(1c43df3b-870e-48e7-aa17-655e3c34fe90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11701be6-55')#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.320 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.320 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.321 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:55:f2:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.322 2 INFO nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Using config drive#033[00m
Oct  2 09:01:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:33.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.356 2 DEBUG nova.storage.rbd_utils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.809 2 INFO nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Creating config drive at /var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47/disk.config#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.820 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5wckt7u5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:33 np0005466030 nova_compute[230518]: 2025-10-02 13:01:33.987 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5wckt7u5" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:34 np0005466030 nova_compute[230518]: 2025-10-02 13:01:34.042 2 DEBUG nova.storage.rbd_utils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:01:34 np0005466030 nova_compute[230518]: 2025-10-02 13:01:34.050 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47/disk.config 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:01:34 np0005466030 nova_compute[230518]: 2025-10-02 13:01:34.286 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47/disk.config 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:01:34 np0005466030 nova_compute[230518]: 2025-10-02 13:01:34.287 2 INFO nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Deleting local config drive /var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47/disk.config because it was imported into RBD.#033[00m
Oct  2 09:01:34 np0005466030 NetworkManager[44960]: <info>  [1759410094.3307] manager: (tap11701be6-55): new Tun device (/org/freedesktop/NetworkManager/Devices/333)
Oct  2 09:01:34 np0005466030 kernel: tap11701be6-55: entered promiscuous mode
Oct  2 09:01:34 np0005466030 ovn_controller[129257]: 2025-10-02T13:01:34Z|00709|binding|INFO|Claiming lport 11701be6-55ae-458a-a3a0-55a0c467ef46 for this chassis.
Oct  2 09:01:34 np0005466030 ovn_controller[129257]: 2025-10-02T13:01:34Z|00710|binding|INFO|11701be6-55ae-458a-a3a0-55a0c467ef46: Claiming fa:16:3e:55:f2:3d 10.100.0.13
Oct  2 09:01:34 np0005466030 nova_compute[230518]: 2025-10-02 13:01:34.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.343 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:f2:3d 10.100.0.13'], port_security=['fa:16:3e:55:f2:3d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0386b301-1cd5-430d-8fc1-691b6bc3ad47', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c43df3b-870e-48e7-aa17-655e3c34fe90', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'deb73c46-dc90-4a9e-bf71-ad13a1f027f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f19ea678-25e4-4fc8-8150-d09743da6d71, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=11701be6-55ae-458a-a3a0-55a0c467ef46) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.344 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 11701be6-55ae-458a-a3a0-55a0c467ef46 in datapath 1c43df3b-870e-48e7-aa17-655e3c34fe90 bound to our chassis#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.346 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1c43df3b-870e-48e7-aa17-655e3c34fe90#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.357 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[357b668d-dc01-48a3-bae3-329e2f99ff25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.358 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1c43df3b-81 in ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.359 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1c43df3b-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.360 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7121b895-5685-42af-bda6-6da8b4f3ab4f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.360 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e75ca27b-9f0b-4c56-b669-a5ddcc845994]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:34 np0005466030 systemd-machined[188247]: New machine qemu-83-instance-000000ab.
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.372 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2b06c3-7541-4c49-a902-b29ab09ad8ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.396 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[60624069-a6ed-4250-a349-a3e976764eb1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:34 np0005466030 systemd[1]: Started Virtual Machine qemu-83-instance-000000ab.
Oct  2 09:01:34 np0005466030 nova_compute[230518]: 2025-10-02 13:01:34.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:34 np0005466030 ovn_controller[129257]: 2025-10-02T13:01:34Z|00711|binding|INFO|Setting lport 11701be6-55ae-458a-a3a0-55a0c467ef46 ovn-installed in OVS
Oct  2 09:01:34 np0005466030 ovn_controller[129257]: 2025-10-02T13:01:34Z|00712|binding|INFO|Setting lport 11701be6-55ae-458a-a3a0-55a0c467ef46 up in Southbound
Oct  2 09:01:34 np0005466030 nova_compute[230518]: 2025-10-02 13:01:34.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:34 np0005466030 systemd-udevd[296812]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:01:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:34 np0005466030 NetworkManager[44960]: <info>  [1759410094.4240] device (tap11701be6-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:01:34 np0005466030 NetworkManager[44960]: <info>  [1759410094.4249] device (tap11701be6-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.429 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6432d229-c028-455e-9c69-899b4f695847]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.434 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[733ddcf7-48f6-42f6-8d14-7870df2d5354]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:34 np0005466030 NetworkManager[44960]: <info>  [1759410094.4357] manager: (tap1c43df3b-80): new Veth device (/org/freedesktop/NetworkManager/Devices/334)
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.470 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[71ccc93d-0aef-4c51-820d-88b056cce331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.473 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[64f34eed-2af5-4210-89bb-7289b8a18a5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:34 np0005466030 NetworkManager[44960]: <info>  [1759410094.4977] device (tap1c43df3b-80): carrier: link connected
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.502 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f65daf90-722c-472c-b5db-ccbf577d7a26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.518 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2353ed9f-b23d-4464-a53d-a118618faf6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1c43df3b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:10:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 219], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795805, 'reachable_time': 44445, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296841, 'error': None, 'target': 'ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.530 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f4152c25-c4c3-4426-ab5c-472309b96cd4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:1092'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795805, 'tstamp': 795805}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296842, 'error': None, 'target': 'ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:34 np0005466030 nova_compute[230518]: 2025-10-02 13:01:34.546 2 DEBUG nova.network.neutron [req-9bcba9df-e0e1-4e34-9950-e65702ba2259 req-b08f0e07-f875-4b96-9197-fa7dc323a575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Updated VIF entry in instance network info cache for port 11701be6-55ae-458a-a3a0-55a0c467ef46. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:01:34 np0005466030 nova_compute[230518]: 2025-10-02 13:01:34.547 2 DEBUG nova.network.neutron [req-9bcba9df-e0e1-4e34-9950-e65702ba2259 req-b08f0e07-f875-4b96-9197-fa7dc323a575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Updating instance_info_cache with network_info: [{"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.545 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e829abc5-4e97-449e-b95a-c43f8f059d10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1c43df3b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:10:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 219], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795805, 'reachable_time': 44445, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296843, 'error': None, 'target': 'ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.571 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[619c8fbf-3831-43a9-aee3-da8e02ced3bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:34 np0005466030 nova_compute[230518]: 2025-10-02 13:01:34.572 2 DEBUG oslo_concurrency.lockutils [req-9bcba9df-e0e1-4e34-9950-e65702ba2259 req-b08f0e07-f875-4b96-9197-fa7dc323a575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:01:34 np0005466030 nova_compute[230518]: 2025-10-02 13:01:34.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.622 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[85b7d34e-d068-4c87-8210-b6889e2daca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.623 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c43df3b-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.624 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.624 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c43df3b-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:34 np0005466030 nova_compute[230518]: 2025-10-02 13:01:34.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:34 np0005466030 kernel: tap1c43df3b-80: entered promiscuous mode
Oct  2 09:01:34 np0005466030 NetworkManager[44960]: <info>  [1759410094.6282] manager: (tap1c43df3b-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/335)
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.629 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1c43df3b-80, col_values=(('external_ids', {'iface-id': '07f078ea-d50b-4ace-9ee5-5241ea5b3915'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:01:34 np0005466030 ovn_controller[129257]: 2025-10-02T13:01:34Z|00713|binding|INFO|Releasing lport 07f078ea-d50b-4ace-9ee5-5241ea5b3915 from this chassis (sb_readonly=0)
Oct  2 09:01:34 np0005466030 nova_compute[230518]: 2025-10-02 13:01:34.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:34 np0005466030 nova_compute[230518]: 2025-10-02 13:01:34.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.650 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1c43df3b-870e-48e7-aa17-655e3c34fe90.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1c43df3b-870e-48e7-aa17-655e3c34fe90.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.652 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d819f3b4-ac2f-4516-a531-18426602ca4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.654 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-1c43df3b-870e-48e7-aa17-655e3c34fe90
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/1c43df3b-870e-48e7-aa17-655e3c34fe90.pid.haproxy
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 1c43df3b-870e-48e7-aa17-655e3c34fe90
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:01:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.656 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90', 'env', 'PROCESS_TAG=haproxy-1c43df3b-870e-48e7-aa17-655e3c34fe90', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1c43df3b-870e-48e7-aa17-655e3c34fe90.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:01:34 np0005466030 nova_compute[230518]: 2025-10-02 13:01:34.858 2 DEBUG nova.compute.manager [req-905b1ae5-32b7-4458-becb-0e17644a960b req-5410e781-daba-4d69-aa02-3b35df72ebca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received event network-vif-plugged-11701be6-55ae-458a-a3a0-55a0c467ef46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:01:34 np0005466030 nova_compute[230518]: 2025-10-02 13:01:34.859 2 DEBUG oslo_concurrency.lockutils [req-905b1ae5-32b7-4458-becb-0e17644a960b req-5410e781-daba-4d69-aa02-3b35df72ebca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:34 np0005466030 nova_compute[230518]: 2025-10-02 13:01:34.859 2 DEBUG oslo_concurrency.lockutils [req-905b1ae5-32b7-4458-becb-0e17644a960b req-5410e781-daba-4d69-aa02-3b35df72ebca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:34 np0005466030 nova_compute[230518]: 2025-10-02 13:01:34.860 2 DEBUG oslo_concurrency.lockutils [req-905b1ae5-32b7-4458-becb-0e17644a960b req-5410e781-daba-4d69-aa02-3b35df72ebca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:34 np0005466030 nova_compute[230518]: 2025-10-02 13:01:34.860 2 DEBUG nova.compute.manager [req-905b1ae5-32b7-4458-becb-0e17644a960b req-5410e781-daba-4d69-aa02-3b35df72ebca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Processing event network-vif-plugged-11701be6-55ae-458a-a3a0-55a0c467ef46 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:01:35 np0005466030 podman[296891]: 2025-10-02 13:01:35.05344999 +0000 UTC m=+0.067756576 container create fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:01:35 np0005466030 systemd[1]: Started libpod-conmon-fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398.scope.
Oct  2 09:01:35 np0005466030 podman[296891]: 2025-10-02 13:01:35.016568524 +0000 UTC m=+0.030875140 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:01:35 np0005466030 systemd[1]: Started libcrun container.
Oct  2 09:01:35 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad5abcd8d66535572ebd9243f79c67a937f00fbde504f937bccc9e255f563f50/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:01:35 np0005466030 podman[296891]: 2025-10-02 13:01:35.144932452 +0000 UTC m=+0.159239058 container init fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 09:01:35 np0005466030 podman[296891]: 2025-10-02 13:01:35.150488984 +0000 UTC m=+0.164795580 container start fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 09:01:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:35.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:35 np0005466030 neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90[296931]: [NOTICE]   (296935) : New worker (296937) forked
Oct  2 09:01:35 np0005466030 neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90[296931]: [NOTICE]   (296935) : Loading success.
Oct  2 09:01:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:35.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.548 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410095.548259, 0386b301-1cd5-430d-8fc1-691b6bc3ad47 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.549 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] VM Started (Lifecycle Event)#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.550 2 DEBUG nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.554 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.562 2 INFO nova.virt.libvirt.driver [-] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Instance spawned successfully.#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.562 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.623 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.623 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.623 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.624 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.624 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.624 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.628 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.630 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.695 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.696 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410095.5505567, 0386b301-1cd5-430d-8fc1-691b6bc3ad47 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.696 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.752 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.755 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410095.552723, 0386b301-1cd5-430d-8fc1-691b6bc3ad47 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.756 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.764 2 INFO nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Took 12.41 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.764 2 DEBUG nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.802 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.807 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.866 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.883 2 INFO nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Took 14.01 seconds to build instance.#033[00m
Oct  2 09:01:35 np0005466030 nova_compute[230518]: 2025-10-02 13:01:35.921 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:37.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:37 np0005466030 nova_compute[230518]: 2025-10-02 13:01:37.238 2 DEBUG nova.compute.manager [req-5029fbff-5468-4527-9500-0d830eb9703f req-ca51d530-5270-48e1-9cef-53d20020a1a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received event network-vif-plugged-11701be6-55ae-458a-a3a0-55a0c467ef46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:01:37 np0005466030 nova_compute[230518]: 2025-10-02 13:01:37.239 2 DEBUG oslo_concurrency.lockutils [req-5029fbff-5468-4527-9500-0d830eb9703f req-ca51d530-5270-48e1-9cef-53d20020a1a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:01:37 np0005466030 nova_compute[230518]: 2025-10-02 13:01:37.239 2 DEBUG oslo_concurrency.lockutils [req-5029fbff-5468-4527-9500-0d830eb9703f req-ca51d530-5270-48e1-9cef-53d20020a1a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:01:37 np0005466030 nova_compute[230518]: 2025-10-02 13:01:37.239 2 DEBUG oslo_concurrency.lockutils [req-5029fbff-5468-4527-9500-0d830eb9703f req-ca51d530-5270-48e1-9cef-53d20020a1a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:01:37 np0005466030 nova_compute[230518]: 2025-10-02 13:01:37.239 2 DEBUG nova.compute.manager [req-5029fbff-5468-4527-9500-0d830eb9703f req-ca51d530-5270-48e1-9cef-53d20020a1a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] No waiting events found dispatching network-vif-plugged-11701be6-55ae-458a-a3a0-55a0c467ef46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:01:37 np0005466030 nova_compute[230518]: 2025-10-02 13:01:37.240 2 WARNING nova.compute.manager [req-5029fbff-5468-4527-9500-0d830eb9703f req-ca51d530-5270-48e1-9cef-53d20020a1a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received unexpected event network-vif-plugged-11701be6-55ae-458a-a3a0-55a0c467ef46 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:01:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:37.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:37 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:01:37 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:01:38 np0005466030 nova_compute[230518]: 2025-10-02 13:01:38.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:01:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:39.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:01:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:39.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:39 np0005466030 nova_compute[230518]: 2025-10-02 13:01:39.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:39 np0005466030 podman[296998]: 2025-10-02 13:01:39.807668668 +0000 UTC m=+0.055690121 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  2 09:01:39 np0005466030 podman[296997]: 2025-10-02 13:01:39.842009475 +0000 UTC m=+0.081913066 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:01:41 np0005466030 nova_compute[230518]: 2025-10-02 13:01:41.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:41 np0005466030 NetworkManager[44960]: <info>  [1759410101.0824] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Oct  2 09:01:41 np0005466030 NetworkManager[44960]: <info>  [1759410101.0834] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Oct  2 09:01:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:41.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:41 np0005466030 nova_compute[230518]: 2025-10-02 13:01:41.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:41 np0005466030 ovn_controller[129257]: 2025-10-02T13:01:41Z|00714|binding|INFO|Releasing lport 07f078ea-d50b-4ace-9ee5-5241ea5b3915 from this chassis (sb_readonly=0)
Oct  2 09:01:41 np0005466030 ovn_controller[129257]: 2025-10-02T13:01:41Z|00715|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct  2 09:01:41 np0005466030 nova_compute[230518]: 2025-10-02 13:01:41.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:41.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:41 np0005466030 nova_compute[230518]: 2025-10-02 13:01:41.713 2 DEBUG nova.compute.manager [req-04ba0d69-bb5c-4332-a5db-831f7f351547 req-8921b300-5170-46c1-bc84-ff4c4f377e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received event network-changed-11701be6-55ae-458a-a3a0-55a0c467ef46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:01:41 np0005466030 nova_compute[230518]: 2025-10-02 13:01:41.713 2 DEBUG nova.compute.manager [req-04ba0d69-bb5c-4332-a5db-831f7f351547 req-8921b300-5170-46c1-bc84-ff4c4f377e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Refreshing instance network info cache due to event network-changed-11701be6-55ae-458a-a3a0-55a0c467ef46. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:01:41 np0005466030 nova_compute[230518]: 2025-10-02 13:01:41.714 2 DEBUG oslo_concurrency.lockutils [req-04ba0d69-bb5c-4332-a5db-831f7f351547 req-8921b300-5170-46c1-bc84-ff4c4f377e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:01:41 np0005466030 nova_compute[230518]: 2025-10-02 13:01:41.714 2 DEBUG oslo_concurrency.lockutils [req-04ba0d69-bb5c-4332-a5db-831f7f351547 req-8921b300-5170-46c1-bc84-ff4c4f377e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:01:41 np0005466030 nova_compute[230518]: 2025-10-02 13:01:41.715 2 DEBUG nova.network.neutron [req-04ba0d69-bb5c-4332-a5db-831f7f351547 req-8921b300-5170-46c1-bc84-ff4c4f377e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Refreshing network info cache for port 11701be6-55ae-458a-a3a0-55a0c467ef46 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:01:41 np0005466030 nova_compute[230518]: 2025-10-02 13:01:41.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:01:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:43.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:01:43 np0005466030 nova_compute[230518]: 2025-10-02 13:01:43.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:43.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:44 np0005466030 nova_compute[230518]: 2025-10-02 13:01:44.123 2 DEBUG nova.network.neutron [req-04ba0d69-bb5c-4332-a5db-831f7f351547 req-8921b300-5170-46c1-bc84-ff4c4f377e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Updated VIF entry in instance network info cache for port 11701be6-55ae-458a-a3a0-55a0c467ef46. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:01:44 np0005466030 nova_compute[230518]: 2025-10-02 13:01:44.123 2 DEBUG nova.network.neutron [req-04ba0d69-bb5c-4332-a5db-831f7f351547 req-8921b300-5170-46c1-bc84-ff4c4f377e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Updating instance_info_cache with network_info: [{"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:01:44 np0005466030 nova_compute[230518]: 2025-10-02 13:01:44.159 2 DEBUG oslo_concurrency.lockutils [req-04ba0d69-bb5c-4332-a5db-831f7f351547 req-8921b300-5170-46c1-bc84-ff4c4f377e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:01:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:44 np0005466030 nova_compute[230518]: 2025-10-02 13:01:44.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:45 np0005466030 ceph-mgr[81282]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct  2 09:01:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:45.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:45.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:47.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:47.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:48 np0005466030 nova_compute[230518]: 2025-10-02 13:01:48.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:01:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:49.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:01:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:01:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:49.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:01:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:49 np0005466030 nova_compute[230518]: 2025-10-02 13:01:49.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:50 np0005466030 podman[297043]: 2025-10-02 13:01:50.79957771 +0000 UTC m=+0.057685194 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  2 09:01:50 np0005466030 podman[297044]: 2025-10-02 13:01:50.809666343 +0000 UTC m=+0.063489674 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:01:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:51.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:01:51 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2618018890' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:01:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:01:51 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2618018890' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:01:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:51.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:51 np0005466030 ovn_controller[129257]: 2025-10-02T13:01:51Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:55:f2:3d 10.100.0.13
Oct  2 09:01:51 np0005466030 ovn_controller[129257]: 2025-10-02T13:01:51Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:55:f2:3d 10.100.0.13
Oct  2 09:01:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:01:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:53.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:01:53 np0005466030 nova_compute[230518]: 2025-10-02 13:01:53.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:01:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:53.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:01:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:54 np0005466030 nova_compute[230518]: 2025-10-02 13:01:54.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:55.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:55.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:57.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:01:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:57.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:01:58 np0005466030 nova_compute[230518]: 2025-10-02 13:01:58.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:01:58 np0005466030 nova_compute[230518]: 2025-10-02 13:01:58.521 2 INFO nova.compute.manager [None req-5fbcd088-f2ff-4481-8577-112934977633 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Get console output#033[00m
Oct  2 09:01:58 np0005466030 nova_compute[230518]: 2025-10-02 13:01:58.526 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:01:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:01:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:59.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:01:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:01:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:01:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:59.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:01:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:01:59 np0005466030 nova_compute[230518]: 2025-10-02 13:01:59.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:00 np0005466030 ovn_controller[129257]: 2025-10-02T13:02:00Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:55:f2:3d 10.100.0.13
Oct  2 09:02:00 np0005466030 ovn_controller[129257]: 2025-10-02T13:02:00Z|00716|binding|INFO|Releasing lport 07f078ea-d50b-4ace-9ee5-5241ea5b3915 from this chassis (sb_readonly=0)
Oct  2 09:02:00 np0005466030 ovn_controller[129257]: 2025-10-02T13:02:00Z|00717|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct  2 09:02:00 np0005466030 nova_compute[230518]: 2025-10-02 13:02:00.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:02:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:01.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:02:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:01.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:02:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:03.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:02:03 np0005466030 nova_compute[230518]: 2025-10-02 13:02:03.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:03.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:03.945 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:02:03 np0005466030 nova_compute[230518]: 2025-10-02 13:02:03.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:03 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:03.947 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:02:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:04 np0005466030 nova_compute[230518]: 2025-10-02 13:02:04.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:04 np0005466030 nova_compute[230518]: 2025-10-02 13:02:04.706 2 DEBUG nova.compute.manager [req-a35d701a-2c4f-4862-93d2-3def98d0e85d req-3ffc2f8d-c5be-4a32-831d-9e4cab7ed1a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received event network-changed-11701be6-55ae-458a-a3a0-55a0c467ef46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:04 np0005466030 nova_compute[230518]: 2025-10-02 13:02:04.706 2 DEBUG nova.compute.manager [req-a35d701a-2c4f-4862-93d2-3def98d0e85d req-3ffc2f8d-c5be-4a32-831d-9e4cab7ed1a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Refreshing instance network info cache due to event network-changed-11701be6-55ae-458a-a3a0-55a0c467ef46. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:02:04 np0005466030 nova_compute[230518]: 2025-10-02 13:02:04.706 2 DEBUG oslo_concurrency.lockutils [req-a35d701a-2c4f-4862-93d2-3def98d0e85d req-3ffc2f8d-c5be-4a32-831d-9e4cab7ed1a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:02:04 np0005466030 nova_compute[230518]: 2025-10-02 13:02:04.706 2 DEBUG oslo_concurrency.lockutils [req-a35d701a-2c4f-4862-93d2-3def98d0e85d req-3ffc2f8d-c5be-4a32-831d-9e4cab7ed1a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:02:04 np0005466030 nova_compute[230518]: 2025-10-02 13:02:04.707 2 DEBUG nova.network.neutron [req-a35d701a-2c4f-4862-93d2-3def98d0e85d req-3ffc2f8d-c5be-4a32-831d-9e4cab7ed1a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Refreshing network info cache for port 11701be6-55ae-458a-a3a0-55a0c467ef46 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:02:04 np0005466030 nova_compute[230518]: 2025-10-02 13:02:04.811 2 DEBUG oslo_concurrency.lockutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:04 np0005466030 nova_compute[230518]: 2025-10-02 13:02:04.812 2 DEBUG oslo_concurrency.lockutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:04 np0005466030 nova_compute[230518]: 2025-10-02 13:02:04.812 2 DEBUG oslo_concurrency.lockutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:04 np0005466030 nova_compute[230518]: 2025-10-02 13:02:04.812 2 DEBUG oslo_concurrency.lockutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:04 np0005466030 nova_compute[230518]: 2025-10-02 13:02:04.813 2 DEBUG oslo_concurrency.lockutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:04 np0005466030 nova_compute[230518]: 2025-10-02 13:02:04.814 2 INFO nova.compute.manager [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Terminating instance#033[00m
Oct  2 09:02:04 np0005466030 nova_compute[230518]: 2025-10-02 13:02:04.814 2 DEBUG nova.compute.manager [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:02:04 np0005466030 kernel: tap11701be6-55 (unregistering): left promiscuous mode
Oct  2 09:02:04 np0005466030 NetworkManager[44960]: <info>  [1759410124.9532] device (tap11701be6-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:02:04 np0005466030 nova_compute[230518]: 2025-10-02 13:02:04.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:04 np0005466030 ovn_controller[129257]: 2025-10-02T13:02:04Z|00718|binding|INFO|Releasing lport 11701be6-55ae-458a-a3a0-55a0c467ef46 from this chassis (sb_readonly=0)
Oct  2 09:02:04 np0005466030 ovn_controller[129257]: 2025-10-02T13:02:04Z|00719|binding|INFO|Setting lport 11701be6-55ae-458a-a3a0-55a0c467ef46 down in Southbound
Oct  2 09:02:04 np0005466030 ovn_controller[129257]: 2025-10-02T13:02:04Z|00720|binding|INFO|Removing iface tap11701be6-55 ovn-installed in OVS
Oct  2 09:02:04 np0005466030 nova_compute[230518]: 2025-10-02 13:02:04.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:04.978 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:f2:3d 10.100.0.13'], port_security=['fa:16:3e:55:f2:3d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0386b301-1cd5-430d-8fc1-691b6bc3ad47', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c43df3b-870e-48e7-aa17-655e3c34fe90', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'deb73c46-dc90-4a9e-bf71-ad13a1f027f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f19ea678-25e4-4fc8-8150-d09743da6d71, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=11701be6-55ae-458a-a3a0-55a0c467ef46) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:02:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:04.980 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 11701be6-55ae-458a-a3a0-55a0c467ef46 in datapath 1c43df3b-870e-48e7-aa17-655e3c34fe90 unbound from our chassis#033[00m
Oct  2 09:02:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:04.982 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1c43df3b-870e-48e7-aa17-655e3c34fe90, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:02:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:04.983 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd74082-6c9a-43cb-a56f-557a690067c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:04.984 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90 namespace which is not needed anymore#033[00m
Oct  2 09:02:04 np0005466030 nova_compute[230518]: 2025-10-02 13:02:04.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:05 np0005466030 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000ab.scope: Deactivated successfully.
Oct  2 09:02:05 np0005466030 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000ab.scope: Consumed 14.390s CPU time.
Oct  2 09:02:05 np0005466030 systemd-machined[188247]: Machine qemu-83-instance-000000ab terminated.
Oct  2 09:02:05 np0005466030 neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90[296931]: [NOTICE]   (296935) : haproxy version is 2.8.14-c23fe91
Oct  2 09:02:05 np0005466030 neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90[296931]: [NOTICE]   (296935) : path to executable is /usr/sbin/haproxy
Oct  2 09:02:05 np0005466030 neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90[296931]: [WARNING]  (296935) : Exiting Master process...
Oct  2 09:02:05 np0005466030 neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90[296931]: [WARNING]  (296935) : Exiting Master process...
Oct  2 09:02:05 np0005466030 neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90[296931]: [ALERT]    (296935) : Current worker (296937) exited with code 143 (Terminated)
Oct  2 09:02:05 np0005466030 neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90[296931]: [WARNING]  (296935) : All workers exited. Exiting... (0)
Oct  2 09:02:05 np0005466030 systemd[1]: libpod-fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398.scope: Deactivated successfully.
Oct  2 09:02:05 np0005466030 podman[297108]: 2025-10-02 13:02:05.117397094 +0000 UTC m=+0.047802446 container died fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 09:02:05 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398-userdata-shm.mount: Deactivated successfully.
Oct  2 09:02:05 np0005466030 systemd[1]: var-lib-containers-storage-overlay-ad5abcd8d66535572ebd9243f79c67a937f00fbde504f937bccc9e255f563f50-merged.mount: Deactivated successfully.
Oct  2 09:02:05 np0005466030 podman[297108]: 2025-10-02 13:02:05.163201666 +0000 UTC m=+0.093606988 container cleanup fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:02:05 np0005466030 systemd[1]: libpod-conmon-fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398.scope: Deactivated successfully.
Oct  2 09:02:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:02:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:05.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:02:05 np0005466030 podman[297139]: 2025-10-02 13:02:05.222817599 +0000 UTC m=+0.038908230 container remove fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:02:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:05.228 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c1cdc68c-091b-41da-b1a5-2afb81320976]: (4, ('Thu Oct  2 01:02:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90 (fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398)\nfac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398\nThu Oct  2 01:02:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90 (fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398)\nfac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:05.231 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3b932d29-596a-424d-b57c-8cc33a6c1c7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:05.231 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c43df3b-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:05 np0005466030 kernel: tap1c43df3b-80: left promiscuous mode
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:05.254 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3cd06113-eb8d-45bb-a76d-91e86ff04fe9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.254 2 INFO nova.virt.libvirt.driver [-] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Instance destroyed successfully.#033[00m
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.254 2 DEBUG nova.objects.instance [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid 0386b301-1cd5-430d-8fc1-691b6bc3ad47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.272 2 DEBUG nova.virt.libvirt.vif [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:01:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-597885113',display_name='tempest-TestNetworkBasicOps-server-597885113',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-597885113',id=171,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG3FuXpBgVbHgGBq4egCNtz2a9jD/YtFdw9deNrsDMTopNMA3CO5MDdqo+hovI4wpSsKj6W1YZIokPp0dIAbUjy55TePr3Cxog4gFZ9e9nz0xlxf36KAvzLNH0NRnqtk4g==',key_name='tempest-TestNetworkBasicOps-216720718',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:01:35Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-c0lf99pp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:01:35Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=0386b301-1cd5-430d-8fc1-691b6bc3ad47,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.272 2 DEBUG nova.network.os_vif_util [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.273 2 DEBUG nova.network.os_vif_util [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:55:f2:3d,bridge_name='br-int',has_traffic_filtering=True,id=11701be6-55ae-458a-a3a0-55a0c467ef46,network=Network(1c43df3b-870e-48e7-aa17-655e3c34fe90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11701be6-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.273 2 DEBUG os_vif [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:f2:3d,bridge_name='br-int',has_traffic_filtering=True,id=11701be6-55ae-458a-a3a0-55a0c467ef46,network=Network(1c43df3b-870e-48e7-aa17-655e3c34fe90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11701be6-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.275 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11701be6-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.280 2 INFO os_vif [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:f2:3d,bridge_name='br-int',has_traffic_filtering=True,id=11701be6-55ae-458a-a3a0-55a0c467ef46,network=Network(1c43df3b-870e-48e7-aa17-655e3c34fe90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11701be6-55')#033[00m
Oct  2 09:02:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:05.288 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0a5ded-4e36-42b4-81fa-84763206b499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:05.290 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[da7d028d-c3ac-463b-87a2-43c96d6ce84b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:05.307 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1c183813-f275-478b-b28b-3cfa3ee7e659]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795797, 'reachable_time': 38195, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297179, 'error': None, 'target': 'ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:05 np0005466030 systemd[1]: run-netns-ovnmeta\x2d1c43df3b\x2d870e\x2d48e7\x2daa17\x2d655e3c34fe90.mount: Deactivated successfully.
Oct  2 09:02:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:05.312 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:02:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:05.312 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[3edd0b1d-e4e9-4b86-9956-74d5fc65d0de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:05.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:02:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2196494992' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:02:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:02:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2196494992' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.517 2 DEBUG nova.compute.manager [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received event network-vif-unplugged-11701be6-55ae-458a-a3a0-55a0c467ef46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.518 2 DEBUG oslo_concurrency.lockutils [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.519 2 DEBUG oslo_concurrency.lockutils [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.519 2 DEBUG oslo_concurrency.lockutils [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.520 2 DEBUG nova.compute.manager [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] No waiting events found dispatching network-vif-unplugged-11701be6-55ae-458a-a3a0-55a0c467ef46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:02:05 np0005466030 nova_compute[230518]: 2025-10-02 13:02:05.520 2 DEBUG nova.compute.manager [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received event network-vif-unplugged-11701be6-55ae-458a-a3a0-55a0c467ef46 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:02:06 np0005466030 nova_compute[230518]: 2025-10-02 13:02:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:06 np0005466030 nova_compute[230518]: 2025-10-02 13:02:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:02:06 np0005466030 nova_compute[230518]: 2025-10-02 13:02:06.074 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:02:06 np0005466030 nova_compute[230518]: 2025-10-02 13:02:06.888 2 INFO nova.virt.libvirt.driver [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Deleting instance files /var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47_del#033[00m
Oct  2 09:02:06 np0005466030 nova_compute[230518]: 2025-10-02 13:02:06.889 2 INFO nova.virt.libvirt.driver [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Deletion of /var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47_del complete#033[00m
Oct  2 09:02:07 np0005466030 nova_compute[230518]: 2025-10-02 13:02:07.091 2 INFO nova.compute.manager [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Took 2.28 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:02:07 np0005466030 nova_compute[230518]: 2025-10-02 13:02:07.092 2 DEBUG oslo.service.loopingcall [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:02:07 np0005466030 nova_compute[230518]: 2025-10-02 13:02:07.092 2 DEBUG nova.compute.manager [-] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:02:07 np0005466030 nova_compute[230518]: 2025-10-02 13:02:07.092 2 DEBUG nova.network.neutron [-] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:02:07 np0005466030 nova_compute[230518]: 2025-10-02 13:02:07.188 2 DEBUG nova.network.neutron [req-a35d701a-2c4f-4862-93d2-3def98d0e85d req-3ffc2f8d-c5be-4a32-831d-9e4cab7ed1a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Updated VIF entry in instance network info cache for port 11701be6-55ae-458a-a3a0-55a0c467ef46. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:02:07 np0005466030 nova_compute[230518]: 2025-10-02 13:02:07.188 2 DEBUG nova.network.neutron [req-a35d701a-2c4f-4862-93d2-3def98d0e85d req-3ffc2f8d-c5be-4a32-831d-9e4cab7ed1a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Updating instance_info_cache with network_info: [{"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:02:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:07.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:07 np0005466030 nova_compute[230518]: 2025-10-02 13:02:07.255 2 DEBUG oslo_concurrency.lockutils [req-a35d701a-2c4f-4862-93d2-3def98d0e85d req-3ffc2f8d-c5be-4a32-831d-9e4cab7ed1a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:02:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:02:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:07.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:02:08 np0005466030 nova_compute[230518]: 2025-10-02 13:02:08.206 2 DEBUG nova.compute.manager [req-97b12b65-c1d4-4fd5-8210-6124adb21f96 req-7c4f6111-de5b-4b06-b663-ef8bde053932 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received event network-vif-plugged-11701be6-55ae-458a-a3a0-55a0c467ef46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:08 np0005466030 nova_compute[230518]: 2025-10-02 13:02:08.207 2 DEBUG oslo_concurrency.lockutils [req-97b12b65-c1d4-4fd5-8210-6124adb21f96 req-7c4f6111-de5b-4b06-b663-ef8bde053932 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:08 np0005466030 nova_compute[230518]: 2025-10-02 13:02:08.207 2 DEBUG oslo_concurrency.lockutils [req-97b12b65-c1d4-4fd5-8210-6124adb21f96 req-7c4f6111-de5b-4b06-b663-ef8bde053932 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:08 np0005466030 nova_compute[230518]: 2025-10-02 13:02:08.208 2 DEBUG oslo_concurrency.lockutils [req-97b12b65-c1d4-4fd5-8210-6124adb21f96 req-7c4f6111-de5b-4b06-b663-ef8bde053932 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:08 np0005466030 nova_compute[230518]: 2025-10-02 13:02:08.208 2 DEBUG nova.compute.manager [req-97b12b65-c1d4-4fd5-8210-6124adb21f96 req-7c4f6111-de5b-4b06-b663-ef8bde053932 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] No waiting events found dispatching network-vif-plugged-11701be6-55ae-458a-a3a0-55a0c467ef46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:02:08 np0005466030 nova_compute[230518]: 2025-10-02 13:02:08.209 2 WARNING nova.compute.manager [req-97b12b65-c1d4-4fd5-8210-6124adb21f96 req-7c4f6111-de5b-4b06-b663-ef8bde053932 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received unexpected event network-vif-plugged-11701be6-55ae-458a-a3a0-55a0c467ef46 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:02:08 np0005466030 nova_compute[230518]: 2025-10-02 13:02:08.393 2 DEBUG nova.network.neutron [-] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:02:08 np0005466030 nova_compute[230518]: 2025-10-02 13:02:08.460 2 INFO nova.compute.manager [-] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Took 1.37 seconds to deallocate network for instance.#033[00m
Oct  2 09:02:08 np0005466030 nova_compute[230518]: 2025-10-02 13:02:08.520 2 DEBUG nova.compute.manager [req-986a77e2-bfd5-4843-be87-665f42fb864f req-dc42d81b-5684-40a5-9772-dc2b8af22bb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received event network-vif-deleted-11701be6-55ae-458a-a3a0-55a0c467ef46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:02:08 np0005466030 nova_compute[230518]: 2025-10-02 13:02:08.551 2 DEBUG oslo_concurrency.lockutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:08 np0005466030 nova_compute[230518]: 2025-10-02 13:02:08.551 2 DEBUG oslo_concurrency.lockutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:08 np0005466030 nova_compute[230518]: 2025-10-02 13:02:08.641 2 DEBUG oslo_concurrency.processutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:02:09 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1561182828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:02:09 np0005466030 nova_compute[230518]: 2025-10-02 13:02:09.061 2 DEBUG oslo_concurrency.processutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:09 np0005466030 nova_compute[230518]: 2025-10-02 13:02:09.067 2 DEBUG nova.compute.provider_tree [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:02:09 np0005466030 nova_compute[230518]: 2025-10-02 13:02:09.087 2 DEBUG nova.scheduler.client.report [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:02:09 np0005466030 nova_compute[230518]: 2025-10-02 13:02:09.120 2 DEBUG oslo_concurrency.lockutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:09 np0005466030 nova_compute[230518]: 2025-10-02 13:02:09.148 2 INFO nova.scheduler.client.report [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance 0386b301-1cd5-430d-8fc1-691b6bc3ad47#033[00m
Oct  2 09:02:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:09.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:09 np0005466030 nova_compute[230518]: 2025-10-02 13:02:09.243 2 DEBUG oslo_concurrency.lockutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:02:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:09.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:02:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:09 np0005466030 nova_compute[230518]: 2025-10-02 13:02:09.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:10 np0005466030 nova_compute[230518]: 2025-10-02 13:02:10.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:10 np0005466030 podman[297207]: 2025-10-02 13:02:10.834676217 +0000 UTC m=+0.076237510 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:02:10 np0005466030 podman[297206]: 2025-10-02 13:02:10.868347653 +0000 UTC m=+0.115089626 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:02:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:11.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:11.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:11 np0005466030 nova_compute[230518]: 2025-10-02 13:02:11.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:11.949 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:12 np0005466030 ovn_controller[129257]: 2025-10-02T13:02:12Z|00721|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct  2 09:02:12 np0005466030 nova_compute[230518]: 2025-10-02 13:02:12.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:12 np0005466030 ovn_controller[129257]: 2025-10-02T13:02:12Z|00722|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct  2 09:02:12 np0005466030 nova_compute[230518]: 2025-10-02 13:02:12.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:13.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:13.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:14 np0005466030 nova_compute[230518]: 2025-10-02 13:02:14.074 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:14 np0005466030 nova_compute[230518]: 2025-10-02 13:02:14.097 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:14 np0005466030 nova_compute[230518]: 2025-10-02 13:02:14.097 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:14 np0005466030 nova_compute[230518]: 2025-10-02 13:02:14.098 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:14 np0005466030 nova_compute[230518]: 2025-10-02 13:02:14.098 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:02:14 np0005466030 nova_compute[230518]: 2025-10-02 13:02:14.098 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:02:14 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/513885224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:02:14 np0005466030 nova_compute[230518]: 2025-10-02 13:02:14.586 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:14 np0005466030 nova_compute[230518]: 2025-10-02 13:02:14.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:14 np0005466030 nova_compute[230518]: 2025-10-02 13:02:14.679 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:02:14 np0005466030 nova_compute[230518]: 2025-10-02 13:02:14.680 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:02:14 np0005466030 nova_compute[230518]: 2025-10-02 13:02:14.895 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:02:14 np0005466030 nova_compute[230518]: 2025-10-02 13:02:14.897 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4105MB free_disk=20.861236572265625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:02:14 np0005466030 nova_compute[230518]: 2025-10-02 13:02:14.897 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:14 np0005466030 nova_compute[230518]: 2025-10-02 13:02:14.897 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:15 np0005466030 nova_compute[230518]: 2025-10-02 13:02:15.067 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7c31bb0f-22b5-42a4-9b38-8ad3daac689f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:02:15 np0005466030 nova_compute[230518]: 2025-10-02 13:02:15.068 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:02:15 np0005466030 nova_compute[230518]: 2025-10-02 13:02:15.069 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:02:15 np0005466030 nova_compute[230518]: 2025-10-02 13:02:15.138 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:15.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:15 np0005466030 nova_compute[230518]: 2025-10-02 13:02:15.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:15.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:02:15 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3848085517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:02:15 np0005466030 nova_compute[230518]: 2025-10-02 13:02:15.614 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:15 np0005466030 nova_compute[230518]: 2025-10-02 13:02:15.621 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:02:15 np0005466030 nova_compute[230518]: 2025-10-02 13:02:15.646 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:02:15 np0005466030 nova_compute[230518]: 2025-10-02 13:02:15.693 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:02:15 np0005466030 nova_compute[230518]: 2025-10-02 13:02:15.694 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:17.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:02:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:17.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:02:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:19.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:02:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:19.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:02:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:19 np0005466030 nova_compute[230518]: 2025-10-02 13:02:19.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:19 np0005466030 nova_compute[230518]: 2025-10-02 13:02:19.673 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:19 np0005466030 nova_compute[230518]: 2025-10-02 13:02:19.674 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:19 np0005466030 nova_compute[230518]: 2025-10-02 13:02:19.674 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:19 np0005466030 nova_compute[230518]: 2025-10-02 13:02:19.674 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:02:20 np0005466030 nova_compute[230518]: 2025-10-02 13:02:20.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:20 np0005466030 nova_compute[230518]: 2025-10-02 13:02:20.253 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410125.2516162, 0386b301-1cd5-430d-8fc1-691b6bc3ad47 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:02:20 np0005466030 nova_compute[230518]: 2025-10-02 13:02:20.253 2 INFO nova.compute.manager [-] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:02:20 np0005466030 nova_compute[230518]: 2025-10-02 13:02:20.286 2 DEBUG nova.compute.manager [None req-5cd2c949-46d7-4212-ae38-168aa0759253 - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:02:20 np0005466030 nova_compute[230518]: 2025-10-02 13:02:20.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.002999972s ======
Oct  2 09:02:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:21.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002999972s
Oct  2 09:02:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:02:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:21.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:02:21 np0005466030 podman[297295]: 2025-10-02 13:02:21.799634521 +0000 UTC m=+0.053952278 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  2 09:02:21 np0005466030 podman[297296]: 2025-10-02 13:02:21.801973834 +0000 UTC m=+0.048706784 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 09:02:23 np0005466030 nova_compute[230518]: 2025-10-02 13:02:23.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:23 np0005466030 nova_compute[230518]: 2025-10-02 13:02:23.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:02:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:23.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:02:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:23.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:24 np0005466030 nova_compute[230518]: 2025-10-02 13:02:24.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:24 np0005466030 nova_compute[230518]: 2025-10-02 13:02:24.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:02:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:25.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:02:25 np0005466030 nova_compute[230518]: 2025-10-02 13:02:25.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:25.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:25.959 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:25.959 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:25.961 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:02:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:27.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:02:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:27.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:28 np0005466030 nova_compute[230518]: 2025-10-02 13:02:28.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:02:28 np0005466030 nova_compute[230518]: 2025-10-02 13:02:28.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:02:28 np0005466030 nova_compute[230518]: 2025-10-02 13:02:28.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:02:28 np0005466030 nova_compute[230518]: 2025-10-02 13:02:28.680 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:02:28 np0005466030 nova_compute[230518]: 2025-10-02 13:02:28.681 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:02:28 np0005466030 nova_compute[230518]: 2025-10-02 13:02:28.681 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:02:28 np0005466030 nova_compute[230518]: 2025-10-02 13:02:28.681 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7c31bb0f-22b5-42a4-9b38-8ad3daac689f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:02:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:29.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:29.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:29 np0005466030 nova_compute[230518]: 2025-10-02 13:02:29.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:02:30 np0005466030 nova_compute[230518]: 2025-10-02 13:02:30.053 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Updating instance_info_cache with network_info: [{"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:02:30 np0005466030 nova_compute[230518]: 2025-10-02 13:02:30.080 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:02:30 np0005466030 nova_compute[230518]: 2025-10-02 13:02:30.080 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct  2 09:02:30 np0005466030 nova_compute[230518]: 2025-10-02 13:02:30.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:02:31 np0005466030 nova_compute[230518]: 2025-10-02 13:02:31.075 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:02:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:31.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:31.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:33.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:33.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:34 np0005466030 nova_compute[230518]: 2025-10-02 13:02:34.239 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:02:34 np0005466030 nova_compute[230518]: 2025-10-02 13:02:34.240 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:02:34 np0005466030 nova_compute[230518]: 2025-10-02 13:02:34.263 2 DEBUG nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 09:02:34 np0005466030 nova_compute[230518]: 2025-10-02 13:02:34.376 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:02:34 np0005466030 nova_compute[230518]: 2025-10-02 13:02:34.376 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:02:34 np0005466030 nova_compute[230518]: 2025-10-02 13:02:34.388 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 09:02:34 np0005466030 nova_compute[230518]: 2025-10-02 13:02:34.389 2 INFO nova.compute.claims [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Claim successful on node compute-1.ctlplane.example.com
Oct  2 09:02:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:34 np0005466030 nova_compute[230518]: 2025-10-02 13:02:34.548 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:02:34 np0005466030 nova_compute[230518]: 2025-10-02 13:02:34.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:02:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:02:34 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3707887180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.011 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.019 2 DEBUG nova.compute.provider_tree [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.056 2 DEBUG nova.scheduler.client.report [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.092 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.093 2 DEBUG nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.141 2 DEBUG nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.142 2 DEBUG nova.network.neutron [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.169 2 INFO nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.191 2 DEBUG nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 09:02:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:02:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:35.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.278 2 DEBUG nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.280 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.280 2 INFO nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Creating image(s)
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.308 2 DEBUG nova.storage.rbd_utils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.332 2 DEBUG nova.storage.rbd_utils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.362 2 DEBUG nova.storage.rbd_utils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.366 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.433 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.434 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.435 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.436 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:02:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:35.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.465 2 DEBUG nova.storage.rbd_utils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.469 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:02:35 np0005466030 nova_compute[230518]: 2025-10-02 13:02:35.670 2 DEBUG nova.policy [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 09:02:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e351 e351: 3 total, 3 up, 3 in
Oct  2 09:02:36 np0005466030 nova_compute[230518]: 2025-10-02 13:02:36.814 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:02:36 np0005466030 nova_compute[230518]: 2025-10-02 13:02:36.886 2 DEBUG nova.storage.rbd_utils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 09:02:37 np0005466030 nova_compute[230518]: 2025-10-02 13:02:37.137 2 DEBUG nova.network.neutron [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Successfully created port: 075c87dd-2b98-4364-9955-b21fcbcd5b47 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 09:02:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:37.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:37 np0005466030 nova_compute[230518]: 2025-10-02 13:02:37.304 2 DEBUG nova.objects.instance [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid 198c2dd4-f103-4bba-9fc3-9e41f44e465e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:02:37 np0005466030 nova_compute[230518]: 2025-10-02 13:02:37.324 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 09:02:37 np0005466030 nova_compute[230518]: 2025-10-02 13:02:37.324 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Ensure instance console log exists: /var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 09:02:37 np0005466030 nova_compute[230518]: 2025-10-02 13:02:37.325 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:02:37 np0005466030 nova_compute[230518]: 2025-10-02 13:02:37.325 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:02:37 np0005466030 nova_compute[230518]: 2025-10-02 13:02:37.325 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:02:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:02:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:37.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:02:38 np0005466030 podman[297691]: 2025-10-02 13:02:38.480938727 +0000 UTC m=+0.080822983 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 09:02:38 np0005466030 podman[297691]: 2025-10-02 13:02:38.569850889 +0000 UTC m=+0.169735165 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  2 09:02:39 np0005466030 nova_compute[230518]: 2025-10-02 13:02:39.080 2 DEBUG nova.network.neutron [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Successfully updated port: 075c87dd-2b98-4364-9955-b21fcbcd5b47 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 09:02:39 np0005466030 nova_compute[230518]: 2025-10-02 13:02:39.103 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:02:39 np0005466030 nova_compute[230518]: 2025-10-02 13:02:39.103 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:02:39 np0005466030 nova_compute[230518]: 2025-10-02 13:02:39.103 2 DEBUG nova.network.neutron [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 09:02:39 np0005466030 nova_compute[230518]: 2025-10-02 13:02:39.233 2 DEBUG nova.compute.manager [req-09e327da-f792-4e50-85f5-8af50271f8a8 req-86cb0572-c5a1-4568-a0de-8cdabcf52d7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:02:39 np0005466030 nova_compute[230518]: 2025-10-02 13:02:39.234 2 DEBUG nova.compute.manager [req-09e327da-f792-4e50-85f5-8af50271f8a8 req-86cb0572-c5a1-4568-a0de-8cdabcf52d7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing instance network info cache due to event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 09:02:39 np0005466030 nova_compute[230518]: 2025-10-02 13:02:39.234 2 DEBUG oslo_concurrency.lockutils [req-09e327da-f792-4e50-85f5-8af50271f8a8 req-86cb0572-c5a1-4568-a0de-8cdabcf52d7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:02:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:39.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:39.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:39 np0005466030 nova_compute[230518]: 2025-10-02 13:02:39.669 2 DEBUG nova.network.neutron [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:02:39 np0005466030 nova_compute[230518]: 2025-10-02 13:02:39.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:02:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:02:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:02:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:02:40 np0005466030 nova_compute[230518]: 2025-10-02 13:02:40.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:40 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:02:40 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:02:40 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:02:40 np0005466030 nova_compute[230518]: 2025-10-02 13:02:40.955 2 DEBUG nova.network.neutron [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updating instance_info_cache with network_info: [{"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.001 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.001 2 DEBUG nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Instance network_info: |[{"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.002 2 DEBUG oslo_concurrency.lockutils [req-09e327da-f792-4e50-85f5-8af50271f8a8 req-86cb0572-c5a1-4568-a0de-8cdabcf52d7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.002 2 DEBUG nova.network.neutron [req-09e327da-f792-4e50-85f5-8af50271f8a8 req-86cb0572-c5a1-4568-a0de-8cdabcf52d7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.008 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Start _get_guest_xml network_info=[{"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.015 2 WARNING nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.022 2 DEBUG nova.virt.libvirt.host [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.023 2 DEBUG nova.virt.libvirt.host [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.034 2 DEBUG nova.virt.libvirt.host [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.035 2 DEBUG nova.virt.libvirt.host [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.037 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.037 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.038 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.038 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.039 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.039 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.039 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.040 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.040 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.040 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.041 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.041 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.046 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:41.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:41.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:02:41 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2609484403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.513 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.537 2 DEBUG nova.storage.rbd_utils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:41 np0005466030 nova_compute[230518]: 2025-10-02 13:02:41.541 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:41 np0005466030 podman[298008]: 2025-10-02 13:02:41.828573541 +0000 UTC m=+0.062967337 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent)
Oct  2 09:02:41 np0005466030 podman[298007]: 2025-10-02 13:02:41.842169264 +0000 UTC m=+0.094318631 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 09:02:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:02:42 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1130314900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.037 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.040 2 DEBUG nova.virt.libvirt.vif [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:02:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-511078647',display_name='tempest-TestNetworkBasicOps-server-511078647',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-511078647',id=176,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJx6+OnWya4TKWI602K7FJTy0vvTR15qcn2a79LYqYLs4i+5cL4NrJf7MAy0xx98Y2Lu4xFova8uQh2TX9Sp+hRCxqeORgezwsMfN18SQyhFQii2RX1Yt01r5EbD581/cA==',key_name='tempest-TestNetworkBasicOps-68926903',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-a1p0qs88',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:02:35Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=198c2dd4-f103-4bba-9fc3-9e41f44e465e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.041 2 DEBUG nova.network.os_vif_util [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.043 2 DEBUG nova.network.os_vif_util [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:79:9d,bridge_name='br-int',has_traffic_filtering=True,id=075c87dd-2b98-4364-9955-b21fcbcd5b47,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap075c87dd-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.046 2 DEBUG nova.objects.instance [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid 198c2dd4-f103-4bba-9fc3-9e41f44e465e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.065 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:02:42 np0005466030 nova_compute[230518]:  <uuid>198c2dd4-f103-4bba-9fc3-9e41f44e465e</uuid>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:  <name>instance-000000b0</name>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <nova:name>tempest-TestNetworkBasicOps-server-511078647</nova:name>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:02:41</nova:creationTime>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:02:42 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:        <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:        <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:        <nova:port uuid="075c87dd-2b98-4364-9955-b21fcbcd5b47">
Oct  2 09:02:42 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <entry name="serial">198c2dd4-f103-4bba-9fc3-9e41f44e465e</entry>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <entry name="uuid">198c2dd4-f103-4bba-9fc3-9e41f44e465e</entry>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk">
Oct  2 09:02:42 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:02:42 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk.config">
Oct  2 09:02:42 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:02:42 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:d9:79:9d"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <target dev="tap075c87dd-2b"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e/console.log" append="off"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:02:42 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:02:42 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:02:42 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:02:42 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.068 2 DEBUG nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Preparing to wait for external event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.069 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.070 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.070 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.072 2 DEBUG nova.virt.libvirt.vif [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:02:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-511078647',display_name='tempest-TestNetworkBasicOps-server-511078647',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-511078647',id=176,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJx6+OnWya4TKWI602K7FJTy0vvTR15qcn2a79LYqYLs4i+5cL4NrJf7MAy0xx98Y2Lu4xFova8uQh2TX9Sp+hRCxqeORgezwsMfN18SQyhFQii2RX1Yt01r5EbD581/cA==',key_name='tempest-TestNetworkBasicOps-68926903',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-a1p0qs88',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:02:35Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=198c2dd4-f103-4bba-9fc3-9e41f44e465e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.072 2 DEBUG nova.network.os_vif_util [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.073 2 DEBUG nova.network.os_vif_util [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:79:9d,bridge_name='br-int',has_traffic_filtering=True,id=075c87dd-2b98-4364-9955-b21fcbcd5b47,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap075c87dd-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.074 2 DEBUG os_vif [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:79:9d,bridge_name='br-int',has_traffic_filtering=True,id=075c87dd-2b98-4364-9955-b21fcbcd5b47,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap075c87dd-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.076 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.077 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.082 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap075c87dd-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.083 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap075c87dd-2b, col_values=(('external_ids', {'iface-id': '075c87dd-2b98-4364-9955-b21fcbcd5b47', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:79:9d', 'vm-uuid': '198c2dd4-f103-4bba-9fc3-9e41f44e465e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:42 np0005466030 NetworkManager[44960]: <info>  [1759410162.0866] manager: (tap075c87dd-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.111 2 INFO os_vif [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:79:9d,bridge_name='br-int',has_traffic_filtering=True,id=075c87dd-2b98-4364-9955-b21fcbcd5b47,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap075c87dd-2b')#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.169 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.170 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.170 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:d9:79:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.171 2 INFO nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Using config drive#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.207 2 DEBUG nova.storage.rbd_utils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.865 2 DEBUG nova.network.neutron [req-09e327da-f792-4e50-85f5-8af50271f8a8 req-86cb0572-c5a1-4568-a0de-8cdabcf52d7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updated VIF entry in instance network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.866 2 DEBUG nova.network.neutron [req-09e327da-f792-4e50-85f5-8af50271f8a8 req-86cb0572-c5a1-4568-a0de-8cdabcf52d7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updating instance_info_cache with network_info: [{"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.885 2 DEBUG oslo_concurrency.lockutils [req-09e327da-f792-4e50-85f5-8af50271f8a8 req-86cb0572-c5a1-4568-a0de-8cdabcf52d7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.891 2 INFO nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Creating config drive at /var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e/disk.config#033[00m
Oct  2 09:02:42 np0005466030 nova_compute[230518]: 2025-10-02 13:02:42.900 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4q5neb2n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:43 np0005466030 nova_compute[230518]: 2025-10-02 13:02:43.049 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4q5neb2n" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:43 np0005466030 nova_compute[230518]: 2025-10-02 13:02:43.101 2 DEBUG nova.storage.rbd_utils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:43 np0005466030 nova_compute[230518]: 2025-10-02 13:02:43.106 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e/disk.config 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:43.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:02:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:43.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:02:44 np0005466030 nova_compute[230518]: 2025-10-02 13:02:44.006 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e/disk.config 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.899s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:44 np0005466030 nova_compute[230518]: 2025-10-02 13:02:44.007 2 INFO nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Deleting local config drive /var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e/disk.config because it was imported into RBD.#033[00m
Oct  2 09:02:44 np0005466030 kernel: tap075c87dd-2b: entered promiscuous mode
Oct  2 09:02:44 np0005466030 NetworkManager[44960]: <info>  [1759410164.0731] manager: (tap075c87dd-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/339)
Oct  2 09:02:44 np0005466030 nova_compute[230518]: 2025-10-02 13:02:44.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:44 np0005466030 ovn_controller[129257]: 2025-10-02T13:02:44Z|00723|binding|INFO|Claiming lport 075c87dd-2b98-4364-9955-b21fcbcd5b47 for this chassis.
Oct  2 09:02:44 np0005466030 ovn_controller[129257]: 2025-10-02T13:02:44Z|00724|binding|INFO|075c87dd-2b98-4364-9955-b21fcbcd5b47: Claiming fa:16:3e:d9:79:9d 10.100.0.10
Oct  2 09:02:44 np0005466030 systemd-udevd[298122]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.128 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:79:9d 10.100.0.10'], port_security=['fa:16:3e:d9:79:9d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '198c2dd4-f103-4bba-9fc3-9e41f44e465e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9eb88c13-ce55-413a-bc29-2cb1397ffc60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66b7c563-3269-46d5-8080-eff6b31dc260, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=075c87dd-2b98-4364-9955-b21fcbcd5b47) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.130 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 075c87dd-2b98-4364-9955-b21fcbcd5b47 in datapath 0d2f6793-3f74-40a0-b15c-09282dcbf27c bound to our chassis#033[00m
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.132 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0d2f6793-3f74-40a0-b15c-09282dcbf27c#033[00m
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.145 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d9237b27-f22d-4b74-99cf-3968ec8dd816]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.147 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0d2f6793-31 in ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.149 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0d2f6793-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.149 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ddfb9de0-3922-4626-959c-fc29d8c51df9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:44 np0005466030 systemd-machined[188247]: New machine qemu-84-instance-000000b0.
Oct  2 09:02:44 np0005466030 NetworkManager[44960]: <info>  [1759410164.1516] device (tap075c87dd-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.151 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[de44aae5-7530-4ea1-8e33-359ff2570208]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:44 np0005466030 NetworkManager[44960]: <info>  [1759410164.1527] device (tap075c87dd-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.170 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[2438a633-fe72-4827-badd-ad259ba0e567]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:44 np0005466030 systemd[1]: Started Virtual Machine qemu-84-instance-000000b0.
Oct  2 09:02:44 np0005466030 ovn_controller[129257]: 2025-10-02T13:02:44Z|00725|binding|INFO|Setting lport 075c87dd-2b98-4364-9955-b21fcbcd5b47 ovn-installed in OVS
Oct  2 09:02:44 np0005466030 ovn_controller[129257]: 2025-10-02T13:02:44Z|00726|binding|INFO|Setting lport 075c87dd-2b98-4364-9955-b21fcbcd5b47 up in Southbound
Oct  2 09:02:44 np0005466030 nova_compute[230518]: 2025-10-02 13:02:44.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:44 np0005466030 nova_compute[230518]: 2025-10-02 13:02:44.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #127. Immutable memtables: 0.
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.199390) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 127
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164199442, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 1680, "num_deletes": 252, "total_data_size": 3663000, "memory_usage": 3727520, "flush_reason": "Manual Compaction"}
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #128: started
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.204 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2bdce32e-eb4e-4d48-8f66-9d5a11bae468]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164216492, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 128, "file_size": 2407025, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63149, "largest_seqno": 64824, "table_properties": {"data_size": 2400105, "index_size": 3926, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14657, "raw_average_key_size": 19, "raw_value_size": 2385867, "raw_average_value_size": 3114, "num_data_blocks": 172, "num_entries": 766, "num_filter_entries": 766, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410029, "oldest_key_time": 1759410029, "file_creation_time": 1759410164, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 17161 microseconds, and 10080 cpu microseconds.
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.216550) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #128: 2407025 bytes OK
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.216576) [db/memtable_list.cc:519] [default] Level-0 commit table #128 started
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.218227) [db/memtable_list.cc:722] [default] Level-0 commit table #128: memtable #1 done
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.218239) EVENT_LOG_v1 {"time_micros": 1759410164218236, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.218253) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 3655200, prev total WAL file size 3655200, number of live WAL files 2.
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000124.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.223424) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323532' seq:72057594037927935, type:22 .. '6B7600353035' seq:0, type:0; will stop at (end)
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [128(2350KB)], [126(10057KB)]
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164223480, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [128], "files_L6": [126], "score": -1, "input_data_size": 12705868, "oldest_snapshot_seqno": -1}
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.248 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f88e2e-05c1-41ca-b68c-c7b5888d7b05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.255 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dca2d0b9-f22b-4c65-be5a-42158c974971]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:44 np0005466030 NetworkManager[44960]: <info>  [1759410164.2577] manager: (tap0d2f6793-30): new Veth device (/org/freedesktop/NetworkManager/Devices/340)
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.291 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e24866-7cb5-4be2-8bd1-84fbb99151a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.294 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[bfc14b9e-c137-48d8-a620-2284aab6db14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #129: 8892 keys, 11599262 bytes, temperature: kUnknown
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164305133, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 129, "file_size": 11599262, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11542102, "index_size": 33815, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22277, "raw_key_size": 232032, "raw_average_key_size": 26, "raw_value_size": 11386343, "raw_average_value_size": 1280, "num_data_blocks": 1299, "num_entries": 8892, "num_filter_entries": 8892, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410164, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 129, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.305366) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 11599262 bytes
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.306533) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.5 rd, 142.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.8 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(10.1) write-amplify(4.8) OK, records in: 9415, records dropped: 523 output_compression: NoCompression
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.306548) EVENT_LOG_v1 {"time_micros": 1759410164306541, "job": 80, "event": "compaction_finished", "compaction_time_micros": 81703, "compaction_time_cpu_micros": 36362, "output_level": 6, "num_output_files": 1, "total_output_size": 11599262, "num_input_records": 9415, "num_output_records": 8892, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164306962, "job": 80, "event": "table_file_deletion", "file_number": 128}
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000126.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164308363, "job": 80, "event": "table_file_deletion", "file_number": 126}
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.223343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.310294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.310299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.310300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.310302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.310303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:44 np0005466030 NetworkManager[44960]: <info>  [1759410164.3186] device (tap0d2f6793-30): carrier: link connected
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.324 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[af9c2a8c-2a29-4671-8d15-9a39e31784b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.339 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8d9c92-4b6c-4720-9aea-44d723ffc1a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d2f6793-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:aa:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 802787, 'reachable_time': 16255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298158, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.353 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4601c058-52c4-4f7a-86d6-c3e67baa95f8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:aab7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 802787, 'tstamp': 802787}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298159, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.366 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad0c0e9-b2c5-4cca-b966-d9e6704eab0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d2f6793-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:aa:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 802787, 'reachable_time': 16255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298160, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.397 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ee31da40-2694-4940-9198-36328beabd63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.459 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a5abf83d-5b21-4b5c-ba54-bd7df41fdfd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.460 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d2f6793-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.461 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.461 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d2f6793-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:44 np0005466030 nova_compute[230518]: 2025-10-02 13:02:44.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:44 np0005466030 kernel: tap0d2f6793-30: entered promiscuous mode
Oct  2 09:02:44 np0005466030 NetworkManager[44960]: <info>  [1759410164.4643] manager: (tap0d2f6793-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.469 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0d2f6793-30, col_values=(('external_ids', {'iface-id': '0dfea1be-4d56-45ad-8b1f-483fdf57471e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:02:44 np0005466030 ovn_controller[129257]: 2025-10-02T13:02:44Z|00727|binding|INFO|Releasing lport 0dfea1be-4d56-45ad-8b1f-483fdf57471e from this chassis (sb_readonly=0)
Oct  2 09:02:44 np0005466030 nova_compute[230518]: 2025-10-02 13:02:44.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.502 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0d2f6793-3f74-40a0-b15c-09282dcbf27c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0d2f6793-3f74-40a0-b15c-09282dcbf27c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.504 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1be15e84-c078-41d0-9e89-0769af063c79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.505 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-0d2f6793-3f74-40a0-b15c-09282dcbf27c
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/0d2f6793-3f74-40a0-b15c-09282dcbf27c.pid.haproxy
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 0d2f6793-3f74-40a0-b15c-09282dcbf27c
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  2 09:02:44 np0005466030 nova_compute[230518]: 2025-10-02 13:02:44.506 2 DEBUG nova.compute.manager [req-50b227d3-c7b6-4200-8a11-0193a3031e84 req-10bd17e2-e7e9-487c-8da6-505b86270277 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:02:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.507 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'env', 'PROCESS_TAG=haproxy-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0d2f6793-3f74-40a0-b15c-09282dcbf27c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  2 09:02:44 np0005466030 nova_compute[230518]: 2025-10-02 13:02:44.507 2 DEBUG oslo_concurrency.lockutils [req-50b227d3-c7b6-4200-8a11-0193a3031e84 req-10bd17e2-e7e9-487c-8da6-505b86270277 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:02:44 np0005466030 nova_compute[230518]: 2025-10-02 13:02:44.508 2 DEBUG oslo_concurrency.lockutils [req-50b227d3-c7b6-4200-8a11-0193a3031e84 req-10bd17e2-e7e9-487c-8da6-505b86270277 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:02:44 np0005466030 nova_compute[230518]: 2025-10-02 13:02:44.509 2 DEBUG oslo_concurrency.lockutils [req-50b227d3-c7b6-4200-8a11-0193a3031e84 req-10bd17e2-e7e9-487c-8da6-505b86270277 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:02:44 np0005466030 nova_compute[230518]: 2025-10-02 13:02:44.509 2 DEBUG nova.compute.manager [req-50b227d3-c7b6-4200-8a11-0193a3031e84 req-10bd17e2-e7e9-487c-8da6-505b86270277 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Processing event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 09:02:44 np0005466030 nova_compute[230518]: 2025-10-02 13:02:44.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:02:44 np0005466030 podman[298235]: 2025-10-02 13:02:44.880209229 +0000 UTC m=+0.024180843 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:02:45 np0005466030 podman[298235]: 2025-10-02 13:02:45.067598211 +0000 UTC m=+0.211569735 container create dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.128 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410165.1279624, 198c2dd4-f103-4bba-9fc3-9e41f44e465e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.129 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] VM Started (Lifecycle Event)
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.131 2 DEBUG nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.135 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.139 2 INFO nova.virt.libvirt.driver [-] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Instance spawned successfully.
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.139 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.175 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:02:45 np0005466030 systemd[1]: Started libpod-conmon-dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b.scope.
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.181 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.191 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.192 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.192 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.193 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.193 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.194 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:02:45 np0005466030 systemd[1]: Started libcrun container.
Oct  2 09:02:45 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0ed691c9efbcda2feeff6f5bd8ee2f1b7fe6192de23eebb1b94af40b0e8291e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.226 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.226 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410165.1280732, 198c2dd4-f103-4bba-9fc3-9e41f44e465e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.227 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] VM Paused (Lifecycle Event)
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.273 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.277 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410165.1343846, 198c2dd4-f103-4bba-9fc3-9e41f44e465e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.277 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] VM Resumed (Lifecycle Event)
Oct  2 09:02:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:45.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:45 np0005466030 podman[298235]: 2025-10-02 13:02:45.291250861 +0000 UTC m=+0.435222405 container init dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:02:45 np0005466030 podman[298235]: 2025-10-02 13:02:45.299245288 +0000 UTC m=+0.443216792 container start dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.316 2 INFO nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Took 10.04 seconds to spawn the instance on the hypervisor.
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.317 2 DEBUG nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.320 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:02:45 np0005466030 neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c[298249]: [NOTICE]   (298253) : New worker (298255) forked
Oct  2 09:02:45 np0005466030 neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c[298249]: [NOTICE]   (298253) : Loading success.
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.329 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.362 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.405 2 INFO nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Took 11.06 seconds to build instance.
Oct  2 09:02:45 np0005466030 nova_compute[230518]: 2025-10-02 13:02:45.435 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:02:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:45.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e352 e352: 3 total, 3 up, 3 in
Oct  2 09:02:46 np0005466030 nova_compute[230518]: 2025-10-02 13:02:46.586 2 DEBUG nova.compute.manager [req-55724042-a919-48e6-aac1-1805045aaf86 req-26a9856b-8279-401e-bcf1-5ee2e799fdac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:02:46 np0005466030 nova_compute[230518]: 2025-10-02 13:02:46.588 2 DEBUG oslo_concurrency.lockutils [req-55724042-a919-48e6-aac1-1805045aaf86 req-26a9856b-8279-401e-bcf1-5ee2e799fdac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:02:46 np0005466030 nova_compute[230518]: 2025-10-02 13:02:46.588 2 DEBUG oslo_concurrency.lockutils [req-55724042-a919-48e6-aac1-1805045aaf86 req-26a9856b-8279-401e-bcf1-5ee2e799fdac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:02:46 np0005466030 nova_compute[230518]: 2025-10-02 13:02:46.589 2 DEBUG oslo_concurrency.lockutils [req-55724042-a919-48e6-aac1-1805045aaf86 req-26a9856b-8279-401e-bcf1-5ee2e799fdac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:02:46 np0005466030 nova_compute[230518]: 2025-10-02 13:02:46.589 2 DEBUG nova.compute.manager [req-55724042-a919-48e6-aac1-1805045aaf86 req-26a9856b-8279-401e-bcf1-5ee2e799fdac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] No waiting events found dispatching network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:02:46 np0005466030 nova_compute[230518]: 2025-10-02 13:02:46.590 2 WARNING nova.compute.manager [req-55724042-a919-48e6-aac1-1805045aaf86 req-26a9856b-8279-401e-bcf1-5ee2e799fdac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received unexpected event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 for instance with vm_state active and task_state None.
Oct  2 09:02:47 np0005466030 nova_compute[230518]: 2025-10-02 13:02:47.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:02:47 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:02:47 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:02:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:47.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:47.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:49 np0005466030 nova_compute[230518]: 2025-10-02 13:02:49.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:49 np0005466030 NetworkManager[44960]: <info>  [1759410169.0869] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Oct  2 09:02:49 np0005466030 NetworkManager[44960]: <info>  [1759410169.0882] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Oct  2 09:02:49 np0005466030 nova_compute[230518]: 2025-10-02 13:02:49.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:49 np0005466030 ovn_controller[129257]: 2025-10-02T13:02:49Z|00728|binding|INFO|Releasing lport 0dfea1be-4d56-45ad-8b1f-483fdf57471e from this chassis (sb_readonly=0)
Oct  2 09:02:49 np0005466030 ovn_controller[129257]: 2025-10-02T13:02:49Z|00729|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct  2 09:02:49 np0005466030 nova_compute[230518]: 2025-10-02 13:02:49.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:49.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:49 np0005466030 nova_compute[230518]: 2025-10-02 13:02:49.373 2 DEBUG nova.compute.manager [req-ed627347-4fe7-430e-88ac-a13ae6a56106 req-731d0586-30e7-438f-9a50-bab74bc616c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:02:49 np0005466030 nova_compute[230518]: 2025-10-02 13:02:49.373 2 DEBUG nova.compute.manager [req-ed627347-4fe7-430e-88ac-a13ae6a56106 req-731d0586-30e7-438f-9a50-bab74bc616c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing instance network info cache due to event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 09:02:49 np0005466030 nova_compute[230518]: 2025-10-02 13:02:49.373 2 DEBUG oslo_concurrency.lockutils [req-ed627347-4fe7-430e-88ac-a13ae6a56106 req-731d0586-30e7-438f-9a50-bab74bc616c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:02:49 np0005466030 nova_compute[230518]: 2025-10-02 13:02:49.374 2 DEBUG oslo_concurrency.lockutils [req-ed627347-4fe7-430e-88ac-a13ae6a56106 req-731d0586-30e7-438f-9a50-bab74bc616c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:02:49 np0005466030 nova_compute[230518]: 2025-10-02 13:02:49.374 2 DEBUG nova.network.neutron [req-ed627347-4fe7-430e-88ac-a13ae6a56106 req-731d0586-30e7-438f-9a50-bab74bc616c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 09:02:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:02:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:49.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:02:49 np0005466030 nova_compute[230518]: 2025-10-02 13:02:49.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:02:50 np0005466030 nova_compute[230518]: 2025-10-02 13:02:50.764 2 DEBUG nova.network.neutron [req-ed627347-4fe7-430e-88ac-a13ae6a56106 req-731d0586-30e7-438f-9a50-bab74bc616c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updated VIF entry in instance network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 09:02:50 np0005466030 nova_compute[230518]: 2025-10-02 13:02:50.766 2 DEBUG nova.network.neutron [req-ed627347-4fe7-430e-88ac-a13ae6a56106 req-731d0586-30e7-438f-9a50-bab74bc616c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updating instance_info_cache with network_info: [{"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:02:50 np0005466030 nova_compute[230518]: 2025-10-02 13:02:50.793 2 DEBUG oslo_concurrency.lockutils [req-ed627347-4fe7-430e-88ac-a13ae6a56106 req-731d0586-30e7-438f-9a50-bab74bc616c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:02:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:51.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:02:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:51.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:02:52 np0005466030 nova_compute[230518]: 2025-10-02 13:02:52.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:02:52 np0005466030 podman[298316]: 2025-10-02 13:02:52.81721631 +0000 UTC m=+0.069332365 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct  2 09:02:52 np0005466030 podman[298315]: 2025-10-02 13:02:52.837025835 +0000 UTC m=+0.089532622 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 09:02:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:53.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:53.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 e353: 3 total, 3 up, 3 in
Oct  2 09:02:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:54 np0005466030 nova_compute[230518]: 2025-10-02 13:02:54.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:55.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:55.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:57 np0005466030 nova_compute[230518]: 2025-10-02 13:02:57.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:02:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:57.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:02:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:57.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:58 np0005466030 nova_compute[230518]: 2025-10-02 13:02:58.057 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:58 np0005466030 nova_compute[230518]: 2025-10-02 13:02:58.058 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:58 np0005466030 nova_compute[230518]: 2025-10-02 13:02:58.086 2 DEBUG nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:02:58 np0005466030 nova_compute[230518]: 2025-10-02 13:02:58.205 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:58 np0005466030 nova_compute[230518]: 2025-10-02 13:02:58.206 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:58 np0005466030 nova_compute[230518]: 2025-10-02 13:02:58.214 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:02:58 np0005466030 nova_compute[230518]: 2025-10-02 13:02:58.215 2 INFO nova.compute.claims [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:02:58 np0005466030 nova_compute[230518]: 2025-10-02 13:02:58.370 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:02:58 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1783475136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:02:58 np0005466030 nova_compute[230518]: 2025-10-02 13:02:58.893 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:58 np0005466030 nova_compute[230518]: 2025-10-02 13:02:58.902 2 DEBUG nova.compute.provider_tree [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:02:58 np0005466030 nova_compute[230518]: 2025-10-02 13:02:58.924 2 DEBUG nova.scheduler.client.report [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:02:58 np0005466030 nova_compute[230518]: 2025-10-02 13:02:58.955 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:58 np0005466030 nova_compute[230518]: 2025-10-02 13:02:58.956 2 DEBUG nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.036 2 DEBUG nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.037 2 DEBUG nova.network.neutron [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #130. Immutable memtables: 0.
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.047574) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 130
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179047763, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 468, "num_deletes": 257, "total_data_size": 550306, "memory_usage": 560448, "flush_reason": "Manual Compaction"}
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #131: started
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179053813, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 131, "file_size": 363267, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64829, "largest_seqno": 65292, "table_properties": {"data_size": 360627, "index_size": 675, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6460, "raw_average_key_size": 18, "raw_value_size": 355171, "raw_average_value_size": 1029, "num_data_blocks": 29, "num_entries": 345, "num_filter_entries": 345, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410164, "oldest_key_time": 1759410164, "file_creation_time": 1759410179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 6409 microseconds, and 3552 cpu microseconds.
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.053982) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #131: 363267 bytes OK
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.054149) [db/memtable_list.cc:519] [default] Level-0 commit table #131 started
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.055400) [db/memtable_list.cc:722] [default] Level-0 commit table #131: memtable #1 done
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.055417) EVENT_LOG_v1 {"time_micros": 1759410179055411, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.055439) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 547385, prev total WAL file size 547385, number of live WAL files 2.
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000127.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.056390) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323635' seq:72057594037927935, type:22 .. '6C6F676D0032353137' seq:0, type:0; will stop at (end)
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [131(354KB)], [129(11MB)]
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179056503, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [131], "files_L6": [129], "score": -1, "input_data_size": 11962529, "oldest_snapshot_seqno": -1}
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.065 2 INFO nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.089 2 DEBUG nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #132: 8707 keys, 11813076 bytes, temperature: kUnknown
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179144047, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 132, "file_size": 11813076, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11756387, "index_size": 33792, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21829, "raw_key_size": 229162, "raw_average_key_size": 26, "raw_value_size": 11603126, "raw_average_value_size": 1332, "num_data_blocks": 1295, "num_entries": 8707, "num_filter_entries": 8707, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 132, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.144377) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11813076 bytes
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.145980) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.5 rd, 134.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.1 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(65.4) write-amplify(32.5) OK, records in: 9237, records dropped: 530 output_compression: NoCompression
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.145995) EVENT_LOG_v1 {"time_micros": 1759410179145988, "job": 82, "event": "compaction_finished", "compaction_time_micros": 87617, "compaction_time_cpu_micros": 31203, "output_level": 6, "num_output_files": 1, "total_output_size": 11813076, "num_input_records": 9237, "num_output_records": 8707, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179146156, "job": 82, "event": "table_file_deletion", "file_number": 131}
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000129.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179147722, "job": 82, "event": "table_file_deletion", "file_number": 129}
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.056018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.147805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.147816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.147819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.147823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.147826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.215 2 DEBUG nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.218 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.219 2 INFO nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Creating image(s)#033[00m
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.271 2 DEBUG nova.storage.rbd_utils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:02:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:59.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.325 2 DEBUG nova.storage.rbd_utils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.360 2 DEBUG nova.storage.rbd_utils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.364 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.409 2 DEBUG nova.policy [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:02:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.443 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.444 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.445 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.445 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:02:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:02:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:02:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:59.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.485 2 DEBUG nova.storage.rbd_utils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.491 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:02:59 np0005466030 nova_compute[230518]: 2025-10-02 13:02:59.938 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:03:00 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:00Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:79:9d 10.100.0.10
Oct  2 09:03:00 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:00Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:79:9d 10.100.0.10
Oct  2 09:03:00 np0005466030 nova_compute[230518]: 2025-10-02 13:03:00.041 2 DEBUG nova.storage.rbd_utils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:03:00 np0005466030 nova_compute[230518]: 2025-10-02 13:03:00.167 2 DEBUG nova.objects.instance [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:03:00 np0005466030 nova_compute[230518]: 2025-10-02 13:03:00.183 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:03:00 np0005466030 nova_compute[230518]: 2025-10-02 13:03:00.184 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Ensure instance console log exists: /var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:03:00 np0005466030 nova_compute[230518]: 2025-10-02 13:03:00.184 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:00 np0005466030 nova_compute[230518]: 2025-10-02 13:03:00.185 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:00 np0005466030 nova_compute[230518]: 2025-10-02 13:03:00.185 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:00 np0005466030 nova_compute[230518]: 2025-10-02 13:03:00.198 2 DEBUG nova.network.neutron [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Successfully created port: 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:03:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:01.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:01.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:02 np0005466030 nova_compute[230518]: 2025-10-02 13:03:02.033 2 DEBUG nova.network.neutron [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Successfully updated port: 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:03:02 np0005466030 nova_compute[230518]: 2025-10-02 13:03:02.072 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:03:02 np0005466030 nova_compute[230518]: 2025-10-02 13:03:02.072 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:03:02 np0005466030 nova_compute[230518]: 2025-10-02 13:03:02.072 2 DEBUG nova.network.neutron [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:03:02 np0005466030 nova_compute[230518]: 2025-10-02 13:03:02.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:02 np0005466030 nova_compute[230518]: 2025-10-02 13:03:02.168 2 DEBUG nova.compute.manager [req-a0932b96-6a6f-4595-b401-2433d9624058 req-9f44a657-60f0-43e5-9752-e887072fcef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received event network-changed-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:02 np0005466030 nova_compute[230518]: 2025-10-02 13:03:02.169 2 DEBUG nova.compute.manager [req-a0932b96-6a6f-4595-b401-2433d9624058 req-9f44a657-60f0-43e5-9752-e887072fcef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Refreshing instance network info cache due to event network-changed-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:03:02 np0005466030 nova_compute[230518]: 2025-10-02 13:03:02.169 2 DEBUG oslo_concurrency.lockutils [req-a0932b96-6a6f-4595-b401-2433d9624058 req-9f44a657-60f0-43e5-9752-e887072fcef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:03:02 np0005466030 nova_compute[230518]: 2025-10-02 13:03:02.240 2 DEBUG nova.network.neutron [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:03:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:03.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.328 2 DEBUG nova.network.neutron [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Updating instance_info_cache with network_info: [{"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.348 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.348 2 DEBUG nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Instance network_info: |[{"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.349 2 DEBUG oslo_concurrency.lockutils [req-a0932b96-6a6f-4595-b401-2433d9624058 req-9f44a657-60f0-43e5-9752-e887072fcef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.349 2 DEBUG nova.network.neutron [req-a0932b96-6a6f-4595-b401-2433d9624058 req-9f44a657-60f0-43e5-9752-e887072fcef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Refreshing network info cache for port 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.351 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Start _get_guest_xml network_info=[{"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.357 2 WARNING nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.361 2 DEBUG nova.virt.libvirt.host [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.361 2 DEBUG nova.virt.libvirt.host [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.364 2 DEBUG nova.virt.libvirt.host [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.364 2 DEBUG nova.virt.libvirt.host [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.365 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.365 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.366 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.366 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.366 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.367 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.367 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.367 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.367 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.367 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.368 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.368 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.370 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:03:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:03:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:03.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:03:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:03:03 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3110993335' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.840 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.883 2 DEBUG nova.storage.rbd_utils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:03:03 np0005466030 nova_compute[230518]: 2025-10-02 13:03:03.890 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:03:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:03:04 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/425658778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.344 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.347 2 DEBUG nova.virt.libvirt.vif [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:02:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1086725747',display_name='tempest-TestNetworkBasicOps-server-1086725747',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1086725747',id=179,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPBABVkZI5Kx2o3IBmNelxKPrpcXX1o46OX/ra3kYdzmZFj/cCMhJ1511ulGrJ3qwtAcfGfzsPlSIVbMP2imMAvPUtwUpeHp534Qlat71VA1CohVAjbm/2X4YYdTo5vxIw==',key_name='tempest-TestNetworkBasicOps-1630436212',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-iilx3u08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:02:59Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.348 2 DEBUG nova.network.os_vif_util [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.349 2 DEBUG nova.network.os_vif_util [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ac:86,bridge_name='br-int',has_traffic_filtering=True,id=4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fcd0b0b-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.350 2 DEBUG nova.objects.instance [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.369 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:03:04 np0005466030 nova_compute[230518]:  <uuid>fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13</uuid>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:  <name>instance-000000b3</name>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <nova:name>tempest-TestNetworkBasicOps-server-1086725747</nova:name>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:03:03</nova:creationTime>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:03:04 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:        <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:        <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:        <nova:port uuid="4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8">
Oct  2 09:03:04 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <entry name="serial">fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13</entry>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <entry name="uuid">fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13</entry>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk">
Oct  2 09:03:04 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:03:04 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk.config">
Oct  2 09:03:04 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:03:04 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:30:ac:86"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <target dev="tap4fcd0b0b-1e"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13/console.log" append="off"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:03:04 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:03:04 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:03:04 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:03:04 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.371 2 DEBUG nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Preparing to wait for external event network-vif-plugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.371 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.372 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.372 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.373 2 DEBUG nova.virt.libvirt.vif [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:02:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1086725747',display_name='tempest-TestNetworkBasicOps-server-1086725747',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1086725747',id=179,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPBABVkZI5Kx2o3IBmNelxKPrpcXX1o46OX/ra3kYdzmZFj/cCMhJ1511ulGrJ3qwtAcfGfzsPlSIVbMP2imMAvPUtwUpeHp534Qlat71VA1CohVAjbm/2X4YYdTo5vxIw==',key_name='tempest-TestNetworkBasicOps-1630436212',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-iilx3u08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:02:59Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.373 2 DEBUG nova.network.os_vif_util [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.374 2 DEBUG nova.network.os_vif_util [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ac:86,bridge_name='br-int',has_traffic_filtering=True,id=4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fcd0b0b-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.375 2 DEBUG os_vif [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ac:86,bridge_name='br-int',has_traffic_filtering=True,id=4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fcd0b0b-1e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.377 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.381 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fcd0b0b-1e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.382 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fcd0b0b-1e, col_values=(('external_ids', {'iface-id': '4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:ac:86', 'vm-uuid': 'fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:03:04 np0005466030 NetworkManager[44960]: <info>  [1759410184.3851] manager: (tap4fcd0b0b-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/344)
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.391 2 INFO os_vif [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ac:86,bridge_name='br-int',has_traffic_filtering=True,id=4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fcd0b0b-1e')
Oct  2 09:03:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.457 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.457 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.458 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:30:ac:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.459 2 INFO nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Using config drive
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.490 2 DEBUG nova.storage.rbd_utils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:03:04 np0005466030 nova_compute[230518]: 2025-10-02 13:03:04.997 2 INFO nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Creating config drive at /var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13/disk.config
Oct  2 09:03:05 np0005466030 nova_compute[230518]: 2025-10-02 13:03:05.003 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr1kn194i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:03:05 np0005466030 nova_compute[230518]: 2025-10-02 13:03:05.106 2 DEBUG nova.network.neutron [req-a0932b96-6a6f-4595-b401-2433d9624058 req-9f44a657-60f0-43e5-9752-e887072fcef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Updated VIF entry in instance network info cache for port 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 09:03:05 np0005466030 nova_compute[230518]: 2025-10-02 13:03:05.107 2 DEBUG nova.network.neutron [req-a0932b96-6a6f-4595-b401-2433d9624058 req-9f44a657-60f0-43e5-9752-e887072fcef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Updating instance_info_cache with network_info: [{"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:03:05 np0005466030 nova_compute[230518]: 2025-10-02 13:03:05.132 2 DEBUG oslo_concurrency.lockutils [req-a0932b96-6a6f-4595-b401-2433d9624058 req-9f44a657-60f0-43e5-9752-e887072fcef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:03:05 np0005466030 nova_compute[230518]: 2025-10-02 13:03:05.144 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr1kn194i" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:03:05 np0005466030 nova_compute[230518]: 2025-10-02 13:03:05.173 2 DEBUG nova.storage.rbd_utils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:03:05 np0005466030 nova_compute[230518]: 2025-10-02 13:03:05.177 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13/disk.config fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:03:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:05.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:05 np0005466030 nova_compute[230518]: 2025-10-02 13:03:05.432 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13/disk.config fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:03:05 np0005466030 nova_compute[230518]: 2025-10-02 13:03:05.433 2 INFO nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Deleting local config drive /var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13/disk.config because it was imported into RBD.
Oct  2 09:03:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:05.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:05 np0005466030 kernel: tap4fcd0b0b-1e: entered promiscuous mode
Oct  2 09:03:05 np0005466030 NetworkManager[44960]: <info>  [1759410185.4991] manager: (tap4fcd0b0b-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/345)
Oct  2 09:03:05 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:05Z|00730|binding|INFO|Claiming lport 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 for this chassis.
Oct  2 09:03:05 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:05Z|00731|binding|INFO|4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8: Claiming fa:16:3e:30:ac:86 10.100.0.4
Oct  2 09:03:05 np0005466030 nova_compute[230518]: 2025-10-02 13:03:05.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.508 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ac:86 10.100.0.4'], port_security=['fa:16:3e:30:ac:86 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': '15970012-f057-462f-9dfb-1daddc0bd092', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66b7c563-3269-46d5-8080-eff6b31dc260, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:03:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.509 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 in datapath 0d2f6793-3f74-40a0-b15c-09282dcbf27c bound to our chassis#033[00m
Oct  2 09:03:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.511 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0d2f6793-3f74-40a0-b15c-09282dcbf27c#033[00m
Oct  2 09:03:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.529 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e5ddce0e-8d4d-4ad5-bbdd-14041466beb2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:05 np0005466030 systemd-udevd[298681]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:03:05 np0005466030 systemd-machined[188247]: New machine qemu-85-instance-000000b3.
Oct  2 09:03:05 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:05Z|00732|binding|INFO|Setting lport 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 ovn-installed in OVS
Oct  2 09:03:05 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:05Z|00733|binding|INFO|Setting lport 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 up in Southbound
Oct  2 09:03:05 np0005466030 nova_compute[230518]: 2025-10-02 13:03:05.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:05 np0005466030 nova_compute[230518]: 2025-10-02 13:03:05.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:05 np0005466030 systemd[1]: Started Virtual Machine qemu-85-instance-000000b3.
Oct  2 09:03:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.557 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c19809f4-d57b-4a40-96bc-8ed5092f1e7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:05 np0005466030 NetworkManager[44960]: <info>  [1759410185.5649] device (tap4fcd0b0b-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:03:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.563 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d5bfdf1c-ef04-4b4e-95ff-f024ba357a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:05 np0005466030 NetworkManager[44960]: <info>  [1759410185.5658] device (tap4fcd0b0b-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:03:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.600 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[bceaee2e-5ab2-4c37-9c6d-118432a8f97d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.623 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e6334703-d13b-4de9-9289-b872cb470dc3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d2f6793-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:aa:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 802787, 'reachable_time': 16255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298692, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.645 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1d44f40c-9eee-4c0e-927e-e67203dd2c0f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0d2f6793-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 802797, 'tstamp': 802797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298694, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0d2f6793-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 802801, 'tstamp': 802801}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298694, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.647 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d2f6793-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.650 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d2f6793-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.650 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:03:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.651 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0d2f6793-30, col_values=(('external_ids', {'iface-id': '0dfea1be-4d56-45ad-8b1f-483fdf57471e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.651 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:03:05 np0005466030 nova_compute[230518]: 2025-10-02 13:03:05.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:05 np0005466030 nova_compute[230518]: 2025-10-02 13:03:05.991 2 DEBUG nova.compute.manager [req-e8d3717f-3714-4a44-a2dc-371882b239b6 req-a620b23a-c022-4139-a0c4-486136213630 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received event network-vif-plugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:05 np0005466030 nova_compute[230518]: 2025-10-02 13:03:05.991 2 DEBUG oslo_concurrency.lockutils [req-e8d3717f-3714-4a44-a2dc-371882b239b6 req-a620b23a-c022-4139-a0c4-486136213630 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:05 np0005466030 nova_compute[230518]: 2025-10-02 13:03:05.992 2 DEBUG oslo_concurrency.lockutils [req-e8d3717f-3714-4a44-a2dc-371882b239b6 req-a620b23a-c022-4139-a0c4-486136213630 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:05 np0005466030 nova_compute[230518]: 2025-10-02 13:03:05.993 2 DEBUG oslo_concurrency.lockutils [req-e8d3717f-3714-4a44-a2dc-371882b239b6 req-a620b23a-c022-4139-a0c4-486136213630 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:05 np0005466030 nova_compute[230518]: 2025-10-02 13:03:05.993 2 DEBUG nova.compute.manager [req-e8d3717f-3714-4a44-a2dc-371882b239b6 req-a620b23a-c022-4139-a0c4-486136213630 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Processing event network-vif-plugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:03:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:07.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.316 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410187.3157213, fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.317 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] VM Started (Lifecycle Event)#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.319 2 DEBUG nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.323 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.327 2 INFO nova.virt.libvirt.driver [-] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Instance spawned successfully.#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.328 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.346 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.352 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.356 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.357 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.357 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.357 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.358 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.358 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.390 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.390 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410187.3159416, fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.390 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:03:07 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:07Z|00734|binding|INFO|Releasing lport 0dfea1be-4d56-45ad-8b1f-483fdf57471e from this chassis (sb_readonly=0)
Oct  2 09:03:07 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:07Z|00735|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.423 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.426 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410187.323382, fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.426 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.450 2 INFO nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Took 8.23 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.451 2 DEBUG nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.460 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.466 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:03:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:07.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.506 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.578 2 INFO nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Took 9.40 seconds to build instance.#033[00m
Oct  2 09:03:07 np0005466030 nova_compute[230518]: 2025-10-02 13:03:07.600 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:08 np0005466030 nova_compute[230518]: 2025-10-02 13:03:08.071 2 DEBUG nova.compute.manager [req-357790e2-203c-47f9-9c3f-2b8bbaf63fbb req-0f69f7cb-83c8-4aed-bda3-d7b9d80e7f88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received event network-vif-plugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:08 np0005466030 nova_compute[230518]: 2025-10-02 13:03:08.072 2 DEBUG oslo_concurrency.lockutils [req-357790e2-203c-47f9-9c3f-2b8bbaf63fbb req-0f69f7cb-83c8-4aed-bda3-d7b9d80e7f88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:08 np0005466030 nova_compute[230518]: 2025-10-02 13:03:08.072 2 DEBUG oslo_concurrency.lockutils [req-357790e2-203c-47f9-9c3f-2b8bbaf63fbb req-0f69f7cb-83c8-4aed-bda3-d7b9d80e7f88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:08 np0005466030 nova_compute[230518]: 2025-10-02 13:03:08.072 2 DEBUG oslo_concurrency.lockutils [req-357790e2-203c-47f9-9c3f-2b8bbaf63fbb req-0f69f7cb-83c8-4aed-bda3-d7b9d80e7f88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:08 np0005466030 nova_compute[230518]: 2025-10-02 13:03:08.072 2 DEBUG nova.compute.manager [req-357790e2-203c-47f9-9c3f-2b8bbaf63fbb req-0f69f7cb-83c8-4aed-bda3-d7b9d80e7f88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] No waiting events found dispatching network-vif-plugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:03:08 np0005466030 nova_compute[230518]: 2025-10-02 13:03:08.073 2 WARNING nova.compute.manager [req-357790e2-203c-47f9-9c3f-2b8bbaf63fbb req-0f69f7cb-83c8-4aed-bda3-d7b9d80e7f88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received unexpected event network-vif-plugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:03:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:09.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:09 np0005466030 nova_compute[230518]: 2025-10-02 13:03:09.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:03:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:09.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:03:09 np0005466030 nova_compute[230518]: 2025-10-02 13:03:09.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:11 np0005466030 nova_compute[230518]: 2025-10-02 13:03:11.003 2 DEBUG nova.compute.manager [req-bb258a24-0411-40a0-a9f2-befd6af1039d req-75688438-119f-4d95-87d5-5e30cce7142e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received event network-changed-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:11 np0005466030 nova_compute[230518]: 2025-10-02 13:03:11.003 2 DEBUG nova.compute.manager [req-bb258a24-0411-40a0-a9f2-befd6af1039d req-75688438-119f-4d95-87d5-5e30cce7142e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Refreshing instance network info cache due to event network-changed-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:03:11 np0005466030 nova_compute[230518]: 2025-10-02 13:03:11.005 2 DEBUG oslo_concurrency.lockutils [req-bb258a24-0411-40a0-a9f2-befd6af1039d req-75688438-119f-4d95-87d5-5e30cce7142e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:03:11 np0005466030 nova_compute[230518]: 2025-10-02 13:03:11.006 2 DEBUG oslo_concurrency.lockutils [req-bb258a24-0411-40a0-a9f2-befd6af1039d req-75688438-119f-4d95-87d5-5e30cce7142e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:03:11 np0005466030 nova_compute[230518]: 2025-10-02 13:03:11.006 2 DEBUG nova.network.neutron [req-bb258a24-0411-40a0-a9f2-befd6af1039d req-75688438-119f-4d95-87d5-5e30cce7142e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Refreshing network info cache for port 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:03:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:03:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:11.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:03:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:11.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:12 np0005466030 nova_compute[230518]: 2025-10-02 13:03:12.176 2 DEBUG nova.network.neutron [req-bb258a24-0411-40a0-a9f2-befd6af1039d req-75688438-119f-4d95-87d5-5e30cce7142e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Updated VIF entry in instance network info cache for port 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:03:12 np0005466030 nova_compute[230518]: 2025-10-02 13:03:12.177 2 DEBUG nova.network.neutron [req-bb258a24-0411-40a0-a9f2-befd6af1039d req-75688438-119f-4d95-87d5-5e30cce7142e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Updating instance_info_cache with network_info: [{"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:03:12 np0005466030 nova_compute[230518]: 2025-10-02 13:03:12.201 2 DEBUG oslo_concurrency.lockutils [req-bb258a24-0411-40a0-a9f2-befd6af1039d req-75688438-119f-4d95-87d5-5e30cce7142e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:03:12 np0005466030 podman[298738]: 2025-10-02 13:03:12.822373573 +0000 UTC m=+0.068801629 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct  2 09:03:12 np0005466030 podman[298737]: 2025-10-02 13:03:12.849470674 +0000 UTC m=+0.093688382 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:03:12 np0005466030 nova_compute[230518]: 2025-10-02 13:03:12.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:03:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:13.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:03:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:13.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:14 np0005466030 nova_compute[230518]: 2025-10-02 13:03:14.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:14 np0005466030 nova_compute[230518]: 2025-10-02 13:03:14.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.076 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.078 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.078 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:03:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:15.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:03:15 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2532067342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.510 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:03:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:15.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.608 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.608 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.611 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.612 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.614 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.615 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.766 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.767 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3759MB free_disk=20.786266326904297GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.767 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.767 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.871 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7c31bb0f-22b5-42a4-9b38-8ad3daac689f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.871 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 198c2dd4-f103-4bba-9fc3-9e41f44e465e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.872 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.872 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:03:15 np0005466030 nova_compute[230518]: 2025-10-02 13:03:15.872 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:03:16 np0005466030 nova_compute[230518]: 2025-10-02 13:03:16.057 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:03:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:03:16 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1071118977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:03:16 np0005466030 nova_compute[230518]: 2025-10-02 13:03:16.511 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:03:16 np0005466030 nova_compute[230518]: 2025-10-02 13:03:16.520 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:03:16 np0005466030 nova_compute[230518]: 2025-10-02 13:03:16.541 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:03:16 np0005466030 nova_compute[230518]: 2025-10-02 13:03:16.575 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:03:16 np0005466030 nova_compute[230518]: 2025-10-02 13:03:16.575 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:17.190 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:03:17 np0005466030 nova_compute[230518]: 2025-10-02 13:03:17.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:17.194 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:03:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:17.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:17 np0005466030 nova_compute[230518]: 2025-10-02 13:03:17.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:17.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:19.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:19 np0005466030 nova_compute[230518]: 2025-10-02 13:03:19.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:19.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:19 np0005466030 nova_compute[230518]: 2025-10-02 13:03:19.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:20 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:20Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:ac:86 10.100.0.4
Oct  2 09:03:20 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:20Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:ac:86 10.100.0.4
Oct  2 09:03:20 np0005466030 nova_compute[230518]: 2025-10-02 13:03:20.577 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:20 np0005466030 nova_compute[230518]: 2025-10-02 13:03:20.577 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:20 np0005466030 nova_compute[230518]: 2025-10-02 13:03:20.578 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:20 np0005466030 nova_compute[230518]: 2025-10-02 13:03:20.578 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:03:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:21.196 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:03:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:21.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:03:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:21.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:22 np0005466030 nova_compute[230518]: 2025-10-02 13:03:22.049 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:23 np0005466030 nova_compute[230518]: 2025-10-02 13:03:23.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:03:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:23.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:03:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:23.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:23 np0005466030 nova_compute[230518]: 2025-10-02 13:03:23.803 2 DEBUG oslo_concurrency.lockutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:23 np0005466030 nova_compute[230518]: 2025-10-02 13:03:23.805 2 DEBUG oslo_concurrency.lockutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:23 np0005466030 nova_compute[230518]: 2025-10-02 13:03:23.805 2 DEBUG oslo_concurrency.lockutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:23 np0005466030 nova_compute[230518]: 2025-10-02 13:03:23.805 2 DEBUG oslo_concurrency.lockutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:23 np0005466030 nova_compute[230518]: 2025-10-02 13:03:23.805 2 DEBUG oslo_concurrency.lockutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:23 np0005466030 nova_compute[230518]: 2025-10-02 13:03:23.807 2 INFO nova.compute.manager [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Terminating instance#033[00m
Oct  2 09:03:23 np0005466030 nova_compute[230518]: 2025-10-02 13:03:23.808 2 DEBUG nova.compute.manager [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:03:23 np0005466030 podman[298825]: 2025-10-02 13:03:23.818119172 +0000 UTC m=+0.067626233 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3)
Oct  2 09:03:23 np0005466030 podman[298826]: 2025-10-02 13:03:23.829121354 +0000 UTC m=+0.071504283 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:24 np0005466030 kernel: tapb0acc3a3-80 (unregistering): left promiscuous mode
Oct  2 09:03:24 np0005466030 NetworkManager[44960]: <info>  [1759410204.1621] device (tapb0acc3a3-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:03:24 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:24Z|00736|binding|INFO|Releasing lport b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 from this chassis (sb_readonly=0)
Oct  2 09:03:24 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:24Z|00737|binding|INFO|Setting lport b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 down in Southbound
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:24 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:24Z|00738|binding|INFO|Removing iface tapb0acc3a3-80 ovn-installed in OVS
Oct  2 09:03:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.191 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:a1:f4 10.100.0.4'], port_security=['fa:16:3e:d4:a1:f4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7c31bb0f-22b5-42a4-9b38-8ad3daac689f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=b0acc3a3-80b3-4ec7-97e7-2e5813eb8790) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:03:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.192 138374 INFO neutron.agent.ovn.metadata.agent [-] Port b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 in datapath 052f341a-0628-4183-a5e0-76312bc986c6 unbound from our chassis#033[00m
Oct  2 09:03:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.194 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 052f341a-0628-4183-a5e0-76312bc986c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:03:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.195 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc67ddf-a4a6-4314-85a4-0648f0690986]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.196 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 namespace which is not needed anymore#033[00m
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:24 np0005466030 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000a6.scope: Deactivated successfully.
Oct  2 09:03:24 np0005466030 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000a6.scope: Consumed 19.710s CPU time.
Oct  2 09:03:24 np0005466030 systemd-machined[188247]: Machine qemu-81-instance-000000a6 terminated.
Oct  2 09:03:24 np0005466030 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[295686]: [NOTICE]   (295710) : haproxy version is 2.8.14-c23fe91
Oct  2 09:03:24 np0005466030 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[295686]: [NOTICE]   (295710) : path to executable is /usr/sbin/haproxy
Oct  2 09:03:24 np0005466030 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[295686]: [WARNING]  (295710) : Exiting Master process...
Oct  2 09:03:24 np0005466030 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[295686]: [WARNING]  (295710) : Exiting Master process...
Oct  2 09:03:24 np0005466030 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[295686]: [ALERT]    (295710) : Current worker (295714) exited with code 143 (Terminated)
Oct  2 09:03:24 np0005466030 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[295686]: [WARNING]  (295710) : All workers exited. Exiting... (0)
Oct  2 09:03:24 np0005466030 systemd[1]: libpod-559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613.scope: Deactivated successfully.
Oct  2 09:03:24 np0005466030 conmon[295686]: conmon 559ac6788ba0e6dd3f8a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613.scope/container/memory.events
Oct  2 09:03:24 np0005466030 podman[298886]: 2025-10-02 13:03:24.390657602 +0000 UTC m=+0.076233900 container died 559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.396 2 DEBUG nova.compute.manager [req-fd47da0f-e02b-45a9-ad49-892b70b45318 req-9eee28de-b579-4042-898e-b599b6100a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Received event network-vif-unplugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.396 2 DEBUG oslo_concurrency.lockutils [req-fd47da0f-e02b-45a9-ad49-892b70b45318 req-9eee28de-b579-4042-898e-b599b6100a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.397 2 DEBUG oslo_concurrency.lockutils [req-fd47da0f-e02b-45a9-ad49-892b70b45318 req-9eee28de-b579-4042-898e-b599b6100a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.397 2 DEBUG oslo_concurrency.lockutils [req-fd47da0f-e02b-45a9-ad49-892b70b45318 req-9eee28de-b579-4042-898e-b599b6100a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.397 2 DEBUG nova.compute.manager [req-fd47da0f-e02b-45a9-ad49-892b70b45318 req-9eee28de-b579-4042-898e-b599b6100a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] No waiting events found dispatching network-vif-unplugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.397 2 DEBUG nova.compute.manager [req-fd47da0f-e02b-45a9-ad49-892b70b45318 req-9eee28de-b579-4042-898e-b599b6100a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Received event network-vif-unplugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.449 2 INFO nova.virt.libvirt.driver [-] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Instance destroyed successfully.#033[00m
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.449 2 DEBUG nova.objects.instance [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'resources' on Instance uuid 7c31bb0f-22b5-42a4-9b38-8ad3daac689f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:03:24 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613-userdata-shm.mount: Deactivated successfully.
Oct  2 09:03:24 np0005466030 systemd[1]: var-lib-containers-storage-overlay-e09e09a9c189154a28ffa2be38e9a5e659937380e00a8af389eda1472e8aeea1-merged.mount: Deactivated successfully.
Oct  2 09:03:24 np0005466030 podman[298886]: 2025-10-02 13:03:24.466447886 +0000 UTC m=+0.152024164 container cleanup 559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:03:24 np0005466030 systemd[1]: libpod-conmon-559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613.scope: Deactivated successfully.
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.482 2 DEBUG nova.virt.libvirt.vif [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:00:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-202037004',display_name='tempest-₡-202037004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest--202037004',id=166,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:00:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-cvelnv0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:00:38Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=7c31bb0f-22b5-42a4-9b38-8ad3daac689f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.484 2 DEBUG nova.network.os_vif_util [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.485 2 DEBUG nova.network.os_vif_util [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d4:a1:f4,bridge_name='br-int',has_traffic_filtering=True,id=b0acc3a3-80b3-4ec7-97e7-2e5813eb8790,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0acc3a3-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.486 2 DEBUG os_vif [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:a1:f4,bridge_name='br-int',has_traffic_filtering=True,id=b0acc3a3-80b3-4ec7-97e7-2e5813eb8790,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0acc3a3-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.489 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0acc3a3-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.497 2 INFO os_vif [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:a1:f4,bridge_name='br-int',has_traffic_filtering=True,id=b0acc3a3-80b3-4ec7-97e7-2e5813eb8790,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0acc3a3-80')#033[00m
Oct  2 09:03:24 np0005466030 podman[298927]: 2025-10-02 13:03:24.553091158 +0000 UTC m=+0.063488653 container remove 559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 09:03:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.560 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b02c0d10-b57d-46ba-a506-8480b673b5aa]: (4, ('Thu Oct  2 01:03:24 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 (559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613)\n559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613\nThu Oct  2 01:03:24 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 (559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613)\n559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.562 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d282ce24-7a84-443b-ac2f-81541d9f041a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.562 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052f341a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:24 np0005466030 kernel: tap052f341a-00: left promiscuous mode
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.584 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[31ebbbd3-e2e3-4a3b-abe8-59f02063d79d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.619 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[43719249-31a8-4720-ade9-fab565a5f72f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.621 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7f1ad831-c604-4441-aa99-8482097dfdc5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.635 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c38bb953-cc78-44dd-800a-70ff1088c7f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 790105, 'reachable_time': 25582, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298958, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.637 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:03:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.637 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[22da72ca-4eeb-4a7e-8849-3c0c596b202a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:24 np0005466030 systemd[1]: run-netns-ovnmeta\x2d052f341a\x2d0628\x2d4183\x2da5e0\x2d76312bc986c6.mount: Deactivated successfully.
Oct  2 09:03:24 np0005466030 nova_compute[230518]: 2025-10-02 13:03:24.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:25 np0005466030 nova_compute[230518]: 2025-10-02 13:03:25.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:25.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:25.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:25.960 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:25.960 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:25.961 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:26 np0005466030 nova_compute[230518]: 2025-10-02 13:03:26.351 2 INFO nova.virt.libvirt.driver [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Deleting instance files /var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f_del#033[00m
Oct  2 09:03:26 np0005466030 nova_compute[230518]: 2025-10-02 13:03:26.351 2 INFO nova.virt.libvirt.driver [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Deletion of /var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f_del complete#033[00m
Oct  2 09:03:26 np0005466030 nova_compute[230518]: 2025-10-02 13:03:26.400 2 INFO nova.compute.manager [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Took 2.59 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:03:26 np0005466030 nova_compute[230518]: 2025-10-02 13:03:26.400 2 DEBUG oslo.service.loopingcall [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:03:26 np0005466030 nova_compute[230518]: 2025-10-02 13:03:26.401 2 DEBUG nova.compute.manager [-] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:03:26 np0005466030 nova_compute[230518]: 2025-10-02 13:03:26.401 2 DEBUG nova.network.neutron [-] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:03:26 np0005466030 nova_compute[230518]: 2025-10-02 13:03:26.511 2 DEBUG nova.compute.manager [req-8f6fd5b1-24eb-41ec-9b29-0d21b16ccb52 req-0822f777-552c-4f08-9386-6326cf6a40dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Received event network-vif-plugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:26 np0005466030 nova_compute[230518]: 2025-10-02 13:03:26.512 2 DEBUG oslo_concurrency.lockutils [req-8f6fd5b1-24eb-41ec-9b29-0d21b16ccb52 req-0822f777-552c-4f08-9386-6326cf6a40dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:26 np0005466030 nova_compute[230518]: 2025-10-02 13:03:26.512 2 DEBUG oslo_concurrency.lockutils [req-8f6fd5b1-24eb-41ec-9b29-0d21b16ccb52 req-0822f777-552c-4f08-9386-6326cf6a40dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:26 np0005466030 nova_compute[230518]: 2025-10-02 13:03:26.512 2 DEBUG oslo_concurrency.lockutils [req-8f6fd5b1-24eb-41ec-9b29-0d21b16ccb52 req-0822f777-552c-4f08-9386-6326cf6a40dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:26 np0005466030 nova_compute[230518]: 2025-10-02 13:03:26.512 2 DEBUG nova.compute.manager [req-8f6fd5b1-24eb-41ec-9b29-0d21b16ccb52 req-0822f777-552c-4f08-9386-6326cf6a40dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] No waiting events found dispatching network-vif-plugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:03:26 np0005466030 nova_compute[230518]: 2025-10-02 13:03:26.512 2 WARNING nova.compute.manager [req-8f6fd5b1-24eb-41ec-9b29-0d21b16ccb52 req-0822f777-552c-4f08-9386-6326cf6a40dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Received unexpected event network-vif-plugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:03:26 np0005466030 nova_compute[230518]: 2025-10-02 13:03:26.678 2 INFO nova.compute.manager [None req-95dcdc4b-205b-47af-bdcf-e5c2de5460c2 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Get console output#033[00m
Oct  2 09:03:26 np0005466030 nova_compute[230518]: 2025-10-02 13:03:26.684 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:03:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:03:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:27.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:03:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:03:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:27.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:03:28 np0005466030 nova_compute[230518]: 2025-10-02 13:03:28.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:03:28 np0005466030 nova_compute[230518]: 2025-10-02 13:03:28.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:03:28 np0005466030 nova_compute[230518]: 2025-10-02 13:03:28.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:03:28 np0005466030 nova_compute[230518]: 2025-10-02 13:03:28.182 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  2 09:03:28 np0005466030 nova_compute[230518]: 2025-10-02 13:03:28.276 2 DEBUG nova.network.neutron [-] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:03:28 np0005466030 nova_compute[230518]: 2025-10-02 13:03:28.297 2 INFO nova.compute.manager [-] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Took 1.90 seconds to deallocate network for instance.#033[00m
Oct  2 09:03:28 np0005466030 nova_compute[230518]: 2025-10-02 13:03:28.354 2 DEBUG oslo_concurrency.lockutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:28 np0005466030 nova_compute[230518]: 2025-10-02 13:03:28.355 2 DEBUG oslo_concurrency.lockutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:28 np0005466030 nova_compute[230518]: 2025-10-02 13:03:28.559 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:03:28 np0005466030 nova_compute[230518]: 2025-10-02 13:03:28.559 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:03:28 np0005466030 nova_compute[230518]: 2025-10-02 13:03:28.559 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:03:28 np0005466030 nova_compute[230518]: 2025-10-02 13:03:28.559 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 198c2dd4-f103-4bba-9fc3-9e41f44e465e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:03:29 np0005466030 nova_compute[230518]: 2025-10-02 13:03:29.136 2 DEBUG nova.compute.manager [req-4db31163-ef30-4189-a396-5b9c9ffe9164 req-93eed5ca-8165-4cf4-88a6-9b1c7d3cc916 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Received event network-vif-deleted-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:29 np0005466030 nova_compute[230518]: 2025-10-02 13:03:29.137 2 DEBUG nova.compute.manager [req-4db31163-ef30-4189-a396-5b9c9ffe9164 req-93eed5ca-8165-4cf4-88a6-9b1c7d3cc916 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:29 np0005466030 nova_compute[230518]: 2025-10-02 13:03:29.137 2 DEBUG nova.compute.manager [req-4db31163-ef30-4189-a396-5b9c9ffe9164 req-93eed5ca-8165-4cf4-88a6-9b1c7d3cc916 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing instance network info cache due to event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:03:29 np0005466030 nova_compute[230518]: 2025-10-02 13:03:29.137 2 DEBUG oslo_concurrency.lockutils [req-4db31163-ef30-4189-a396-5b9c9ffe9164 req-93eed5ca-8165-4cf4-88a6-9b1c7d3cc916 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:03:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:29.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:29 np0005466030 nova_compute[230518]: 2025-10-02 13:03:29.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:29 np0005466030 nova_compute[230518]: 2025-10-02 13:03:29.502 2 DEBUG nova.compute.manager [req-86c4e679-37be-44be-801d-820bf5a64d35 req-2067eecb-33bc-40a3-a028-3b6be516921d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-unplugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:29 np0005466030 nova_compute[230518]: 2025-10-02 13:03:29.503 2 DEBUG oslo_concurrency.lockutils [req-86c4e679-37be-44be-801d-820bf5a64d35 req-2067eecb-33bc-40a3-a028-3b6be516921d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:29 np0005466030 nova_compute[230518]: 2025-10-02 13:03:29.503 2 DEBUG oslo_concurrency.lockutils [req-86c4e679-37be-44be-801d-820bf5a64d35 req-2067eecb-33bc-40a3-a028-3b6be516921d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:29 np0005466030 nova_compute[230518]: 2025-10-02 13:03:29.503 2 DEBUG oslo_concurrency.lockutils [req-86c4e679-37be-44be-801d-820bf5a64d35 req-2067eecb-33bc-40a3-a028-3b6be516921d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:29 np0005466030 nova_compute[230518]: 2025-10-02 13:03:29.503 2 DEBUG nova.compute.manager [req-86c4e679-37be-44be-801d-820bf5a64d35 req-2067eecb-33bc-40a3-a028-3b6be516921d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] No waiting events found dispatching network-vif-unplugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:03:29 np0005466030 nova_compute[230518]: 2025-10-02 13:03:29.503 2 WARNING nova.compute.manager [req-86c4e679-37be-44be-801d-820bf5a64d35 req-2067eecb-33bc-40a3-a028-3b6be516921d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received unexpected event network-vif-unplugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:03:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:29.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:29 np0005466030 nova_compute[230518]: 2025-10-02 13:03:29.732 2 INFO nova.compute.manager [None req-44a3b4f0-358e-4091-b416-2cb8ee85a85d 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Get console output#033[00m
Oct  2 09:03:29 np0005466030 nova_compute[230518]: 2025-10-02 13:03:29.738 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:03:29 np0005466030 nova_compute[230518]: 2025-10-02 13:03:29.760 2 DEBUG oslo_concurrency.processutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:03:29 np0005466030 nova_compute[230518]: 2025-10-02 13:03:29.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:03:30 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1405131134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:03:30 np0005466030 nova_compute[230518]: 2025-10-02 13:03:30.190 2 DEBUG oslo_concurrency.processutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:03:30 np0005466030 nova_compute[230518]: 2025-10-02 13:03:30.197 2 DEBUG nova.compute.provider_tree [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:03:30 np0005466030 nova_compute[230518]: 2025-10-02 13:03:30.215 2 DEBUG nova.scheduler.client.report [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:03:30 np0005466030 nova_compute[230518]: 2025-10-02 13:03:30.246 2 DEBUG oslo_concurrency.lockutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:30 np0005466030 nova_compute[230518]: 2025-10-02 13:03:30.281 2 INFO nova.scheduler.client.report [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Deleted allocations for instance 7c31bb0f-22b5-42a4-9b38-8ad3daac689f#033[00m
Oct  2 09:03:30 np0005466030 nova_compute[230518]: 2025-10-02 13:03:30.414 2 DEBUG oslo_concurrency.lockutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:30 np0005466030 nova_compute[230518]: 2025-10-02 13:03:30.549 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updating instance_info_cache with network_info: [{"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:03:30 np0005466030 nova_compute[230518]: 2025-10-02 13:03:30.562 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:03:30 np0005466030 nova_compute[230518]: 2025-10-02 13:03:30.563 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:03:30 np0005466030 nova_compute[230518]: 2025-10-02 13:03:30.563 2 DEBUG oslo_concurrency.lockutils [req-4db31163-ef30-4189-a396-5b9c9ffe9164 req-93eed5ca-8165-4cf4-88a6-9b1c7d3cc916 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:03:30 np0005466030 nova_compute[230518]: 2025-10-02 13:03:30.564 2 DEBUG nova.network.neutron [req-4db31163-ef30-4189-a396-5b9c9ffe9164 req-93eed5ca-8165-4cf4-88a6-9b1c7d3cc916 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:03:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:31.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:31.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:31 np0005466030 nova_compute[230518]: 2025-10-02 13:03:31.663 2 DEBUG nova.compute.manager [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:31 np0005466030 nova_compute[230518]: 2025-10-02 13:03:31.664 2 DEBUG oslo_concurrency.lockutils [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:31 np0005466030 nova_compute[230518]: 2025-10-02 13:03:31.664 2 DEBUG oslo_concurrency.lockutils [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:31 np0005466030 nova_compute[230518]: 2025-10-02 13:03:31.664 2 DEBUG oslo_concurrency.lockutils [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:31 np0005466030 nova_compute[230518]: 2025-10-02 13:03:31.664 2 DEBUG nova.compute.manager [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] No waiting events found dispatching network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:03:31 np0005466030 nova_compute[230518]: 2025-10-02 13:03:31.664 2 WARNING nova.compute.manager [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received unexpected event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:03:31 np0005466030 nova_compute[230518]: 2025-10-02 13:03:31.664 2 DEBUG nova.compute.manager [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:31 np0005466030 nova_compute[230518]: 2025-10-02 13:03:31.665 2 DEBUG nova.compute.manager [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing instance network info cache due to event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:03:31 np0005466030 nova_compute[230518]: 2025-10-02 13:03:31.665 2 DEBUG oslo_concurrency.lockutils [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:03:31 np0005466030 nova_compute[230518]: 2025-10-02 13:03:31.926 2 INFO nova.compute.manager [None req-75f385ad-f632-411b-9f94-bbbb8d8fc619 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Get console output#033[00m
Oct  2 09:03:31 np0005466030 nova_compute[230518]: 2025-10-02 13:03:31.932 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  2 09:03:32 np0005466030 nova_compute[230518]: 2025-10-02 13:03:32.290 2 DEBUG nova.network.neutron [req-4db31163-ef30-4189-a396-5b9c9ffe9164 req-93eed5ca-8165-4cf4-88a6-9b1c7d3cc916 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updated VIF entry in instance network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:03:32 np0005466030 nova_compute[230518]: 2025-10-02 13:03:32.291 2 DEBUG nova.network.neutron [req-4db31163-ef30-4189-a396-5b9c9ffe9164 req-93eed5ca-8165-4cf4-88a6-9b1c7d3cc916 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updating instance_info_cache with network_info: [{"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:03:32 np0005466030 nova_compute[230518]: 2025-10-02 13:03:32.321 2 DEBUG oslo_concurrency.lockutils [req-4db31163-ef30-4189-a396-5b9c9ffe9164 req-93eed5ca-8165-4cf4-88a6-9b1c7d3cc916 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:03:32 np0005466030 nova_compute[230518]: 2025-10-02 13:03:32.322 2 DEBUG oslo_concurrency.lockutils [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:03:32 np0005466030 nova_compute[230518]: 2025-10-02 13:03:32.322 2 DEBUG nova.network.neutron [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:03:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:03:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:33.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:03:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:33.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.735 2 DEBUG nova.compute.manager [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.736 2 DEBUG oslo_concurrency.lockutils [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.736 2 DEBUG oslo_concurrency.lockutils [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.736 2 DEBUG oslo_concurrency.lockutils [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.736 2 DEBUG nova.compute.manager [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] No waiting events found dispatching network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.737 2 WARNING nova.compute.manager [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received unexpected event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.737 2 DEBUG nova.compute.manager [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.737 2 DEBUG oslo_concurrency.lockutils [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.737 2 DEBUG oslo_concurrency.lockutils [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.737 2 DEBUG oslo_concurrency.lockutils [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.738 2 DEBUG nova.compute.manager [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] No waiting events found dispatching network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.738 2 WARNING nova.compute.manager [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received unexpected event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.906 2 DEBUG nova.compute.manager [req-2ea4346f-62e2-41b7-a1ee-a79d2e0e1027 req-784dfdff-8b4a-4097-9b79-cb3541058ae3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received event network-changed-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.907 2 DEBUG nova.compute.manager [req-2ea4346f-62e2-41b7-a1ee-a79d2e0e1027 req-784dfdff-8b4a-4097-9b79-cb3541058ae3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Refreshing instance network info cache due to event network-changed-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.907 2 DEBUG oslo_concurrency.lockutils [req-2ea4346f-62e2-41b7-a1ee-a79d2e0e1027 req-784dfdff-8b4a-4097-9b79-cb3541058ae3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.908 2 DEBUG oslo_concurrency.lockutils [req-2ea4346f-62e2-41b7-a1ee-a79d2e0e1027 req-784dfdff-8b4a-4097-9b79-cb3541058ae3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.908 2 DEBUG nova.network.neutron [req-2ea4346f-62e2-41b7-a1ee-a79d2e0e1027 req-784dfdff-8b4a-4097-9b79-cb3541058ae3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Refreshing network info cache for port 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.970 2 DEBUG oslo_concurrency.lockutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.971 2 DEBUG oslo_concurrency.lockutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.972 2 DEBUG oslo_concurrency.lockutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.972 2 DEBUG oslo_concurrency.lockutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.973 2 DEBUG oslo_concurrency.lockutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.974 2 INFO nova.compute.manager [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Terminating instance#033[00m
Oct  2 09:03:33 np0005466030 nova_compute[230518]: 2025-10-02 13:03:33.976 2 DEBUG nova.compute.manager [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:03:34 np0005466030 kernel: tap4fcd0b0b-1e (unregistering): left promiscuous mode
Oct  2 09:03:34 np0005466030 NetworkManager[44960]: <info>  [1759410214.0404] device (tap4fcd0b0b-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:03:34 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:34Z|00739|binding|INFO|Releasing lport 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 from this chassis (sb_readonly=0)
Oct  2 09:03:34 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:34Z|00740|binding|INFO|Setting lport 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 down in Southbound
Oct  2 09:03:34 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:34Z|00741|binding|INFO|Removing iface tap4fcd0b0b-1e ovn-installed in OVS
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.072 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ac:86 10.100.0.4'], port_security=['fa:16:3e:30:ac:86 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': '15970012-f057-462f-9dfb-1daddc0bd092', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66b7c563-3269-46d5-8080-eff6b31dc260, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:03:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.073 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 in datapath 0d2f6793-3f74-40a0-b15c-09282dcbf27c unbound from our chassis#033[00m
Oct  2 09:03:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.075 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0d2f6793-3f74-40a0-b15c-09282dcbf27c#033[00m
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.097 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe33712-4c72-4eb0-916d-4f52a4653d09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:34 np0005466030 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b3.scope: Deactivated successfully.
Oct  2 09:03:34 np0005466030 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b3.scope: Consumed 13.955s CPU time.
Oct  2 09:03:34 np0005466030 systemd-machined[188247]: Machine qemu-85-instance-000000b3 terminated.
Oct  2 09:03:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.135 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5c0f03bf-9d77-420e-b345-aa80e3526e56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.140 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[15c9f1fc-9e1e-4773-a5be-929d3ff3d32c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.174 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[34b0fbd5-d256-483f-9072-f2f396cef962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.198 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f853f182-dbce-4943-9bb2-d95b94ee9a2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d2f6793-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:aa:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 802787, 'reachable_time': 16255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298994, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.216 2 INFO nova.virt.libvirt.driver [-] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Instance destroyed successfully.#033[00m
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.217 2 DEBUG nova.objects.instance [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:03:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.231 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c75080f5-ab36-4552-aa70-63cb7b1df370]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0d2f6793-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 802797, 'tstamp': 802797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299001, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0d2f6793-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 802801, 'tstamp': 802801}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299001, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.233 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d2f6793-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.238 2 DEBUG nova.virt.libvirt.vif [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:02:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1086725747',display_name='tempest-TestNetworkBasicOps-server-1086725747',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1086725747',id=179,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPBABVkZI5Kx2o3IBmNelxKPrpcXX1o46OX/ra3kYdzmZFj/cCMhJ1511ulGrJ3qwtAcfGfzsPlSIVbMP2imMAvPUtwUpeHp534Qlat71VA1CohVAjbm/2X4YYdTo5vxIw==',key_name='tempest-TestNetworkBasicOps-1630436212',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:03:07Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-iilx3u08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:03:07Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.239 2 DEBUG nova.network.os_vif_util [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:03:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.240 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d2f6793-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.241 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:03:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.241 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0d2f6793-30, col_values=(('external_ids', {'iface-id': '0dfea1be-4d56-45ad-8b1f-483fdf57471e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.242 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.242 2 DEBUG nova.network.os_vif_util [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:ac:86,bridge_name='br-int',has_traffic_filtering=True,id=4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fcd0b0b-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.243 2 DEBUG os_vif [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:ac:86,bridge_name='br-int',has_traffic_filtering=True,id=4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fcd0b0b-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.247 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fcd0b0b-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.254 2 INFO os_vif [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:ac:86,bridge_name='br-int',has_traffic_filtering=True,id=4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fcd0b0b-1e')#033[00m
Oct  2 09:03:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.735 2 DEBUG nova.network.neutron [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updated VIF entry in instance network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.736 2 DEBUG nova.network.neutron [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updating instance_info_cache with network_info: [{"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.756 2 DEBUG oslo_concurrency.lockutils [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:03:34 np0005466030 nova_compute[230518]: 2025-10-02 13:03:34.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:35.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:35.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:35 np0005466030 nova_compute[230518]: 2025-10-02 13:03:35.703 2 INFO nova.virt.libvirt.driver [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Deleting instance files /var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_del#033[00m
Oct  2 09:03:35 np0005466030 nova_compute[230518]: 2025-10-02 13:03:35.704 2 INFO nova.virt.libvirt.driver [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Deletion of /var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_del complete#033[00m
Oct  2 09:03:35 np0005466030 nova_compute[230518]: 2025-10-02 13:03:35.765 2 INFO nova.compute.manager [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Took 1.79 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:03:35 np0005466030 nova_compute[230518]: 2025-10-02 13:03:35.766 2 DEBUG oslo.service.loopingcall [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:03:35 np0005466030 nova_compute[230518]: 2025-10-02 13:03:35.767 2 DEBUG nova.compute.manager [-] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:03:35 np0005466030 nova_compute[230518]: 2025-10-02 13:03:35.767 2 DEBUG nova.network.neutron [-] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:03:36 np0005466030 nova_compute[230518]: 2025-10-02 13:03:36.231 2 DEBUG nova.compute.manager [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received event network-vif-unplugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:36 np0005466030 nova_compute[230518]: 2025-10-02 13:03:36.232 2 DEBUG oslo_concurrency.lockutils [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:36 np0005466030 nova_compute[230518]: 2025-10-02 13:03:36.233 2 DEBUG oslo_concurrency.lockutils [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:36 np0005466030 nova_compute[230518]: 2025-10-02 13:03:36.233 2 DEBUG oslo_concurrency.lockutils [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:36 np0005466030 nova_compute[230518]: 2025-10-02 13:03:36.233 2 DEBUG nova.compute.manager [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] No waiting events found dispatching network-vif-unplugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:03:36 np0005466030 nova_compute[230518]: 2025-10-02 13:03:36.234 2 DEBUG nova.compute.manager [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received event network-vif-unplugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:03:36 np0005466030 nova_compute[230518]: 2025-10-02 13:03:36.234 2 DEBUG nova.compute.manager [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received event network-vif-plugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:36 np0005466030 nova_compute[230518]: 2025-10-02 13:03:36.235 2 DEBUG oslo_concurrency.lockutils [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:36 np0005466030 nova_compute[230518]: 2025-10-02 13:03:36.235 2 DEBUG oslo_concurrency.lockutils [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:36 np0005466030 nova_compute[230518]: 2025-10-02 13:03:36.236 2 DEBUG oslo_concurrency.lockutils [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:36 np0005466030 nova_compute[230518]: 2025-10-02 13:03:36.236 2 DEBUG nova.compute.manager [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] No waiting events found dispatching network-vif-plugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:03:36 np0005466030 nova_compute[230518]: 2025-10-02 13:03:36.236 2 WARNING nova.compute.manager [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received unexpected event network-vif-plugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:03:36 np0005466030 nova_compute[230518]: 2025-10-02 13:03:36.407 2 DEBUG nova.network.neutron [req-2ea4346f-62e2-41b7-a1ee-a79d2e0e1027 req-784dfdff-8b4a-4097-9b79-cb3541058ae3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Updated VIF entry in instance network info cache for port 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:03:36 np0005466030 nova_compute[230518]: 2025-10-02 13:03:36.408 2 DEBUG nova.network.neutron [req-2ea4346f-62e2-41b7-a1ee-a79d2e0e1027 req-784dfdff-8b4a-4097-9b79-cb3541058ae3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Updating instance_info_cache with network_info: [{"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:03:36 np0005466030 nova_compute[230518]: 2025-10-02 13:03:36.432 2 DEBUG oslo_concurrency.lockutils [req-2ea4346f-62e2-41b7-a1ee-a79d2e0e1027 req-784dfdff-8b4a-4097-9b79-cb3541058ae3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:03:36 np0005466030 nova_compute[230518]: 2025-10-02 13:03:36.874 2 DEBUG nova.network.neutron [-] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:03:36 np0005466030 nova_compute[230518]: 2025-10-02 13:03:36.902 2 INFO nova.compute.manager [-] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Took 1.14 seconds to deallocate network for instance.#033[00m
Oct  2 09:03:36 np0005466030 nova_compute[230518]: 2025-10-02 13:03:36.978 2 DEBUG oslo_concurrency.lockutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:36 np0005466030 nova_compute[230518]: 2025-10-02 13:03:36.978 2 DEBUG oslo_concurrency.lockutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:36 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:36Z|00742|binding|INFO|Releasing lport 0dfea1be-4d56-45ad-8b1f-483fdf57471e from this chassis (sb_readonly=0)
Oct  2 09:03:37 np0005466030 nova_compute[230518]: 2025-10-02 13:03:37.030 2 DEBUG nova.compute.manager [req-9de1695c-b5e5-4b3f-9393-3e5a8613661c req-eba9d196-0d5d-4142-bd8c-c066f9813f9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received event network-vif-deleted-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:37 np0005466030 nova_compute[230518]: 2025-10-02 13:03:37.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:37 np0005466030 nova_compute[230518]: 2025-10-02 13:03:37.139 2 DEBUG oslo_concurrency.processutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:03:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:37.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:37.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:03:37 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4018230026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:03:37 np0005466030 nova_compute[230518]: 2025-10-02 13:03:37.590 2 DEBUG oslo_concurrency.processutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:03:37 np0005466030 nova_compute[230518]: 2025-10-02 13:03:37.597 2 DEBUG nova.compute.provider_tree [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:03:37 np0005466030 nova_compute[230518]: 2025-10-02 13:03:37.616 2 DEBUG nova.scheduler.client.report [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:03:37 np0005466030 nova_compute[230518]: 2025-10-02 13:03:37.635 2 DEBUG oslo_concurrency.lockutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:37 np0005466030 nova_compute[230518]: 2025-10-02 13:03:37.785 2 INFO nova.scheduler.client.report [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13#033[00m
Oct  2 09:03:37 np0005466030 nova_compute[230518]: 2025-10-02 13:03:37.881 2 DEBUG oslo_concurrency.lockutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:39 np0005466030 nova_compute[230518]: 2025-10-02 13:03:39.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:39.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:39 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:39Z|00743|binding|INFO|Releasing lport 0dfea1be-4d56-45ad-8b1f-483fdf57471e from this chassis (sb_readonly=0)
Oct  2 09:03:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:39 np0005466030 nova_compute[230518]: 2025-10-02 13:03:39.448 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410204.445765, 7c31bb0f-22b5-42a4-9b38-8ad3daac689f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:03:39 np0005466030 nova_compute[230518]: 2025-10-02 13:03:39.449 2 INFO nova.compute.manager [-] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:03:39 np0005466030 nova_compute[230518]: 2025-10-02 13:03:39.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:39 np0005466030 nova_compute[230518]: 2025-10-02 13:03:39.472 2 DEBUG nova.compute.manager [None req-292c8a2a-4f4a-4977-85c7-1033d0dac31d - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:03:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:39.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:39 np0005466030 nova_compute[230518]: 2025-10-02 13:03:39.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.565 2 DEBUG nova.compute.manager [req-45e6e3c7-1167-4e84-b12f-70b98ce2ce6a req-c25c0521-b51f-4c76-8f12-99141b4f7ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.565 2 DEBUG nova.compute.manager [req-45e6e3c7-1167-4e84-b12f-70b98ce2ce6a req-c25c0521-b51f-4c76-8f12-99141b4f7ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing instance network info cache due to event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.566 2 DEBUG oslo_concurrency.lockutils [req-45e6e3c7-1167-4e84-b12f-70b98ce2ce6a req-c25c0521-b51f-4c76-8f12-99141b4f7ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.566 2 DEBUG oslo_concurrency.lockutils [req-45e6e3c7-1167-4e84-b12f-70b98ce2ce6a req-c25c0521-b51f-4c76-8f12-99141b4f7ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.566 2 DEBUG nova.network.neutron [req-45e6e3c7-1167-4e84-b12f-70b98ce2ce6a req-c25c0521-b51f-4c76-8f12-99141b4f7ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.643 2 DEBUG oslo_concurrency.lockutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.644 2 DEBUG oslo_concurrency.lockutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.644 2 DEBUG oslo_concurrency.lockutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.644 2 DEBUG oslo_concurrency.lockutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.645 2 DEBUG oslo_concurrency.lockutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.646 2 INFO nova.compute.manager [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Terminating instance#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.648 2 DEBUG nova.compute.manager [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:03:40 np0005466030 kernel: tap075c87dd-2b (unregistering): left promiscuous mode
Oct  2 09:03:40 np0005466030 NetworkManager[44960]: <info>  [1759410220.7182] device (tap075c87dd-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:03:40 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:40Z|00744|binding|INFO|Releasing lport 075c87dd-2b98-4364-9955-b21fcbcd5b47 from this chassis (sb_readonly=0)
Oct  2 09:03:40 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:40Z|00745|binding|INFO|Setting lport 075c87dd-2b98-4364-9955-b21fcbcd5b47 down in Southbound
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:40 np0005466030 ovn_controller[129257]: 2025-10-02T13:03:40Z|00746|binding|INFO|Removing iface tap075c87dd-2b ovn-installed in OVS
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:40 np0005466030 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b0.scope: Deactivated successfully.
Oct  2 09:03:40 np0005466030 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b0.scope: Consumed 15.452s CPU time.
Oct  2 09:03:40 np0005466030 systemd-machined[188247]: Machine qemu-84-instance-000000b0 terminated.
Oct  2 09:03:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:40.822 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:79:9d 10.100.0.10'], port_security=['fa:16:3e:d9:79:9d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '198c2dd4-f103-4bba-9fc3-9e41f44e465e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '8', 'neutron:security_group_ids': '9eb88c13-ce55-413a-bc29-2cb1397ffc60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66b7c563-3269-46d5-8080-eff6b31dc260, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=075c87dd-2b98-4364-9955-b21fcbcd5b47) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:03:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:40.823 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 075c87dd-2b98-4364-9955-b21fcbcd5b47 in datapath 0d2f6793-3f74-40a0-b15c-09282dcbf27c unbound from our chassis#033[00m
Oct  2 09:03:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:40.824 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0d2f6793-3f74-40a0-b15c-09282dcbf27c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:03:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:40.825 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2143054a-11af-4823-adf0-acaec64fca88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:40.826 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c namespace which is not needed anymore#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.880 2 INFO nova.virt.libvirt.driver [-] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Instance destroyed successfully.#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.881 2 DEBUG nova.objects.instance [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid 198c2dd4-f103-4bba-9fc3-9e41f44e465e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.900 2 DEBUG nova.virt.libvirt.vif [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:02:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-511078647',display_name='tempest-TestNetworkBasicOps-server-511078647',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-511078647',id=176,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJx6+OnWya4TKWI602K7FJTy0vvTR15qcn2a79LYqYLs4i+5cL4NrJf7MAy0xx98Y2Lu4xFova8uQh2TX9Sp+hRCxqeORgezwsMfN18SQyhFQii2RX1Yt01r5EbD581/cA==',key_name='tempest-TestNetworkBasicOps-68926903',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:02:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-a1p0qs88',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:02:45Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=198c2dd4-f103-4bba-9fc3-9e41f44e465e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.901 2 DEBUG nova.network.os_vif_util [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.902 2 DEBUG nova.network.os_vif_util [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d9:79:9d,bridge_name='br-int',has_traffic_filtering=True,id=075c87dd-2b98-4364-9955-b21fcbcd5b47,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap075c87dd-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.903 2 DEBUG os_vif [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:79:9d,bridge_name='br-int',has_traffic_filtering=True,id=075c87dd-2b98-4364-9955-b21fcbcd5b47,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap075c87dd-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.906 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap075c87dd-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:03:40 np0005466030 nova_compute[230518]: 2025-10-02 13:03:40.913 2 INFO os_vif [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:79:9d,bridge_name='br-int',has_traffic_filtering=True,id=075c87dd-2b98-4364-9955-b21fcbcd5b47,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap075c87dd-2b')#033[00m
Oct  2 09:03:41 np0005466030 neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c[298249]: [NOTICE]   (298253) : haproxy version is 2.8.14-c23fe91
Oct  2 09:03:41 np0005466030 neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c[298249]: [NOTICE]   (298253) : path to executable is /usr/sbin/haproxy
Oct  2 09:03:41 np0005466030 neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c[298249]: [WARNING]  (298253) : Exiting Master process...
Oct  2 09:03:41 np0005466030 neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c[298249]: [ALERT]    (298253) : Current worker (298255) exited with code 143 (Terminated)
Oct  2 09:03:41 np0005466030 neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c[298249]: [WARNING]  (298253) : All workers exited. Exiting... (0)
Oct  2 09:03:41 np0005466030 systemd[1]: libpod-dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b.scope: Deactivated successfully.
Oct  2 09:03:41 np0005466030 conmon[298249]: conmon dc7f32dc3d576e8fd343 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b.scope/container/memory.events
Oct  2 09:03:41 np0005466030 podman[299084]: 2025-10-02 13:03:41.042288167 +0000 UTC m=+0.118682367 container died dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:03:41 np0005466030 nova_compute[230518]: 2025-10-02 13:03:41.176 2 DEBUG nova.compute.manager [req-1a5710bc-ab11-411a-bc62-527f37820c55 req-6cf92757-7ef8-449d-9b05-26beab6bc6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-unplugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:41 np0005466030 nova_compute[230518]: 2025-10-02 13:03:41.177 2 DEBUG oslo_concurrency.lockutils [req-1a5710bc-ab11-411a-bc62-527f37820c55 req-6cf92757-7ef8-449d-9b05-26beab6bc6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:41 np0005466030 nova_compute[230518]: 2025-10-02 13:03:41.178 2 DEBUG oslo_concurrency.lockutils [req-1a5710bc-ab11-411a-bc62-527f37820c55 req-6cf92757-7ef8-449d-9b05-26beab6bc6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:41 np0005466030 nova_compute[230518]: 2025-10-02 13:03:41.178 2 DEBUG oslo_concurrency.lockutils [req-1a5710bc-ab11-411a-bc62-527f37820c55 req-6cf92757-7ef8-449d-9b05-26beab6bc6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:41 np0005466030 nova_compute[230518]: 2025-10-02 13:03:41.179 2 DEBUG nova.compute.manager [req-1a5710bc-ab11-411a-bc62-527f37820c55 req-6cf92757-7ef8-449d-9b05-26beab6bc6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] No waiting events found dispatching network-vif-unplugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:03:41 np0005466030 nova_compute[230518]: 2025-10-02 13:03:41.180 2 DEBUG nova.compute.manager [req-1a5710bc-ab11-411a-bc62-527f37820c55 req-6cf92757-7ef8-449d-9b05-26beab6bc6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-unplugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:03:41 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b-userdata-shm.mount: Deactivated successfully.
Oct  2 09:03:41 np0005466030 systemd[1]: var-lib-containers-storage-overlay-f0ed691c9efbcda2feeff6f5bd8ee2f1b7fe6192de23eebb1b94af40b0e8291e-merged.mount: Deactivated successfully.
Oct  2 09:03:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:41.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:41 np0005466030 podman[299084]: 2025-10-02 13:03:41.490849355 +0000 UTC m=+0.567243535 container cleanup dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:03:41 np0005466030 systemd[1]: libpod-conmon-dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b.scope: Deactivated successfully.
Oct  2 09:03:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:41.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:41 np0005466030 podman[299130]: 2025-10-02 13:03:41.698606231 +0000 UTC m=+0.182916754 container remove dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:03:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:41.704 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c60f82cb-1478-478c-ba74-bfa77497215d]: (4, ('Thu Oct  2 01:03:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c (dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b)\ndc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b\nThu Oct  2 01:03:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c (dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b)\ndc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:41.706 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[398b9b23-7554-4e96-b31c-1cd8554df2f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:41.706 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d2f6793-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:41 np0005466030 nova_compute[230518]: 2025-10-02 13:03:41.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:41 np0005466030 kernel: tap0d2f6793-30: left promiscuous mode
Oct  2 09:03:41 np0005466030 nova_compute[230518]: 2025-10-02 13:03:41.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:41.723 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5e3d2c-dc6b-4ead-b162-9ec2948cd943]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:41.752 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[91b88101-da18-4199-83b9-7e4ad7635b78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:41.754 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bcae6337-edd6-4415-b832-26798dd943cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:41.773 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[43ef7fd2-2a82-4150-87e0-2b7450fae15e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 802779, 'reachable_time': 27329, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299144, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:41.775 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:03:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:03:41.775 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[43d5f045-8af0-40ab-b718-0eb1eaf45e6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:03:41 np0005466030 systemd[1]: run-netns-ovnmeta\x2d0d2f6793\x2d3f74\x2d40a0\x2db15c\x2d09282dcbf27c.mount: Deactivated successfully.
Oct  2 09:03:42 np0005466030 nova_compute[230518]: 2025-10-02 13:03:42.767 2 DEBUG nova.network.neutron [req-45e6e3c7-1167-4e84-b12f-70b98ce2ce6a req-c25c0521-b51f-4c76-8f12-99141b4f7ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updated VIF entry in instance network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:03:42 np0005466030 nova_compute[230518]: 2025-10-02 13:03:42.769 2 DEBUG nova.network.neutron [req-45e6e3c7-1167-4e84-b12f-70b98ce2ce6a req-c25c0521-b51f-4c76-8f12-99141b4f7ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updating instance_info_cache with network_info: [{"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:03:42 np0005466030 nova_compute[230518]: 2025-10-02 13:03:42.793 2 DEBUG oslo_concurrency.lockutils [req-45e6e3c7-1167-4e84-b12f-70b98ce2ce6a req-c25c0521-b51f-4c76-8f12-99141b4f7ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #133. Immutable memtables: 0.
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.072094) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 133
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223072120, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 684, "num_deletes": 251, "total_data_size": 1130338, "memory_usage": 1158544, "flush_reason": "Manual Compaction"}
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #134: started
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223077017, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 134, "file_size": 745488, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65297, "largest_seqno": 65976, "table_properties": {"data_size": 742207, "index_size": 1188, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7812, "raw_average_key_size": 19, "raw_value_size": 735589, "raw_average_value_size": 1816, "num_data_blocks": 53, "num_entries": 405, "num_filter_entries": 405, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410179, "oldest_key_time": 1759410179, "file_creation_time": 1759410223, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 4953 microseconds, and 2801 cpu microseconds.
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.077045) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #134: 745488 bytes OK
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.077062) [db/memtable_list.cc:519] [default] Level-0 commit table #134 started
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.078292) [db/memtable_list.cc:722] [default] Level-0 commit table #134: memtable #1 done
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.078304) EVENT_LOG_v1 {"time_micros": 1759410223078300, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.078317) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 1126607, prev total WAL file size 1126607, number of live WAL files 2.
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000130.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.078829) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [134(728KB)], [132(11MB)]
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223078918, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [134], "files_L6": [132], "score": -1, "input_data_size": 12558564, "oldest_snapshot_seqno": -1}
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #135: 8602 keys, 10708100 bytes, temperature: kUnknown
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223204963, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 135, "file_size": 10708100, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10653069, "index_size": 32390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21573, "raw_key_size": 227720, "raw_average_key_size": 26, "raw_value_size": 10502510, "raw_average_value_size": 1220, "num_data_blocks": 1230, "num_entries": 8602, "num_filter_entries": 8602, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410223, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 135, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.205405) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 10708100 bytes
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.219589) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 99.7 rd, 85.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 11.3 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(31.2) write-amplify(14.4) OK, records in: 9112, records dropped: 510 output_compression: NoCompression
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.219634) EVENT_LOG_v1 {"time_micros": 1759410223219617, "job": 84, "event": "compaction_finished", "compaction_time_micros": 125978, "compaction_time_cpu_micros": 51665, "output_level": 6, "num_output_files": 1, "total_output_size": 10708100, "num_input_records": 9112, "num_output_records": 8602, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223220018, "job": 84, "event": "table_file_deletion", "file_number": 134}
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000132.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223222261, "job": 84, "event": "table_file_deletion", "file_number": 132}
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.078722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.222367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.222375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.222378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.222381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:03:43 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.222383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:03:43 np0005466030 nova_compute[230518]: 2025-10-02 13:03:43.263 2 INFO nova.virt.libvirt.driver [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Deleting instance files /var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e_del#033[00m
Oct  2 09:03:43 np0005466030 nova_compute[230518]: 2025-10-02 13:03:43.264 2 INFO nova.virt.libvirt.driver [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Deletion of /var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e_del complete#033[00m
Oct  2 09:03:43 np0005466030 nova_compute[230518]: 2025-10-02 13:03:43.292 2 DEBUG nova.compute.manager [req-eca166d3-51dd-4f15-b7ce-95468bbdb054 req-cec422ce-40f0-413c-9dac-1256c9c3aaa2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:43 np0005466030 nova_compute[230518]: 2025-10-02 13:03:43.292 2 DEBUG oslo_concurrency.lockutils [req-eca166d3-51dd-4f15-b7ce-95468bbdb054 req-cec422ce-40f0-413c-9dac-1256c9c3aaa2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:43 np0005466030 nova_compute[230518]: 2025-10-02 13:03:43.293 2 DEBUG oslo_concurrency.lockutils [req-eca166d3-51dd-4f15-b7ce-95468bbdb054 req-cec422ce-40f0-413c-9dac-1256c9c3aaa2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:43 np0005466030 nova_compute[230518]: 2025-10-02 13:03:43.293 2 DEBUG oslo_concurrency.lockutils [req-eca166d3-51dd-4f15-b7ce-95468bbdb054 req-cec422ce-40f0-413c-9dac-1256c9c3aaa2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:43 np0005466030 nova_compute[230518]: 2025-10-02 13:03:43.293 2 DEBUG nova.compute.manager [req-eca166d3-51dd-4f15-b7ce-95468bbdb054 req-cec422ce-40f0-413c-9dac-1256c9c3aaa2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] No waiting events found dispatching network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:03:43 np0005466030 nova_compute[230518]: 2025-10-02 13:03:43.293 2 WARNING nova.compute.manager [req-eca166d3-51dd-4f15-b7ce-95468bbdb054 req-cec422ce-40f0-413c-9dac-1256c9c3aaa2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received unexpected event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:03:43 np0005466030 nova_compute[230518]: 2025-10-02 13:03:43.317 2 INFO nova.compute.manager [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Took 2.67 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:03:43 np0005466030 nova_compute[230518]: 2025-10-02 13:03:43.318 2 DEBUG oslo.service.loopingcall [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:03:43 np0005466030 nova_compute[230518]: 2025-10-02 13:03:43.319 2 DEBUG nova.compute.manager [-] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:03:43 np0005466030 nova_compute[230518]: 2025-10-02 13:03:43.319 2 DEBUG nova.network.neutron [-] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:03:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:03:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:43.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:03:43 np0005466030 nova_compute[230518]: 2025-10-02 13:03:43.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:03:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:43.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:03:43 np0005466030 nova_compute[230518]: 2025-10-02 13:03:43.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:43 np0005466030 podman[299149]: 2025-10-02 13:03:43.816604649 +0000 UTC m=+0.064770423 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  2 09:03:43 np0005466030 podman[299148]: 2025-10-02 13:03:43.888654788 +0000 UTC m=+0.128759631 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  2 09:03:44 np0005466030 nova_compute[230518]: 2025-10-02 13:03:44.043 2 DEBUG nova.network.neutron [-] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:03:44 np0005466030 nova_compute[230518]: 2025-10-02 13:03:44.071 2 INFO nova.compute.manager [-] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Took 0.75 seconds to deallocate network for instance.#033[00m
Oct  2 09:03:44 np0005466030 nova_compute[230518]: 2025-10-02 13:03:44.100 2 DEBUG nova.compute.manager [req-adef30cb-7230-439e-9e4c-bb957dad98b7 req-ca4ac2f1-21ea-43f8-bf3a-2dbbbd2027d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-deleted-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:03:44 np0005466030 nova_compute[230518]: 2025-10-02 13:03:44.120 2 DEBUG oslo_concurrency.lockutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:44 np0005466030 nova_compute[230518]: 2025-10-02 13:03:44.121 2 DEBUG oslo_concurrency.lockutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:44 np0005466030 nova_compute[230518]: 2025-10-02 13:03:44.184 2 DEBUG oslo_concurrency.processutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:03:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:03:44 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3787610328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:03:44 np0005466030 nova_compute[230518]: 2025-10-02 13:03:44.616 2 DEBUG oslo_concurrency.processutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:03:44 np0005466030 nova_compute[230518]: 2025-10-02 13:03:44.625 2 DEBUG nova.compute.provider_tree [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:03:44 np0005466030 nova_compute[230518]: 2025-10-02 13:03:44.650 2 DEBUG nova.scheduler.client.report [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:03:44 np0005466030 nova_compute[230518]: 2025-10-02 13:03:44.676 2 DEBUG oslo_concurrency.lockutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:03:44 np0005466030 nova_compute[230518]: 2025-10-02 13:03:44.703 2 INFO nova.scheduler.client.report [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance 198c2dd4-f103-4bba-9fc3-9e41f44e465e
Oct  2 09:03:44 np0005466030 nova_compute[230518]: 2025-10-02 13:03:44.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:03:44 np0005466030 nova_compute[230518]: 2025-10-02 13:03:44.783 2 DEBUG oslo_concurrency.lockutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:03:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:45.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:45.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:45 np0005466030 nova_compute[230518]: 2025-10-02 13:03:45.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:03:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:47.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:47.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:48 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:03:48 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:03:48 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:03:49 np0005466030 nova_compute[230518]: 2025-10-02 13:03:49.214 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410214.2134264, fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:03:49 np0005466030 nova_compute[230518]: 2025-10-02 13:03:49.215 2 INFO nova.compute.manager [-] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] VM Stopped (Lifecycle Event)
Oct  2 09:03:49 np0005466030 nova_compute[230518]: 2025-10-02 13:03:49.238 2 DEBUG nova.compute.manager [None req-ff504c11-b5e1-41cb-970b-8a5848d9233c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:03:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:49.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:49.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:49 np0005466030 nova_compute[230518]: 2025-10-02 13:03:49.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:03:50 np0005466030 nova_compute[230518]: 2025-10-02 13:03:50.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:03:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:51.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:51.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:52 np0005466030 nova_compute[230518]: 2025-10-02 13:03:52.686 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:03:52 np0005466030 nova_compute[230518]: 2025-10-02 13:03:52.687 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:03:52 np0005466030 nova_compute[230518]: 2025-10-02 13:03:52.702 2 DEBUG nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 09:03:52 np0005466030 nova_compute[230518]: 2025-10-02 13:03:52.773 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:03:52 np0005466030 nova_compute[230518]: 2025-10-02 13:03:52.773 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:03:52 np0005466030 nova_compute[230518]: 2025-10-02 13:03:52.779 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 09:03:52 np0005466030 nova_compute[230518]: 2025-10-02 13:03:52.780 2 INFO nova.compute.claims [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Claim successful on node compute-1.ctlplane.example.com
Oct  2 09:03:52 np0005466030 nova_compute[230518]: 2025-10-02 13:03:52.899 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:03:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:03:53 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1157188128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.322 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.329 2 DEBUG nova.compute.provider_tree [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.348 2 DEBUG nova.scheduler.client.report [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.378 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.378 2 DEBUG nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 09:03:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:53.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.424 2 DEBUG nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.425 2 DEBUG nova.network.neutron [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.444 2 INFO nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.464 2 DEBUG nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.570 2 DEBUG nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.572 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.572 2 INFO nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Creating image(s)
Oct  2 09:03:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:53.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.596 2 DEBUG nova.storage.rbd_utils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.625 2 DEBUG nova.storage.rbd_utils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.656 2 DEBUG nova.storage.rbd_utils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.661 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.727 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.728 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.729 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.729 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.750 2 DEBUG nova.storage.rbd_utils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.754 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 ea034622-0a48-4de6-8d68-0f2240b54214_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:03:53 np0005466030 nova_compute[230518]: 2025-10-02 13:03:53.781 2 DEBUG nova.policy [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5206d24fd75a48758994a57e7fd259f2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '52dd3c4419794d0fbecd536c5088c60f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 09:03:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:54 np0005466030 nova_compute[230518]: 2025-10-02 13:03:54.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:03:54 np0005466030 podman[299459]: 2025-10-02 13:03:54.847706838 +0000 UTC m=+0.103879458 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:03:54 np0005466030 podman[299460]: 2025-10-02 13:03:54.863753257 +0000 UTC m=+0.109801493 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 09:03:55 np0005466030 nova_compute[230518]: 2025-10-02 13:03:55.027 2 DEBUG nova.network.neutron [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Successfully created port: 55d951c1-1ce9-4d4a-979c-9be9aef7e283 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 09:03:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:55.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:55 np0005466030 nova_compute[230518]: 2025-10-02 13:03:55.408 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 ea034622-0a48-4de6-8d68-0f2240b54214_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.654s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:03:55 np0005466030 nova_compute[230518]: 2025-10-02 13:03:55.467 2 DEBUG nova.storage.rbd_utils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] resizing rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 09:03:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:55.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:03:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:03:55 np0005466030 nova_compute[230518]: 2025-10-02 13:03:55.878 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410220.8763592, 198c2dd4-f103-4bba-9fc3-9e41f44e465e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:03:55 np0005466030 nova_compute[230518]: 2025-10-02 13:03:55.878 2 INFO nova.compute.manager [-] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] VM Stopped (Lifecycle Event)
Oct  2 09:03:55 np0005466030 nova_compute[230518]: 2025-10-02 13:03:55.900 2 DEBUG nova.compute.manager [None req-da9cd9dd-dcc3-4f26-bee5-e61a4ccb0586 - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:03:55 np0005466030 nova_compute[230518]: 2025-10-02 13:03:55.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:03:56 np0005466030 nova_compute[230518]: 2025-10-02 13:03:56.282 2 DEBUG nova.objects.instance [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'migration_context' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:03:56 np0005466030 nova_compute[230518]: 2025-10-02 13:03:56.297 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 09:03:56 np0005466030 nova_compute[230518]: 2025-10-02 13:03:56.298 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Ensure instance console log exists: /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 09:03:56 np0005466030 nova_compute[230518]: 2025-10-02 13:03:56.298 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:03:56 np0005466030 nova_compute[230518]: 2025-10-02 13:03:56.299 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:03:56 np0005466030 nova_compute[230518]: 2025-10-02 13:03:56.299 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:03:56 np0005466030 nova_compute[230518]: 2025-10-02 13:03:56.950 2 DEBUG nova.network.neutron [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Successfully updated port: 55d951c1-1ce9-4d4a-979c-9be9aef7e283 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 09:03:56 np0005466030 nova_compute[230518]: 2025-10-02 13:03:56.983 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:03:56 np0005466030 nova_compute[230518]: 2025-10-02 13:03:56.983 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquired lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:03:56 np0005466030 nova_compute[230518]: 2025-10-02 13:03:56.984 2 DEBUG nova.network.neutron [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 09:03:57 np0005466030 nova_compute[230518]: 2025-10-02 13:03:57.110 2 DEBUG nova.compute.manager [req-3eebd9d5-2ede-40b0-a006-da2651bb73b1 req-77b225cd-5746-4eb3-9e92-cd7a4459879b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-changed-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:03:57 np0005466030 nova_compute[230518]: 2025-10-02 13:03:57.110 2 DEBUG nova.compute.manager [req-3eebd9d5-2ede-40b0-a006-da2651bb73b1 req-77b225cd-5746-4eb3-9e92-cd7a4459879b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Refreshing instance network info cache due to event network-changed-55d951c1-1ce9-4d4a-979c-9be9aef7e283. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 09:03:57 np0005466030 nova_compute[230518]: 2025-10-02 13:03:57.110 2 DEBUG oslo_concurrency.lockutils [req-3eebd9d5-2ede-40b0-a006-da2651bb73b1 req-77b225cd-5746-4eb3-9e92-cd7a4459879b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:03:57 np0005466030 nova_compute[230518]: 2025-10-02 13:03:57.206 2 DEBUG nova.network.neutron [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:03:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:57.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:57.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.247 2 DEBUG nova.network.neutron [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updating instance_info_cache with network_info: [{"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.268 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Releasing lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.268 2 DEBUG nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance network_info: |[{"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.268 2 DEBUG oslo_concurrency.lockutils [req-3eebd9d5-2ede-40b0-a006-da2651bb73b1 req-77b225cd-5746-4eb3-9e92-cd7a4459879b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.268 2 DEBUG nova.network.neutron [req-3eebd9d5-2ede-40b0-a006-da2651bb73b1 req-77b225cd-5746-4eb3-9e92-cd7a4459879b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Refreshing network info cache for port 55d951c1-1ce9-4d4a-979c-9be9aef7e283 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.271 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Start _get_guest_xml network_info=[{"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.275 2 WARNING nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.280 2 DEBUG nova.virt.libvirt.host [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.280 2 DEBUG nova.virt.libvirt.host [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.285 2 DEBUG nova.virt.libvirt.host [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.286 2 DEBUG nova.virt.libvirt.host [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.287 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.287 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.288 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.288 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.288 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.288 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.288 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.289 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.289 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.289 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.289 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.290 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.292 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:03:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:03:58 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2078953066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.708 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.749 2 DEBUG nova.storage.rbd_utils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:03:58 np0005466030 nova_compute[230518]: 2025-10-02 13:03:58.755 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:03:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:03:59 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3142643056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.217 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.219 2 DEBUG nova.virt.libvirt.vif [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:03:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-881712342',display_name='tempest-ServersNegativeTestJSON-server-881712342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-881712342',id=181,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52dd3c4419794d0fbecd536c5088c60f',ramdisk_id='',reservation_id='r-3dfuwrrh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1205930452',owner_user_name='tempest-ServersNegativeTes
tJSON-1205930452-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:03:53Z,user_data=None,user_id='5206d24fd75a48758994a57e7fd259f2',uuid=ea034622-0a48-4de6-8d68-0f2240b54214,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.220 2 DEBUG nova.network.os_vif_util [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converting VIF {"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.220 2 DEBUG nova.network.os_vif_util [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.221 2 DEBUG nova.objects.instance [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'pci_devices' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.235 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:03:59 np0005466030 nova_compute[230518]:  <uuid>ea034622-0a48-4de6-8d68-0f2240b54214</uuid>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:  <name>instance-000000b5</name>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServersNegativeTestJSON-server-881712342</nova:name>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:03:58</nova:creationTime>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:03:59 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:        <nova:user uuid="5206d24fd75a48758994a57e7fd259f2">tempest-ServersNegativeTestJSON-1205930452-project-member</nova:user>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:        <nova:project uuid="52dd3c4419794d0fbecd536c5088c60f">tempest-ServersNegativeTestJSON-1205930452</nova:project>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:        <nova:port uuid="55d951c1-1ce9-4d4a-979c-9be9aef7e283">
Oct  2 09:03:59 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <entry name="serial">ea034622-0a48-4de6-8d68-0f2240b54214</entry>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <entry name="uuid">ea034622-0a48-4de6-8d68-0f2240b54214</entry>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/ea034622-0a48-4de6-8d68-0f2240b54214_disk">
Oct  2 09:03:59 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:03:59 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/ea034622-0a48-4de6-8d68-0f2240b54214_disk.config">
Oct  2 09:03:59 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:03:59 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:e8:64:21"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <target dev="tap55d951c1-1c"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/console.log" append="off"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:03:59 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:03:59 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:03:59 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:03:59 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.236 2 DEBUG nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Preparing to wait for external event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.237 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.237 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.237 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.238 2 DEBUG nova.virt.libvirt.vif [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:03:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-881712342',display_name='tempest-ServersNegativeTestJSON-server-881712342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-881712342',id=181,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52dd3c4419794d0fbecd536c5088c60f',ramdisk_id='',reservation_id='r-3dfuwrrh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1205930452',owner_user_name='tempest-ServersNegativeTestJSON-1205930452-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:03:53Z,user_data=None,user_id='5206d24fd75a48758994a57e7fd259f2',uuid=ea034622-0a48-4de6-8d68-0f2240b54214,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.238 2 DEBUG nova.network.os_vif_util [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converting VIF {"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.238 2 DEBUG nova.network.os_vif_util [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.239 2 DEBUG os_vif [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.240 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.240 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.244 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55d951c1-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.245 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap55d951c1-1c, col_values=(('external_ids', {'iface-id': '55d951c1-1ce9-4d4a-979c-9be9aef7e283', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:64:21', 'vm-uuid': 'ea034622-0a48-4de6-8d68-0f2240b54214'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:03:59 np0005466030 NetworkManager[44960]: <info>  [1759410239.2485] manager: (tap55d951c1-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.257 2 INFO os_vif [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c')#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.314 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.315 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.315 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] No VIF found with MAC fa:16:3e:e8:64:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.316 2 INFO nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Using config drive#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.353 2 DEBUG nova.storage.rbd_utils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:03:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:59.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.567 2 DEBUG nova.network.neutron [req-3eebd9d5-2ede-40b0-a006-da2651bb73b1 req-77b225cd-5746-4eb3-9e92-cd7a4459879b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updated VIF entry in instance network info cache for port 55d951c1-1ce9-4d4a-979c-9be9aef7e283. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.568 2 DEBUG nova.network.neutron [req-3eebd9d5-2ede-40b0-a006-da2651bb73b1 req-77b225cd-5746-4eb3-9e92-cd7a4459879b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updating instance_info_cache with network_info: [{"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.583 2 DEBUG oslo_concurrency.lockutils [req-3eebd9d5-2ede-40b0-a006-da2651bb73b1 req-77b225cd-5746-4eb3-9e92-cd7a4459879b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:03:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:03:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:03:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:59.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.760 2 INFO nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Creating config drive at /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/disk.config#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.769 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfr6dbiyj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.931 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfr6dbiyj" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.961 2 DEBUG nova.storage.rbd_utils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:03:59 np0005466030 nova_compute[230518]: 2025-10-02 13:03:59.964 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/disk.config ea034622-0a48-4de6-8d68-0f2240b54214_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:01.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:01 np0005466030 nova_compute[230518]: 2025-10-02 13:04:01.404 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/disk.config ea034622-0a48-4de6-8d68-0f2240b54214_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:01 np0005466030 nova_compute[230518]: 2025-10-02 13:04:01.405 2 INFO nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Deleting local config drive /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/disk.config because it was imported into RBD.#033[00m
Oct  2 09:04:01 np0005466030 kernel: tap55d951c1-1c: entered promiscuous mode
Oct  2 09:04:01 np0005466030 NetworkManager[44960]: <info>  [1759410241.4751] manager: (tap55d951c1-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/347)
Oct  2 09:04:01 np0005466030 nova_compute[230518]: 2025-10-02 13:04:01.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:01 np0005466030 ovn_controller[129257]: 2025-10-02T13:04:01Z|00747|binding|INFO|Claiming lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 for this chassis.
Oct  2 09:04:01 np0005466030 ovn_controller[129257]: 2025-10-02T13:04:01Z|00748|binding|INFO|55d951c1-1ce9-4d4a-979c-9be9aef7e283: Claiming fa:16:3e:e8:64:21 10.100.0.3
Oct  2 09:04:01 np0005466030 nova_compute[230518]: 2025-10-02 13:04:01.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.493 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:64:21 10.100.0.3'], port_security=['fa:16:3e:e8:64:21 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ea034622-0a48-4de6-8d68-0f2240b54214', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52dd3c4419794d0fbecd536c5088c60f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'df14d61b-9762-4791-8375-7e8d13f38de1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26295213-1e12-4cdb-92a9-b65812bf362e, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=55d951c1-1ce9-4d4a-979c-9be9aef7e283) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.496 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 55d951c1-1ce9-4d4a-979c-9be9aef7e283 in datapath b07d0c6a-5988-4afb-b4ba-d4048578b224 bound to our chassis#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.500 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b07d0c6a-5988-4afb-b4ba-d4048578b224#033[00m
Oct  2 09:04:01 np0005466030 systemd-udevd[299759]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:04:01 np0005466030 systemd-machined[188247]: New machine qemu-86-instance-000000b5.
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.522 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[49444a95-bad8-4519-abd6-67a6ce6403fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.524 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb07d0c6a-51 in ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.527 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb07d0c6a-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.527 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[60c98234-b684-48ec-b0d6-a3266d75347f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:01 np0005466030 NetworkManager[44960]: <info>  [1759410241.5288] device (tap55d951c1-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:04:01 np0005466030 NetworkManager[44960]: <info>  [1759410241.5297] device (tap55d951c1-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.530 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ac345e59-0437-43ce-85ed-ca871b0d6cd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.551 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[799eb589-f428-4622-923d-5fdef5530ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:01 np0005466030 nova_compute[230518]: 2025-10-02 13:04:01.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:01 np0005466030 systemd[1]: Started Virtual Machine qemu-86-instance-000000b5.
Oct  2 09:04:01 np0005466030 ovn_controller[129257]: 2025-10-02T13:04:01Z|00749|binding|INFO|Setting lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 ovn-installed in OVS
Oct  2 09:04:01 np0005466030 ovn_controller[129257]: 2025-10-02T13:04:01Z|00750|binding|INFO|Setting lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 up in Southbound
Oct  2 09:04:01 np0005466030 nova_compute[230518]: 2025-10-02 13:04:01.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.580 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d93bb357-be68-4998-bb97-f76180112d54]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:01.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.618 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcd37d8-c58a-4dca-bdb9-5ecc6d03b07a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:01 np0005466030 NetworkManager[44960]: <info>  [1759410241.6256] manager: (tapb07d0c6a-50): new Veth device (/org/freedesktop/NetworkManager/Devices/348)
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.625 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1633adf1-4a39-467d-8eba-83b68d2ce3f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.672 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[03b310dc-ffae-4753-bc6c-51fa88267380]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.676 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8c55bcc8-e28d-4c69-b720-7a6c2285ccf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:01 np0005466030 NetworkManager[44960]: <info>  [1759410241.7021] device (tapb07d0c6a-50): carrier: link connected
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.709 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[32389b2d-f085-47b6-9943-eeefe6851226]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.728 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[57194fe2-28d8-439f-87ad-bab25362d2c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb07d0c6a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:48:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810526, 'reachable_time': 40282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299792, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.752 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9238ef81-b963-42f6-a6ca-132ceb369356]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:4808'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810526, 'tstamp': 810526}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299793, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.774 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d37f63f0-bcd9-477b-b891-191948d6d36c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb07d0c6a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:48:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810526, 'reachable_time': 40282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299794, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.819 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e58bcb-2fa5-467b-ac5d-12cf536ac50b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.880 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[096ecf0c-41b8-4613-b44b-0cc6e5c8b1ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.882 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb07d0c6a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.882 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.883 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb07d0c6a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:01 np0005466030 nova_compute[230518]: 2025-10-02 13:04:01.883 2 DEBUG nova.compute.manager [req-5d2a43b8-814e-44b1-aa89-e38458123563 req-0d299fa7-f882-45b2-862f-75f07d94ea30 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:01 np0005466030 nova_compute[230518]: 2025-10-02 13:04:01.883 2 DEBUG oslo_concurrency.lockutils [req-5d2a43b8-814e-44b1-aa89-e38458123563 req-0d299fa7-f882-45b2-862f-75f07d94ea30 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:01 np0005466030 nova_compute[230518]: 2025-10-02 13:04:01.883 2 DEBUG oslo_concurrency.lockutils [req-5d2a43b8-814e-44b1-aa89-e38458123563 req-0d299fa7-f882-45b2-862f-75f07d94ea30 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:01 np0005466030 nova_compute[230518]: 2025-10-02 13:04:01.884 2 DEBUG oslo_concurrency.lockutils [req-5d2a43b8-814e-44b1-aa89-e38458123563 req-0d299fa7-f882-45b2-862f-75f07d94ea30 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:01 np0005466030 nova_compute[230518]: 2025-10-02 13:04:01.884 2 DEBUG nova.compute.manager [req-5d2a43b8-814e-44b1-aa89-e38458123563 req-0d299fa7-f882-45b2-862f-75f07d94ea30 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Processing event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:04:01 np0005466030 nova_compute[230518]: 2025-10-02 13:04:01.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:01 np0005466030 kernel: tapb07d0c6a-50: entered promiscuous mode
Oct  2 09:04:01 np0005466030 NetworkManager[44960]: <info>  [1759410241.8862] manager: (tapb07d0c6a-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/349)
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.888 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb07d0c6a-50, col_values=(('external_ids', {'iface-id': '874a9fce-3ef5-498a-a977-43087c73ea46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:01 np0005466030 ovn_controller[129257]: 2025-10-02T13:04:01Z|00751|binding|INFO|Releasing lport 874a9fce-3ef5-498a-a977-43087c73ea46 from this chassis (sb_readonly=0)
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.906 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b07d0c6a-5988-4afb-b4ba-d4048578b224.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b07d0c6a-5988-4afb-b4ba-d4048578b224.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:04:01 np0005466030 nova_compute[230518]: 2025-10-02 13:04:01.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.907 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ffbbb28e-e294-48eb-8c35-3ed1f4aa03a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.908 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-b07d0c6a-5988-4afb-b4ba-d4048578b224
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/b07d0c6a-5988-4afb-b4ba-d4048578b224.pid.haproxy
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID b07d0c6a-5988-4afb-b4ba-d4048578b224
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:04:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.909 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'env', 'PROCESS_TAG=haproxy-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b07d0c6a-5988-4afb-b4ba-d4048578b224.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:04:02 np0005466030 podman[299868]: 2025-10-02 13:04:02.279751591 +0000 UTC m=+0.072361110 container create 90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 09:04:02 np0005466030 podman[299868]: 2025-10-02 13:04:02.228335083 +0000 UTC m=+0.020944622 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:04:02 np0005466030 systemd[1]: Started libpod-conmon-90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2.scope.
Oct  2 09:04:02 np0005466030 systemd[1]: Started libcrun container.
Oct  2 09:04:02 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afce76e5b3e048a73313ef5d27cfa735799b4e55d4b12602e339b371cc625d3b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:04:02 np0005466030 podman[299868]: 2025-10-02 13:04:02.368164938 +0000 UTC m=+0.160774477 container init 90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 09:04:02 np0005466030 podman[299868]: 2025-10-02 13:04:02.373135142 +0000 UTC m=+0.165744661 container start 90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  2 09:04:02 np0005466030 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[299883]: [NOTICE]   (299887) : New worker (299889) forked
Oct  2 09:04:02 np0005466030 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[299883]: [NOTICE]   (299887) : Loading success.
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.459 2 DEBUG nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.460 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410242.4603882, ea034622-0a48-4de6-8d68-0f2240b54214 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.461 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] VM Started (Lifecycle Event)#033[00m
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.463 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.468 2 INFO nova.virt.libvirt.driver [-] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance spawned successfully.#033[00m
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.468 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.489 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.494 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.497 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.498 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.498 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.498 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.499 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.499 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.551 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.551 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410242.461399, ea034622-0a48-4de6-8d68-0f2240b54214 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.552 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] VM Paused (Lifecycle Event)
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.602 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.605 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410242.4638412, ea034622-0a48-4de6-8d68-0f2240b54214 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.605 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] VM Resumed (Lifecycle Event)
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.621 2 INFO nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Took 9.05 seconds to spawn the instance on the hypervisor.
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.622 2 DEBUG nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.631 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.634 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.662 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.690 2 INFO nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Took 9.94 seconds to build instance.
Oct  2 09:04:02 np0005466030 nova_compute[230518]: 2025-10-02 13:04:02.720 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:04:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:03.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:03.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:04 np0005466030 nova_compute[230518]: 2025-10-02 13:04:04.096 2 DEBUG nova.compute.manager [req-0bdc5757-4a7f-4cdb-93bb-ebe3b03858b2 req-cd9d10f0-12d0-4182-9297-c127c7a645e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:04:04 np0005466030 nova_compute[230518]: 2025-10-02 13:04:04.096 2 DEBUG oslo_concurrency.lockutils [req-0bdc5757-4a7f-4cdb-93bb-ebe3b03858b2 req-cd9d10f0-12d0-4182-9297-c127c7a645e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:04:04 np0005466030 nova_compute[230518]: 2025-10-02 13:04:04.096 2 DEBUG oslo_concurrency.lockutils [req-0bdc5757-4a7f-4cdb-93bb-ebe3b03858b2 req-cd9d10f0-12d0-4182-9297-c127c7a645e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:04:04 np0005466030 nova_compute[230518]: 2025-10-02 13:04:04.097 2 DEBUG oslo_concurrency.lockutils [req-0bdc5757-4a7f-4cdb-93bb-ebe3b03858b2 req-cd9d10f0-12d0-4182-9297-c127c7a645e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:04:04 np0005466030 nova_compute[230518]: 2025-10-02 13:04:04.097 2 DEBUG nova.compute.manager [req-0bdc5757-4a7f-4cdb-93bb-ebe3b03858b2 req-cd9d10f0-12d0-4182-9297-c127c7a645e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] No waiting events found dispatching network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:04:04 np0005466030 nova_compute[230518]: 2025-10-02 13:04:04.097 2 WARNING nova.compute.manager [req-0bdc5757-4a7f-4cdb-93bb-ebe3b03858b2 req-cd9d10f0-12d0-4182-9297-c127c7a645e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received unexpected event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 for instance with vm_state active and task_state None.
Oct  2 09:04:04 np0005466030 nova_compute[230518]: 2025-10-02 13:04:04.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:04:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:04 np0005466030 nova_compute[230518]: 2025-10-02 13:04:04.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:04:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:04:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4228384104' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:04:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:04:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4228384104' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:04:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:04:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:05.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:04:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:05.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:04:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:07.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:04:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:07.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:09 np0005466030 nova_compute[230518]: 2025-10-02 13:04:09.126 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:04:09 np0005466030 nova_compute[230518]: 2025-10-02 13:04:09.127 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:04:09 np0005466030 nova_compute[230518]: 2025-10-02 13:04:09.144 2 DEBUG nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 09:04:09 np0005466030 nova_compute[230518]: 2025-10-02 13:04:09.226 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:04:09 np0005466030 nova_compute[230518]: 2025-10-02 13:04:09.227 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:04:09 np0005466030 nova_compute[230518]: 2025-10-02 13:04:09.234 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 09:04:09 np0005466030 nova_compute[230518]: 2025-10-02 13:04:09.235 2 INFO nova.compute.claims [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Claim successful on node compute-1.ctlplane.example.com
Oct  2 09:04:09 np0005466030 nova_compute[230518]: 2025-10-02 13:04:09.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:04:09 np0005466030 nova_compute[230518]: 2025-10-02 13:04:09.355 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:04:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:09.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:04:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:09.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:04:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:04:09 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1182037887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:04:09 np0005466030 nova_compute[230518]: 2025-10-02 13:04:09.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:04:09 np0005466030 nova_compute[230518]: 2025-10-02 13:04:09.825 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:04:09 np0005466030 nova_compute[230518]: 2025-10-02 13:04:09.832 2 DEBUG nova.compute.provider_tree [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:04:09 np0005466030 nova_compute[230518]: 2025-10-02 13:04:09.875 2 DEBUG nova.scheduler.client.report [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:04:09 np0005466030 nova_compute[230518]: 2025-10-02 13:04:09.921 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:04:09 np0005466030 nova_compute[230518]: 2025-10-02 13:04:09.921 2 DEBUG nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 09:04:09 np0005466030 nova_compute[230518]: 2025-10-02 13:04:09.975 2 DEBUG nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 09:04:09 np0005466030 nova_compute[230518]: 2025-10-02 13:04:09.976 2 DEBUG nova.network.neutron [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 09:04:09 np0005466030 nova_compute[230518]: 2025-10-02 13:04:09.998 2 INFO nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.023 2 DEBUG nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.109 2 DEBUG nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.110 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.110 2 INFO nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Creating image(s)
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.135 2 DEBUG nova.storage.rbd_utils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.161 2 DEBUG nova.storage.rbd_utils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.187 2 DEBUG nova.storage.rbd_utils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.190 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.218 2 DEBUG nova.policy [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5206d24fd75a48758994a57e7fd259f2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '52dd3c4419794d0fbecd536c5088c60f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.253 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.254 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.254 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.254 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.285 2 DEBUG nova.storage.rbd_utils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.289 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.573 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.648 2 DEBUG nova.storage.rbd_utils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] resizing rbd image ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.764 2 DEBUG nova.objects.instance [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'migration_context' on Instance uuid ab1d03d2-f5f1-479c-9c49-4519bb6f6b53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.780 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.780 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Ensure instance console log exists: /var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.781 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.781 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:04:10 np0005466030 nova_compute[230518]: 2025-10-02 13:04:10.781 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:04:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:11.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:04:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:11.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:04:12 np0005466030 nova_compute[230518]: 2025-10-02 13:04:12.194 2 DEBUG nova.network.neutron [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Successfully created port: b3e07905-2e01-4835-9249-b8d3a5c67f76 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:04:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:04:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:13.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:04:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:13.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:14 np0005466030 nova_compute[230518]: 2025-10-02 13:04:14.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:14 np0005466030 nova_compute[230518]: 2025-10-02 13:04:14.290 2 DEBUG nova.network.neutron [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Successfully updated port: b3e07905-2e01-4835-9249-b8d3a5c67f76 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:04:14 np0005466030 nova_compute[230518]: 2025-10-02 13:04:14.302 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "refresh_cache-ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:04:14 np0005466030 nova_compute[230518]: 2025-10-02 13:04:14.302 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquired lock "refresh_cache-ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:04:14 np0005466030 nova_compute[230518]: 2025-10-02 13:04:14.302 2 DEBUG nova.network.neutron [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:04:14 np0005466030 nova_compute[230518]: 2025-10-02 13:04:14.389 2 DEBUG nova.compute.manager [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Received event network-changed-b3e07905-2e01-4835-9249-b8d3a5c67f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:14 np0005466030 nova_compute[230518]: 2025-10-02 13:04:14.390 2 DEBUG nova.compute.manager [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Refreshing instance network info cache due to event network-changed-b3e07905-2e01-4835-9249-b8d3a5c67f76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:04:14 np0005466030 nova_compute[230518]: 2025-10-02 13:04:14.390 2 DEBUG oslo_concurrency.lockutils [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:04:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:14 np0005466030 nova_compute[230518]: 2025-10-02 13:04:14.455 2 DEBUG nova.network.neutron [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:04:14 np0005466030 nova_compute[230518]: 2025-10-02 13:04:14.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:14 np0005466030 podman[300087]: 2025-10-02 13:04:14.825030777 +0000 UTC m=+0.068111677 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 09:04:14 np0005466030 podman[300086]: 2025-10-02 13:04:14.863265935 +0000 UTC m=+0.104092815 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.292 2 DEBUG nova.network.neutron [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Updating instance_info_cache with network_info: [{"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.340 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Releasing lock "refresh_cache-ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.341 2 DEBUG nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Instance network_info: |[{"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.341 2 DEBUG oslo_concurrency.lockutils [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.341 2 DEBUG nova.network.neutron [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Refreshing network info cache for port b3e07905-2e01-4835-9249-b8d3a5c67f76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.344 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Start _get_guest_xml network_info=[{"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.350 2 WARNING nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.355 2 DEBUG nova.virt.libvirt.host [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.356 2 DEBUG nova.virt.libvirt.host [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.359 2 DEBUG nova.virt.libvirt.host [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.360 2 DEBUG nova.virt.libvirt.host [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.361 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.361 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.361 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.361 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.362 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.362 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.362 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.362 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.362 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.362 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.363 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.363 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.365 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:04:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:15.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:04:15 np0005466030 ovn_controller[129257]: 2025-10-02T13:04:15Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e8:64:21 10.100.0.3
Oct  2 09:04:15 np0005466030 ovn_controller[129257]: 2025-10-02T13:04:15Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:64:21 10.100.0.3
Oct  2 09:04:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:15.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:04:15 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1864796394' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.791 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.818 2 DEBUG nova.storage.rbd_utils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:15 np0005466030 nova_compute[230518]: 2025-10-02 13:04:15.821 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:04:16 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2649936971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.247 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.249 2 DEBUG nova.virt.libvirt.vif [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:04:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1005291499',display_name='tempest-ServersNegativeTestJSON-server-1005291499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1005291499',id=184,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52dd3c4419794d0fbecd536c5088c60f',ramdisk_id='',reservation_id='r-vr00cw3i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1205930452',owner_user_name='tempest-ServersNegative
TestJSON-1205930452-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:04:10Z,user_data=None,user_id='5206d24fd75a48758994a57e7fd259f2',uuid=ab1d03d2-f5f1-479c-9c49-4519bb6f6b53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.250 2 DEBUG nova.network.os_vif_util [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converting VIF {"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.251 2 DEBUG nova.network.os_vif_util [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:dc,bridge_name='br-int',has_traffic_filtering=True,id=b3e07905-2e01-4835-9249-b8d3a5c67f76,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e07905-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.252 2 DEBUG nova.objects.instance [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'pci_devices' on Instance uuid ab1d03d2-f5f1-479c-9c49-4519bb6f6b53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.265 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:04:16 np0005466030 nova_compute[230518]:  <uuid>ab1d03d2-f5f1-479c-9c49-4519bb6f6b53</uuid>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:  <name>instance-000000b8</name>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <nova:name>tempest-ServersNegativeTestJSON-server-1005291499</nova:name>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:04:15</nova:creationTime>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:04:16 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:        <nova:user uuid="5206d24fd75a48758994a57e7fd259f2">tempest-ServersNegativeTestJSON-1205930452-project-member</nova:user>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:        <nova:project uuid="52dd3c4419794d0fbecd536c5088c60f">tempest-ServersNegativeTestJSON-1205930452</nova:project>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:        <nova:port uuid="b3e07905-2e01-4835-9249-b8d3a5c67f76">
Oct  2 09:04:16 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <entry name="serial">ab1d03d2-f5f1-479c-9c49-4519bb6f6b53</entry>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <entry name="uuid">ab1d03d2-f5f1-479c-9c49-4519bb6f6b53</entry>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk">
Oct  2 09:04:16 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:04:16 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk.config">
Oct  2 09:04:16 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:04:16 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:6b:50:dc"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <target dev="tapb3e07905-2e"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53/console.log" append="off"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:04:16 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:04:16 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:04:16 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:04:16 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.266 2 DEBUG nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Preparing to wait for external event network-vif-plugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.266 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.266 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.266 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.267 2 DEBUG nova.virt.libvirt.vif [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:04:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1005291499',display_name='tempest-ServersNegativeTestJSON-server-1005291499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1005291499',id=184,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52dd3c4419794d0fbecd536c5088c60f',ramdisk_id='',reservation_id='r-vr00cw3i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1205930452',owner_user_name='tempest-ServersNegativeTestJSON-1205930452-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:04:10Z,user_data=None,user_id='5206d24fd75a48758994a57e7fd259f2',uuid=ab1d03d2-f5f1-479c-9c49-4519bb6f6b53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.267 2 DEBUG nova.network.os_vif_util [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converting VIF {"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.267 2 DEBUG nova.network.os_vif_util [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:dc,bridge_name='br-int',has_traffic_filtering=True,id=b3e07905-2e01-4835-9249-b8d3a5c67f76,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e07905-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.268 2 DEBUG os_vif [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:dc,bridge_name='br-int',has_traffic_filtering=True,id=b3e07905-2e01-4835-9249-b8d3a5c67f76,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e07905-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.269 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.269 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.271 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3e07905-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.272 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3e07905-2e, col_values=(('external_ids', {'iface-id': 'b3e07905-2e01-4835-9249-b8d3a5c67f76', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:50:dc', 'vm-uuid': 'ab1d03d2-f5f1-479c-9c49-4519bb6f6b53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:16 np0005466030 NetworkManager[44960]: <info>  [1759410256.2745] manager: (tapb3e07905-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/350)
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.280 2 INFO os_vif [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:dc,bridge_name='br-int',has_traffic_filtering=True,id=b3e07905-2e01-4835-9249-b8d3a5c67f76,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e07905-2e')#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.332 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.332 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.332 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] No VIF found with MAC fa:16:3e:6b:50:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.333 2 INFO nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Using config drive#033[00m
Oct  2 09:04:16 np0005466030 nova_compute[230518]: 2025-10-02 13:04:16.361 2 DEBUG nova.storage.rbd_utils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.026 2 INFO nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Creating config drive at /var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53/disk.config#033[00m
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.030 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2sjzgamx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.058 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.091 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.092 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.092 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.092 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.092 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.165 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2sjzgamx" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.195 2 DEBUG nova.storage.rbd_utils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.199 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53/disk.config ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.363 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53/disk.config ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.364 2 INFO nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Deleting local config drive /var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53/disk.config because it was imported into RBD.#033[00m
Oct  2 09:04:17 np0005466030 kernel: tapb3e07905-2e: entered promiscuous mode
Oct  2 09:04:17 np0005466030 NetworkManager[44960]: <info>  [1759410257.4119] manager: (tapb3e07905-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/351)
Oct  2 09:04:17 np0005466030 ovn_controller[129257]: 2025-10-02T13:04:17Z|00752|binding|INFO|Claiming lport b3e07905-2e01-4835-9249-b8d3a5c67f76 for this chassis.
Oct  2 09:04:17 np0005466030 ovn_controller[129257]: 2025-10-02T13:04:17Z|00753|binding|INFO|b3e07905-2e01-4835-9249-b8d3a5c67f76: Claiming fa:16:3e:6b:50:dc 10.100.0.9
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:04:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.419 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:50:dc 10.100.0.9'], port_security=['fa:16:3e:6b:50:dc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ab1d03d2-f5f1-479c-9c49-4519bb6f6b53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52dd3c4419794d0fbecd536c5088c60f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'df14d61b-9762-4791-8375-7e8d13f38de1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26295213-1e12-4cdb-92a9-b65812bf362e, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=b3e07905-2e01-4835-9249-b8d3a5c67f76) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 09:04:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.420 138374 INFO neutron.agent.ovn.metadata.agent [-] Port b3e07905-2e01-4835-9249-b8d3a5c67f76 in datapath b07d0c6a-5988-4afb-b4ba-d4048578b224 bound to our chassis
Oct  2 09:04:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.423 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b07d0c6a-5988-4afb-b4ba-d4048578b224
Oct  2 09:04:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:17.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:17 np0005466030 ovn_controller[129257]: 2025-10-02T13:04:17Z|00754|binding|INFO|Setting lport b3e07905-2e01-4835-9249-b8d3a5c67f76 ovn-installed in OVS
Oct  2 09:04:17 np0005466030 ovn_controller[129257]: 2025-10-02T13:04:17Z|00755|binding|INFO|Setting lport b3e07905-2e01-4835-9249-b8d3a5c67f76 up in Southbound
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:04:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.442 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0945e480-aaab-460d-879b-ce58b7707862]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:04:17 np0005466030 systemd-udevd[300284]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:04:17 np0005466030 systemd-machined[188247]: New machine qemu-87-instance-000000b8.
Oct  2 09:04:17 np0005466030 NetworkManager[44960]: <info>  [1759410257.4662] device (tapb3e07905-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:04:17 np0005466030 NetworkManager[44960]: <info>  [1759410257.4670] device (tapb3e07905-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:04:17 np0005466030 systemd[1]: Started Virtual Machine qemu-87-instance-000000b8.
Oct  2 09:04:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.479 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7c7d96c8-f8b1-46d2-beee-b44dc627cf8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:04:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.483 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[9088010c-9e93-4696-8444-b7b3753cee55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:04:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.509 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e71ae27e-8f92-471e-8426-cbdf00be1998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:04:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.527 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9814d27c-dabe-4b66-8eba-2734117c1f5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb07d0c6a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:48:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810526, 'reachable_time': 40282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300296, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:04:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:04:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1373024404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:04:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.544 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b58442-a371-4005-8762-d66e6e05eee9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb07d0c6a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810541, 'tstamp': 810541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300299, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb07d0c6a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810543, 'tstamp': 810543}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300299, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:04:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.546 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb07d0c6a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:04:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.551 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb07d0c6a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:04:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.551 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.552 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:04:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.551 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb07d0c6a-50, col_values=(('external_ids', {'iface-id': '874a9fce-3ef5-498a-a977-43087c73ea46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:04:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.552 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 09:04:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:17.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.655 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.655 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.659 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.659 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.821 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.822 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4101MB free_disk=20.84320831298828GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.823 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.823 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.952 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance ea034622-0a48-4de6-8d68-0f2240b54214 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.952 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance ab1d03d2-f5f1-479c-9c49-4519bb6f6b53 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.952 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  2 09:04:17 np0005466030 nova_compute[230518]: 2025-10-02 13:04:17.952 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  2 09:04:18 np0005466030 nova_compute[230518]: 2025-10-02 13:04:18.038 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:04:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:04:18 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3310282002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:04:18 np0005466030 nova_compute[230518]: 2025-10-02 13:04:18.510 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:04:18 np0005466030 nova_compute[230518]: 2025-10-02 13:04:18.520 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:04:18 np0005466030 nova_compute[230518]: 2025-10-02 13:04:18.538 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:04:18 np0005466030 nova_compute[230518]: 2025-10-02 13:04:18.561 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 09:04:18 np0005466030 nova_compute[230518]: 2025-10-02 13:04:18.561 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:04:18 np0005466030 nova_compute[230518]: 2025-10-02 13:04:18.795 2 DEBUG nova.network.neutron [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Updated VIF entry in instance network info cache for port b3e07905-2e01-4835-9249-b8d3a5c67f76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 09:04:18 np0005466030 nova_compute[230518]: 2025-10-02 13:04:18.796 2 DEBUG nova.network.neutron [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Updating instance_info_cache with network_info: [{"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:04:18 np0005466030 nova_compute[230518]: 2025-10-02 13:04:18.821 2 DEBUG oslo_concurrency.lockutils [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.040 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410259.0395472, ab1d03d2-f5f1-479c-9c49-4519bb6f6b53 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.040 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] VM Started (Lifecycle Event)
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.078 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.083 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410259.0399988, ab1d03d2-f5f1-479c-9c49-4519bb6f6b53 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.083 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] VM Paused (Lifecycle Event)
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.122 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.126 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.142 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.160 2 DEBUG nova.compute.manager [req-60972a49-f209-4d7d-8eb7-719dfb874dea req-b8b1f2c0-bfb1-4a5b-a102-994ecd7232f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Received event network-vif-plugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.160 2 DEBUG oslo_concurrency.lockutils [req-60972a49-f209-4d7d-8eb7-719dfb874dea req-b8b1f2c0-bfb1-4a5b-a102-994ecd7232f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.161 2 DEBUG oslo_concurrency.lockutils [req-60972a49-f209-4d7d-8eb7-719dfb874dea req-b8b1f2c0-bfb1-4a5b-a102-994ecd7232f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.161 2 DEBUG oslo_concurrency.lockutils [req-60972a49-f209-4d7d-8eb7-719dfb874dea req-b8b1f2c0-bfb1-4a5b-a102-994ecd7232f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.161 2 DEBUG nova.compute.manager [req-60972a49-f209-4d7d-8eb7-719dfb874dea req-b8b1f2c0-bfb1-4a5b-a102-994ecd7232f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Processing event network-vif-plugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.162 2 DEBUG nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.166 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.166 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410259.1656766, ab1d03d2-f5f1-479c-9c49-4519bb6f6b53 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.167 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] VM Resumed (Lifecycle Event)
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.172 2 INFO nova.virt.libvirt.driver [-] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Instance spawned successfully.
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.172 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.203 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.210 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.214 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.215 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.215 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.216 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.216 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.216 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.318 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.354 2 INFO nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Took 9.24 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.356 2 DEBUG nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:04:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:19.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.433 2 INFO nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Took 10.23 seconds to build instance.#033[00m
Oct  2 09:04:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.450 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:04:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:19.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:04:19 np0005466030 nova_compute[230518]: 2025-10-02 13:04:19.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:20 np0005466030 nova_compute[230518]: 2025-10-02 13:04:20.823 2 DEBUG oslo_concurrency.lockutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:20 np0005466030 nova_compute[230518]: 2025-10-02 13:04:20.824 2 DEBUG oslo_concurrency.lockutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:20 np0005466030 nova_compute[230518]: 2025-10-02 13:04:20.824 2 DEBUG oslo_concurrency.lockutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:20 np0005466030 nova_compute[230518]: 2025-10-02 13:04:20.824 2 DEBUG oslo_concurrency.lockutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:20 np0005466030 nova_compute[230518]: 2025-10-02 13:04:20.825 2 DEBUG oslo_concurrency.lockutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:20 np0005466030 nova_compute[230518]: 2025-10-02 13:04:20.827 2 INFO nova.compute.manager [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Terminating instance#033[00m
Oct  2 09:04:20 np0005466030 nova_compute[230518]: 2025-10-02 13:04:20.828 2 DEBUG nova.compute.manager [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:04:20 np0005466030 kernel: tapb3e07905-2e (unregistering): left promiscuous mode
Oct  2 09:04:20 np0005466030 NetworkManager[44960]: <info>  [1759410260.8827] device (tapb3e07905-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:04:20 np0005466030 ovn_controller[129257]: 2025-10-02T13:04:20Z|00756|binding|INFO|Releasing lport b3e07905-2e01-4835-9249-b8d3a5c67f76 from this chassis (sb_readonly=0)
Oct  2 09:04:20 np0005466030 ovn_controller[129257]: 2025-10-02T13:04:20Z|00757|binding|INFO|Setting lport b3e07905-2e01-4835-9249-b8d3a5c67f76 down in Southbound
Oct  2 09:04:20 np0005466030 ovn_controller[129257]: 2025-10-02T13:04:20Z|00758|binding|INFO|Removing iface tapb3e07905-2e ovn-installed in OVS
Oct  2 09:04:20 np0005466030 nova_compute[230518]: 2025-10-02 13:04:20.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:20 np0005466030 nova_compute[230518]: 2025-10-02 13:04:20.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:20 np0005466030 nova_compute[230518]: 2025-10-02 13:04:20.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:20.959 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:50:dc 10.100.0.9'], port_security=['fa:16:3e:6b:50:dc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ab1d03d2-f5f1-479c-9c49-4519bb6f6b53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52dd3c4419794d0fbecd536c5088c60f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'df14d61b-9762-4791-8375-7e8d13f38de1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26295213-1e12-4cdb-92a9-b65812bf362e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=b3e07905-2e01-4835-9249-b8d3a5c67f76) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:04:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:20.960 138374 INFO neutron.agent.ovn.metadata.agent [-] Port b3e07905-2e01-4835-9249-b8d3a5c67f76 in datapath b07d0c6a-5988-4afb-b4ba-d4048578b224 unbound from our chassis#033[00m
Oct  2 09:04:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:20.963 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b07d0c6a-5988-4afb-b4ba-d4048578b224#033[00m
Oct  2 09:04:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:20.979 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ea711393-5d07-4b6f-8864-775ff1e65eed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:20 np0005466030 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000b8.scope: Deactivated successfully.
Oct  2 09:04:20 np0005466030 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000b8.scope: Consumed 3.219s CPU time.
Oct  2 09:04:21 np0005466030 systemd-machined[188247]: Machine qemu-87-instance-000000b8 terminated.
Oct  2 09:04:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.011 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[98c05b4e-2980-4136-a584-7314e4ba1b8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.014 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f99427fc-912c-4bdf-a231-bcda4fea0f7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.042 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[1ba32e9f-868f-432e-8577-95c46f101082]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:21 np0005466030 kernel: tapb3e07905-2e: entered promiscuous mode
Oct  2 09:04:21 np0005466030 kernel: tapb3e07905-2e (unregistering): left promiscuous mode
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.064 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0a52686c-ffee-4cd5-a8a8-471f77621cd5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb07d0c6a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:48:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 874, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 874, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810526, 'reachable_time': 40282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300381, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.074 2 INFO nova.virt.libvirt.driver [-] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Instance destroyed successfully.#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.074 2 DEBUG nova.objects.instance [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'resources' on Instance uuid ab1d03d2-f5f1-479c-9c49-4519bb6f6b53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:04:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.080 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[02264d18-660e-453b-9705-98faadfa8378]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb07d0c6a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810541, 'tstamp': 810541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300383, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb07d0c6a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810543, 'tstamp': 810543}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300383, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:04:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.082 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb07d0c6a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.088 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb07d0c6a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.089 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.089 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb07d0c6a-50, col_values=(('external_ids', {'iface-id': '874a9fce-3ef5-498a-a977-43087c73ea46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.089 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.111 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:04:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.113 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.181 2 DEBUG nova.virt.libvirt.vif [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:04:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1005291499',display_name='tempest-ServersNegativeTestJSON-server-1005291499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1005291499',id=184,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:04:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='52dd3c4419794d0fbecd536c5088c60f',ramdisk_id='',reservation_id='r-vr00cw3i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',imag
e_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1205930452',owner_user_name='tempest-ServersNegativeTestJSON-1205930452-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:04:19Z,user_data=None,user_id='5206d24fd75a48758994a57e7fd259f2',uuid=ab1d03d2-f5f1-479c-9c49-4519bb6f6b53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.182 2 DEBUG nova.network.os_vif_util [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converting VIF {"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.182 2 DEBUG nova.network.os_vif_util [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:dc,bridge_name='br-int',has_traffic_filtering=True,id=b3e07905-2e01-4835-9249-b8d3a5c67f76,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e07905-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.182 2 DEBUG os_vif [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:dc,bridge_name='br-int',has_traffic_filtering=True,id=b3e07905-2e01-4835-9249-b8d3a5c67f76,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e07905-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3e07905-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.189 2 INFO os_vif [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:dc,bridge_name='br-int',has_traffic_filtering=True,id=b3e07905-2e01-4835-9249-b8d3a5c67f76,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e07905-2e')#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.294 2 DEBUG nova.compute.manager [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Received event network-vif-plugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.294 2 DEBUG oslo_concurrency.lockutils [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.294 2 DEBUG oslo_concurrency.lockutils [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.295 2 DEBUG oslo_concurrency.lockutils [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.295 2 DEBUG nova.compute.manager [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] No waiting events found dispatching network-vif-plugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.295 2 WARNING nova.compute.manager [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Received unexpected event network-vif-plugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:04:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:04:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:21.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:04:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:04:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:21.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.644 2 INFO nova.virt.libvirt.driver [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Deleting instance files /var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_del#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.644 2 INFO nova.virt.libvirt.driver [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Deletion of /var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_del complete#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.703 2 INFO nova.compute.manager [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.704 2 DEBUG oslo.service.loopingcall [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.704 2 DEBUG nova.compute.manager [-] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:04:21 np0005466030 nova_compute[230518]: 2025-10-02 13:04:21.705 2 DEBUG nova.network.neutron [-] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:04:22 np0005466030 nova_compute[230518]: 2025-10-02 13:04:22.556 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:22 np0005466030 nova_compute[230518]: 2025-10-02 13:04:22.557 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:22 np0005466030 nova_compute[230518]: 2025-10-02 13:04:22.558 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:22 np0005466030 nova_compute[230518]: 2025-10-02 13:04:22.558 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:04:22 np0005466030 nova_compute[230518]: 2025-10-02 13:04:22.974 2 DEBUG nova.network.neutron [-] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:04:22 np0005466030 nova_compute[230518]: 2025-10-02 13:04:22.995 2 INFO nova.compute.manager [-] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Took 1.29 seconds to deallocate network for instance.#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.045 2 DEBUG oslo_concurrency.lockutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.045 2 DEBUG oslo_concurrency.lockutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.125 2 DEBUG nova.compute.manager [req-7a5e65bd-a1e6-44b3-b041-cd1d608ef149 req-62b24368-e034-4fb8-917d-26b15c3963c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Received event network-vif-deleted-b3e07905-2e01-4835-9249-b8d3a5c67f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.132 2 DEBUG oslo_concurrency.processutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.369 2 DEBUG nova.compute.manager [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Received event network-vif-unplugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.375 2 DEBUG oslo_concurrency.lockutils [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.376 2 DEBUG oslo_concurrency.lockutils [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.376 2 DEBUG oslo_concurrency.lockutils [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.376 2 DEBUG nova.compute.manager [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] No waiting events found dispatching network-vif-unplugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.377 2 WARNING nova.compute.manager [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Received unexpected event network-vif-unplugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.377 2 DEBUG nova.compute.manager [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Received event network-vif-plugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.378 2 DEBUG oslo_concurrency.lockutils [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.378 2 DEBUG oslo_concurrency.lockutils [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.378 2 DEBUG oslo_concurrency.lockutils [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.379 2 DEBUG nova.compute.manager [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] No waiting events found dispatching network-vif-plugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.379 2 WARNING nova.compute.manager [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Received unexpected event network-vif-plugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 09:04:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:23.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:04:23 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/411876806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.552 2 DEBUG oslo_concurrency.processutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.557 2 DEBUG nova.compute.provider_tree [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.576 2 DEBUG nova.scheduler.client.report [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.600 2 DEBUG oslo_concurrency.lockutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:23.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.638 2 INFO nova.scheduler.client.report [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Deleted allocations for instance ab1d03d2-f5f1-479c-9c49-4519bb6f6b53#033[00m
Oct  2 09:04:23 np0005466030 nova_compute[230518]: 2025-10-02 13:04:23.709 2 DEBUG oslo_concurrency.lockutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:24 np0005466030 nova_compute[230518]: 2025-10-02 13:04:24.049 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:24 np0005466030 nova_compute[230518]: 2025-10-02 13:04:24.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:24 np0005466030 nova_compute[230518]: 2025-10-02 13:04:24.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:25 np0005466030 nova_compute[230518]: 2025-10-02 13:04:25.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:25.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:04:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:25.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:04:25 np0005466030 podman[300428]: 2025-10-02 13:04:25.820401225 +0000 UTC m=+0.061347057 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:04:25 np0005466030 podman[300427]: 2025-10-02 13:04:25.820745005 +0000 UTC m=+0.065974910 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3)
Oct  2 09:04:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:25.960 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:04:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:25.961 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:04:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:25.962 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:04:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:04:26.114 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:04:26 np0005466030 nova_compute[230518]: 2025-10-02 13:04:26.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:27 np0005466030 nova_compute[230518]: 2025-10-02 13:04:27.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:27.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:27.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:29.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:29.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:29 np0005466030 nova_compute[230518]: 2025-10-02 13:04:29.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:30 np0005466030 nova_compute[230518]: 2025-10-02 13:04:30.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:30 np0005466030 nova_compute[230518]: 2025-10-02 13:04:30.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:04:30 np0005466030 nova_compute[230518]: 2025-10-02 13:04:30.077 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:04:31 np0005466030 nova_compute[230518]: 2025-10-02 13:04:31.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:04:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:31.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:04:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:04:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:31.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:04:33 np0005466030 nova_compute[230518]: 2025-10-02 13:04:33.071 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:04:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:33.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:04:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:33.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:04:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:34 np0005466030 nova_compute[230518]: 2025-10-02 13:04:34.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:04:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:35.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:04:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:35.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:36 np0005466030 nova_compute[230518]: 2025-10-02 13:04:36.071 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410261.0696728, ab1d03d2-f5f1-479c-9c49-4519bb6f6b53 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:04:36 np0005466030 nova_compute[230518]: 2025-10-02 13:04:36.071 2 INFO nova.compute.manager [-] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:04:36 np0005466030 nova_compute[230518]: 2025-10-02 13:04:36.095 2 DEBUG nova.compute.manager [None req-1e77796d-45c3-4437-a95a-de7582c23f2f - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:04:36 np0005466030 nova_compute[230518]: 2025-10-02 13:04:36.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:37.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:37.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:39.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:04:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:39.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:04:39 np0005466030 nova_compute[230518]: 2025-10-02 13:04:39.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:41 np0005466030 nova_compute[230518]: 2025-10-02 13:04:41.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:41.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:41.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:43.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:43.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:44 np0005466030 nova_compute[230518]: 2025-10-02 13:04:44.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:45.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:45.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:45 np0005466030 podman[300467]: 2025-10-02 13:04:45.837558042 +0000 UTC m=+0.088761150 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct  2 09:04:45 np0005466030 podman[300466]: 2025-10-02 13:04:45.851957369 +0000 UTC m=+0.110080632 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:04:46 np0005466030 nova_compute[230518]: 2025-10-02 13:04:46.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:47.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:47.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:48 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Oct  2 09:04:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:49.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:49.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:49 np0005466030 nova_compute[230518]: 2025-10-02 13:04:49.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:51 np0005466030 nova_compute[230518]: 2025-10-02 13:04:51.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:51.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:51.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:53.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:04:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:53.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:04:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:54 np0005466030 nova_compute[230518]: 2025-10-02 13:04:54.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:55.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:04:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:55.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:04:56 np0005466030 nova_compute[230518]: 2025-10-02 13:04:56.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:04:56 np0005466030 podman[300642]: 2025-10-02 13:04:56.809156961 +0000 UTC m=+0.065274119 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, config_id=iscsid)
Oct  2 09:04:56 np0005466030 podman[300643]: 2025-10-02 13:04:56.824874699 +0000 UTC m=+0.070316106 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:04:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:57.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:57 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:04:57 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:04:57 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:04:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:04:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:57.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:04:58 np0005466030 nova_compute[230518]: 2025-10-02 13:04:58.678 2 INFO nova.compute.manager [None req-1d835b92-7838-4d05-a458-99435dafbaff 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Pausing#033[00m
Oct  2 09:04:58 np0005466030 nova_compute[230518]: 2025-10-02 13:04:58.679 2 DEBUG nova.objects.instance [None req-1d835b92-7838-4d05-a458-99435dafbaff 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'flavor' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:04:58 np0005466030 nova_compute[230518]: 2025-10-02 13:04:58.715 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410298.7151353, ea034622-0a48-4de6-8d68-0f2240b54214 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:04:58 np0005466030 nova_compute[230518]: 2025-10-02 13:04:58.716 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:04:58 np0005466030 nova_compute[230518]: 2025-10-02 13:04:58.719 2 DEBUG nova.compute.manager [None req-1d835b92-7838-4d05-a458-99435dafbaff 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:04:58 np0005466030 nova_compute[230518]: 2025-10-02 13:04:58.759 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:04:58 np0005466030 nova_compute[230518]: 2025-10-02 13:04:58.765 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:04:58 np0005466030 nova_compute[230518]: 2025-10-02 13:04:58.800 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Oct  2 09:04:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:04:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:04:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:59.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:04:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:04:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:04:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:59.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:04:59 np0005466030 nova_compute[230518]: 2025-10-02 13:04:59.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:01 np0005466030 nova_compute[230518]: 2025-10-02 13:05:01.148 2 INFO nova.compute.manager [None req-9d4d0b89-6387-44be-856a-80f0059cf9f0 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Unpausing#033[00m
Oct  2 09:05:01 np0005466030 nova_compute[230518]: 2025-10-02 13:05:01.149 2 DEBUG nova.objects.instance [None req-9d4d0b89-6387-44be-856a-80f0059cf9f0 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'flavor' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:01 np0005466030 nova_compute[230518]: 2025-10-02 13:05:01.203 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410301.2030923, ea034622-0a48-4de6-8d68-0f2240b54214 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:05:01 np0005466030 nova_compute[230518]: 2025-10-02 13:05:01.203 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:05:01 np0005466030 virtqemud[230067]: argument unsupported: QEMU guest agent is not configured
Oct  2 09:05:01 np0005466030 nova_compute[230518]: 2025-10-02 13:05:01.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:01 np0005466030 nova_compute[230518]: 2025-10-02 13:05:01.209 2 DEBUG nova.virt.libvirt.guest [None req-9d4d0b89-6387-44be-856a-80f0059cf9f0 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  2 09:05:01 np0005466030 nova_compute[230518]: 2025-10-02 13:05:01.209 2 DEBUG nova.compute.manager [None req-9d4d0b89-6387-44be-856a-80f0059cf9f0 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:01 np0005466030 nova_compute[230518]: 2025-10-02 13:05:01.262 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:01 np0005466030 nova_compute[230518]: 2025-10-02 13:05:01.265 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:05:01 np0005466030 nova_compute[230518]: 2025-10-02 13:05:01.301 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Oct  2 09:05:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:01.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:01.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:03.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:05:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:03.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:05:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:04 np0005466030 nova_compute[230518]: 2025-10-02 13:05:04.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:05.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:05:05 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:05:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:05.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:06 np0005466030 nova_compute[230518]: 2025-10-02 13:05:06.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:05:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:07.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:05:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:05:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:07.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:05:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:05:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:09.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:05:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:09.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:09 np0005466030 nova_compute[230518]: 2025-10-02 13:05:09.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:11 np0005466030 nova_compute[230518]: 2025-10-02 13:05:11.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:05:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:11.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:05:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:05:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:11.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:05:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:13.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:05:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:13.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:05:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:14 np0005466030 nova_compute[230518]: 2025-10-02 13:05:14.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:05:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:15.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:05:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:15.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:16 np0005466030 nova_compute[230518]: 2025-10-02 13:05:16.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:16 np0005466030 podman[300733]: 2025-10-02 13:05:16.796013284 +0000 UTC m=+0.049557271 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  2 09:05:16 np0005466030 podman[300732]: 2025-10-02 13:05:16.824076436 +0000 UTC m=+0.078969705 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:05:17 np0005466030 nova_compute[230518]: 2025-10-02 13:05:17.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:17 np0005466030 nova_compute[230518]: 2025-10-02 13:05:17.093 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:17 np0005466030 nova_compute[230518]: 2025-10-02 13:05:17.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:17 np0005466030 nova_compute[230518]: 2025-10-02 13:05:17.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:17 np0005466030 nova_compute[230518]: 2025-10-02 13:05:17.094 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:05:17 np0005466030 nova_compute[230518]: 2025-10-02 13:05:17.094 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:05:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2041434741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:05:17 np0005466030 nova_compute[230518]: 2025-10-02 13:05:17.510 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:05:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:17.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:05:17 np0005466030 nova_compute[230518]: 2025-10-02 13:05:17.595 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:05:17 np0005466030 nova_compute[230518]: 2025-10-02 13:05:17.596 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:05:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:05:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:17.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:05:17 np0005466030 nova_compute[230518]: 2025-10-02 13:05:17.771 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:05:17 np0005466030 nova_compute[230518]: 2025-10-02 13:05:17.772 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4102MB free_disk=20.806453704833984GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:05:17 np0005466030 nova_compute[230518]: 2025-10-02 13:05:17.772 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:17 np0005466030 nova_compute[230518]: 2025-10-02 13:05:17.773 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:17 np0005466030 nova_compute[230518]: 2025-10-02 13:05:17.886 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance ea034622-0a48-4de6-8d68-0f2240b54214 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:05:17 np0005466030 nova_compute[230518]: 2025-10-02 13:05:17.887 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:05:17 np0005466030 nova_compute[230518]: 2025-10-02 13:05:17.887 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:05:17 np0005466030 nova_compute[230518]: 2025-10-02 13:05:17.927 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:05:18 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/62137085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:05:18 np0005466030 nova_compute[230518]: 2025-10-02 13:05:18.349 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:18 np0005466030 nova_compute[230518]: 2025-10-02 13:05:18.355 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:05:18 np0005466030 nova_compute[230518]: 2025-10-02 13:05:18.369 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:05:18 np0005466030 nova_compute[230518]: 2025-10-02 13:05:18.393 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:05:18 np0005466030 nova_compute[230518]: 2025-10-02 13:05:18.393 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:19 np0005466030 nova_compute[230518]: 2025-10-02 13:05:19.147 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:19 np0005466030 nova_compute[230518]: 2025-10-02 13:05:19.148 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:19 np0005466030 nova_compute[230518]: 2025-10-02 13:05:19.179 2 DEBUG nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:05:19 np0005466030 nova_compute[230518]: 2025-10-02 13:05:19.309 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:19 np0005466030 nova_compute[230518]: 2025-10-02 13:05:19.309 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:19 np0005466030 nova_compute[230518]: 2025-10-02 13:05:19.314 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:05:19 np0005466030 nova_compute[230518]: 2025-10-02 13:05:19.314 2 INFO nova.compute.claims [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:05:19 np0005466030 nova_compute[230518]: 2025-10-02 13:05:19.446 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:19.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:19.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:05:19 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/376673178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:05:19 np0005466030 nova_compute[230518]: 2025-10-02 13:05:19.848 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:19 np0005466030 nova_compute[230518]: 2025-10-02 13:05:19.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:19 np0005466030 nova_compute[230518]: 2025-10-02 13:05:19.856 2 DEBUG nova.compute.provider_tree [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:05:19 np0005466030 nova_compute[230518]: 2025-10-02 13:05:19.936 2 DEBUG nova.scheduler.client.report [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:05:19 np0005466030 nova_compute[230518]: 2025-10-02 13:05:19.971 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:19 np0005466030 nova_compute[230518]: 2025-10-02 13:05:19.972 2 DEBUG nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.017 2 DEBUG nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.017 2 DEBUG nova.network.neutron [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.037 2 INFO nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.051 2 DEBUG nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.133 2 DEBUG nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.134 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.135 2 INFO nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Creating image(s)#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.159 2 DEBUG nova.storage.rbd_utils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.185 2 DEBUG nova.storage.rbd_utils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.212 2 DEBUG nova.storage.rbd_utils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.215 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.277 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.278 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.278 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.279 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.304 2 DEBUG nova.storage.rbd_utils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.308 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.629 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.700 2 DEBUG nova.storage.rbd_utils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] resizing rbd image f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.796 2 DEBUG nova.objects.instance [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lazy-loading 'migration_context' on Instance uuid f320bcaa-1dfe-4d91-bd4a-05ed389402a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.817 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.818 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Ensure instance console log exists: /var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.818 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.819 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:20 np0005466030 nova_compute[230518]: 2025-10-02 13:05:20.819 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:21 np0005466030 nova_compute[230518]: 2025-10-02 13:05:21.053 2 DEBUG nova.network.neutron [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Successfully created port: 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:05:21 np0005466030 nova_compute[230518]: 2025-10-02 13:05:21.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:21.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:21.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:21.898 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:05:21 np0005466030 nova_compute[230518]: 2025-10-02 13:05:21.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:21 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:21.900 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:05:22 np0005466030 nova_compute[230518]: 2025-10-02 13:05:22.418 2 DEBUG nova.network.neutron [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Successfully updated port: 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:05:22 np0005466030 nova_compute[230518]: 2025-10-02 13:05:22.448 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "refresh_cache-f320bcaa-1dfe-4d91-bd4a-05ed389402a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:05:22 np0005466030 nova_compute[230518]: 2025-10-02 13:05:22.449 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquired lock "refresh_cache-f320bcaa-1dfe-4d91-bd4a-05ed389402a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:05:22 np0005466030 nova_compute[230518]: 2025-10-02 13:05:22.449 2 DEBUG nova.network.neutron [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:05:22 np0005466030 nova_compute[230518]: 2025-10-02 13:05:22.776 2 DEBUG nova.network.neutron [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:05:23 np0005466030 nova_compute[230518]: 2025-10-02 13:05:23.393 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:23 np0005466030 nova_compute[230518]: 2025-10-02 13:05:23.393 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:23 np0005466030 nova_compute[230518]: 2025-10-02 13:05:23.393 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:23 np0005466030 nova_compute[230518]: 2025-10-02 13:05:23.394 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:05:23 np0005466030 nova_compute[230518]: 2025-10-02 13:05:23.478 2 DEBUG nova.compute.manager [req-477ead2f-c6df-4f99-ac3f-fb7196b95fd2 req-e95e28fe-0f85-49a9-854a-d5e419834898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-changed-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:23 np0005466030 nova_compute[230518]: 2025-10-02 13:05:23.478 2 DEBUG nova.compute.manager [req-477ead2f-c6df-4f99-ac3f-fb7196b95fd2 req-e95e28fe-0f85-49a9-854a-d5e419834898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Refreshing instance network info cache due to event network-changed-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:05:23 np0005466030 nova_compute[230518]: 2025-10-02 13:05:23.478 2 DEBUG oslo_concurrency.lockutils [req-477ead2f-c6df-4f99-ac3f-fb7196b95fd2 req-e95e28fe-0f85-49a9-854a-d5e419834898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f320bcaa-1dfe-4d91-bd4a-05ed389402a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:05:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:05:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:23.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:05:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:23.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.359 2 DEBUG nova.network.neutron [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Updating instance_info_cache with network_info: [{"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.404 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Releasing lock "refresh_cache-f320bcaa-1dfe-4d91-bd4a-05ed389402a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.405 2 DEBUG nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Instance network_info: |[{"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.405 2 DEBUG oslo_concurrency.lockutils [req-477ead2f-c6df-4f99-ac3f-fb7196b95fd2 req-e95e28fe-0f85-49a9-854a-d5e419834898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f320bcaa-1dfe-4d91-bd4a-05ed389402a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.405 2 DEBUG nova.network.neutron [req-477ead2f-c6df-4f99-ac3f-fb7196b95fd2 req-e95e28fe-0f85-49a9-854a-d5e419834898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Refreshing network info cache for port 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.408 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Start _get_guest_xml network_info=[{"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.413 2 WARNING nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.416 2 DEBUG nova.virt.libvirt.host [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.417 2 DEBUG nova.virt.libvirt.host [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.419 2 DEBUG nova.virt.libvirt.host [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.420 2 DEBUG nova.virt.libvirt.host [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.421 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.421 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.421 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.421 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.422 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.422 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.422 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.422 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.422 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.422 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.422 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.423 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.425 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:05:24 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2669593725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.862 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.897 2 DEBUG nova.storage.rbd_utils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:05:24 np0005466030 nova_compute[230518]: 2025-10-02 13:05:24.902 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:05:25 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/734581899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.422 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.424 2 DEBUG nova.virt.libvirt.vif [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:05:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1687487968',display_name='tempest-TestServerMultinode-server-1687487968',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-1687487968',id=187,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='19365f54974d4109ae80bc13ac9ba55a',ramdisk_id='',reservation_id='r-6z94ee26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-2060715482',owner_user_name='tempest-TestServerMultinode-2060715482-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:05:20Z,user_data=None,user_id='7fb7e45069d34870bc5f4fa70bd8c6de',uuid=f320bcaa-1dfe-4d91-bd4a-05ed389402a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.424 2 DEBUG nova.network.os_vif_util [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converting VIF {"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.425 2 DEBUG nova.network.os_vif_util [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3f:1d,bridge_name='br-int',has_traffic_filtering=True,id=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b77d75e-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.426 2 DEBUG nova.objects.instance [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lazy-loading 'pci_devices' on Instance uuid f320bcaa-1dfe-4d91-bd4a-05ed389402a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.439 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:05:25 np0005466030 nova_compute[230518]:  <uuid>f320bcaa-1dfe-4d91-bd4a-05ed389402a7</uuid>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:  <name>instance-000000bb</name>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <nova:name>tempest-TestServerMultinode-server-1687487968</nova:name>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:05:24</nova:creationTime>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:05:25 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:        <nova:user uuid="7fb7e45069d34870bc5f4fa70bd8c6de">tempest-TestServerMultinode-2060715482-project-admin</nova:user>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:        <nova:project uuid="19365f54974d4109ae80bc13ac9ba55a">tempest-TestServerMultinode-2060715482</nova:project>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:        <nova:port uuid="5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3">
Oct  2 09:05:25 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <entry name="serial">f320bcaa-1dfe-4d91-bd4a-05ed389402a7</entry>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <entry name="uuid">f320bcaa-1dfe-4d91-bd4a-05ed389402a7</entry>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk">
Oct  2 09:05:25 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:05:25 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk.config">
Oct  2 09:05:25 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:05:25 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:ee:3f:1d"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <target dev="tap5b77d75e-cf"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7/console.log" append="off"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:05:25 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:05:25 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:05:25 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:05:25 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.440 2 DEBUG nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Preparing to wait for external event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.440 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.440 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.440 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.441 2 DEBUG nova.virt.libvirt.vif [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:05:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1687487968',display_name='tempest-TestServerMultinode-server-1687487968',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-1687487968',id=187,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='19365f54974d4109ae80bc13ac9ba55a',ramdisk_id='',reservation_id='r-6z94ee26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-2060715482',owner_user_name='tempest-TestServerMultinode-2060715482-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:05:20Z,user_data=None,user_id='7fb7e45069d34870bc5f4fa70bd8c6de',uuid=f320bcaa-1dfe-4d91-bd4a-05ed389402a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.441 2 DEBUG nova.network.os_vif_util [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converting VIF {"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.442 2 DEBUG nova.network.os_vif_util [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3f:1d,bridge_name='br-int',has_traffic_filtering=True,id=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b77d75e-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.442 2 DEBUG os_vif [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3f:1d,bridge_name='br-int',has_traffic_filtering=True,id=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b77d75e-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.446 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b77d75e-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.446 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5b77d75e-cf, col_values=(('external_ids', {'iface-id': '5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:3f:1d', 'vm-uuid': 'f320bcaa-1dfe-4d91-bd4a-05ed389402a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:25 np0005466030 NetworkManager[44960]: <info>  [1759410325.4483] manager: (tap5b77d75e-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.455 2 INFO os_vif [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3f:1d,bridge_name='br-int',has_traffic_filtering=True,id=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b77d75e-cf')#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.506 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.506 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.506 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] No VIF found with MAC fa:16:3e:ee:3f:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.507 2 INFO nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Using config drive#033[00m
Oct  2 09:05:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:25.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.528 2 DEBUG nova.storage.rbd_utils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.578 2 DEBUG oslo_concurrency.lockutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.579 2 DEBUG oslo_concurrency.lockutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.579 2 INFO nova.compute.manager [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Shelving#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.602 2 DEBUG nova.virt.libvirt.driver [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 09:05:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:05:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:25.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.815 2 DEBUG nova.network.neutron [req-477ead2f-c6df-4f99-ac3f-fb7196b95fd2 req-e95e28fe-0f85-49a9-854a-d5e419834898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Updated VIF entry in instance network info cache for port 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.816 2 DEBUG nova.network.neutron [req-477ead2f-c6df-4f99-ac3f-fb7196b95fd2 req-e95e28fe-0f85-49a9-854a-d5e419834898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Updating instance_info_cache with network_info: [{"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:05:25 np0005466030 nova_compute[230518]: 2025-10-02 13:05:25.866 2 DEBUG oslo_concurrency.lockutils [req-477ead2f-c6df-4f99-ac3f-fb7196b95fd2 req-e95e28fe-0f85-49a9-854a-d5e419834898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f320bcaa-1dfe-4d91-bd4a-05ed389402a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:05:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:25.961 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:25.961 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:25.962 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:26 np0005466030 nova_compute[230518]: 2025-10-02 13:05:26.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:26 np0005466030 nova_compute[230518]: 2025-10-02 13:05:26.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:26 np0005466030 nova_compute[230518]: 2025-10-02 13:05:26.104 2 INFO nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Creating config drive at /var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7/disk.config#033[00m
Oct  2 09:05:26 np0005466030 nova_compute[230518]: 2025-10-02 13:05:26.109 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0c0m_um2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:26 np0005466030 nova_compute[230518]: 2025-10-02 13:05:26.257 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0c0m_um2" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:26 np0005466030 nova_compute[230518]: 2025-10-02 13:05:26.288 2 DEBUG nova.storage.rbd_utils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:05:26 np0005466030 nova_compute[230518]: 2025-10-02 13:05:26.291 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7/disk.config f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:27 np0005466030 nova_compute[230518]: 2025-10-02 13:05:27.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:27 np0005466030 nova_compute[230518]: 2025-10-02 13:05:27.278 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7/disk.config f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.987s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:27 np0005466030 nova_compute[230518]: 2025-10-02 13:05:27.279 2 INFO nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Deleting local config drive /var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7/disk.config because it was imported into RBD.#033[00m
Oct  2 09:05:27 np0005466030 kernel: tap5b77d75e-cf: entered promiscuous mode
Oct  2 09:05:27 np0005466030 NetworkManager[44960]: <info>  [1759410327.3312] manager: (tap5b77d75e-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/353)
Oct  2 09:05:27 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:27Z|00759|binding|INFO|Claiming lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 for this chassis.
Oct  2 09:05:27 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:27Z|00760|binding|INFO|5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3: Claiming fa:16:3e:ee:3f:1d 10.100.0.12
Oct  2 09:05:27 np0005466030 nova_compute[230518]: 2025-10-02 13:05:27.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.355 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3f:1d 10.100.0.12'], port_security=['fa:16:3e:ee:3f:1d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f320bcaa-1dfe-4d91-bd4a-05ed389402a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-962339a8-ad45-401e-ae58-50cd40858566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19365f54974d4109ae80bc13ac9ba55a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67af92b3-63f6-4f5d-8022-4679fd3c3d0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91847efc-0e01-4780-b433-994cc6662f15, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.356 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 in datapath 962339a8-ad45-401e-ae58-50cd40858566 bound to our chassis#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.358 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 962339a8-ad45-401e-ae58-50cd40858566#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.369 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff420d1-b104-4cc3-84c0-1a56c0b4ed5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.370 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap962339a8-a1 in ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.372 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap962339a8-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.372 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7e06b98b-81d6-4fb6-b625-95ad49660f05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.373 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f04c84-7ae1-458a-8b6d-55b79686de40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005466030 systemd-machined[188247]: New machine qemu-88-instance-000000bb.
Oct  2 09:05:27 np0005466030 systemd-udevd[301164]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.387 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[2afba9c9-b6f6-43b0-91af-94b226404dc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005466030 NetworkManager[44960]: <info>  [1759410327.4004] device (tap5b77d75e-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:05:27 np0005466030 NetworkManager[44960]: <info>  [1759410327.4013] device (tap5b77d75e-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:05:27 np0005466030 systemd[1]: Started Virtual Machine qemu-88-instance-000000bb.
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.412 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d3715af1-4dde-41e3-9fb0-30e1b5b05f0c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:27Z|00761|binding|INFO|Setting lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 ovn-installed in OVS
Oct  2 09:05:27 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:27Z|00762|binding|INFO|Setting lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 up in Southbound
Oct  2 09:05:27 np0005466030 nova_compute[230518]: 2025-10-02 13:05:27.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.440 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2ecab23a-9e6c-406c-8283-d5492c9e1e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005466030 NetworkManager[44960]: <info>  [1759410327.4483] manager: (tap962339a8-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/354)
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.447 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[48440f77-25a5-4279-8430-a4e7a2fe3a3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005466030 podman[301143]: 2025-10-02 13:05:27.456201817 +0000 UTC m=+0.089639946 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:05:27 np0005466030 podman[301144]: 2025-10-02 13:05:27.479397818 +0000 UTC m=+0.112757744 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.482 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c02ac312-485f-428c-9f72-3e1cbf8d53f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.485 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d85d3aff-3ec0-4b97-be0a-8efbfe6884f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005466030 NetworkManager[44960]: <info>  [1759410327.5154] device (tap962339a8-a0): carrier: link connected
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.523 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[35031c23-a080-4ffe-8832-5f4210973b6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:05:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:27.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.539 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a58d44ed-c082-4bb0-bd85-5addb9912f49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap962339a8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:f8:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 819107, 'reachable_time': 22005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301213, 'error': None, 'target': 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.555 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cf247bd0-b207-4955-9823-6f408d0e29d9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3a:f8da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 819107, 'tstamp': 819107}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301214, 'error': None, 'target': 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.575 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a9185e8b-5171-4a1c-a5ec-e548d4301a6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap962339a8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:f8:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 819107, 'reachable_time': 22005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301215, 'error': None, 'target': 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.607 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ee477276-72d5-450b-a99c-bed1a93e4a02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.667 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[42876604-e3b6-419f-a216-ded0057e6e39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.668 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap962339a8-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.669 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.669 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap962339a8-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:27 np0005466030 nova_compute[230518]: 2025-10-02 13:05:27.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:27 np0005466030 NetworkManager[44960]: <info>  [1759410327.6714] manager: (tap962339a8-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/355)
Oct  2 09:05:27 np0005466030 kernel: tap962339a8-a0: entered promiscuous mode
Oct  2 09:05:27 np0005466030 nova_compute[230518]: 2025-10-02 13:05:27.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.674 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap962339a8-a0, col_values=(('external_ids', {'iface-id': '95f6c57c-e568-4ed7-aa6a-02671a012e41'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:27 np0005466030 nova_compute[230518]: 2025-10-02 13:05:27.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:27 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:27Z|00763|binding|INFO|Releasing lport 95f6c57c-e568-4ed7-aa6a-02671a012e41 from this chassis (sb_readonly=0)
Oct  2 09:05:27 np0005466030 nova_compute[230518]: 2025-10-02 13:05:27.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.691 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/962339a8-ad45-401e-ae58-50cd40858566.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/962339a8-ad45-401e-ae58-50cd40858566.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.692 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fd0a7a76-b90b-4778-908f-194546dfc566]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.693 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-962339a8-ad45-401e-ae58-50cd40858566
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/962339a8-ad45-401e-ae58-50cd40858566.pid.haproxy
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 962339a8-ad45-401e-ae58-50cd40858566
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.695 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'env', 'PROCESS_TAG=haproxy-962339a8-ad45-401e-ae58-50cd40858566', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/962339a8-ad45-401e-ae58-50cd40858566.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:05:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:27.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:27 np0005466030 nova_compute[230518]: 2025-10-02 13:05:27.791 2 DEBUG nova.compute.manager [req-9a28b7c1-5cda-42cc-b24d-85e2b03bd204 req-77787c38-613e-45cc-ae50-da630f690b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:27 np0005466030 nova_compute[230518]: 2025-10-02 13:05:27.791 2 DEBUG oslo_concurrency.lockutils [req-9a28b7c1-5cda-42cc-b24d-85e2b03bd204 req-77787c38-613e-45cc-ae50-da630f690b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:27 np0005466030 nova_compute[230518]: 2025-10-02 13:05:27.792 2 DEBUG oslo_concurrency.lockutils [req-9a28b7c1-5cda-42cc-b24d-85e2b03bd204 req-77787c38-613e-45cc-ae50-da630f690b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:27 np0005466030 nova_compute[230518]: 2025-10-02 13:05:27.792 2 DEBUG oslo_concurrency.lockutils [req-9a28b7c1-5cda-42cc-b24d-85e2b03bd204 req-77787c38-613e-45cc-ae50-da630f690b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:27 np0005466030 nova_compute[230518]: 2025-10-02 13:05:27.792 2 DEBUG nova.compute.manager [req-9a28b7c1-5cda-42cc-b24d-85e2b03bd204 req-77787c38-613e-45cc-ae50-da630f690b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Processing event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:05:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.902 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:28 np0005466030 kernel: tap55d951c1-1c (unregistering): left promiscuous mode
Oct  2 09:05:28 np0005466030 NetworkManager[44960]: <info>  [1759410328.0300] device (tap55d951c1-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:28 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:28Z|00764|binding|INFO|Releasing lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 from this chassis (sb_readonly=0)
Oct  2 09:05:28 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:28Z|00765|binding|INFO|Setting lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 down in Southbound
Oct  2 09:05:28 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:28Z|00766|binding|INFO|Removing iface tap55d951c1-1c ovn-installed in OVS
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.049 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:64:21 10.100.0.3'], port_security=['fa:16:3e:e8:64:21 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ea034622-0a48-4de6-8d68-0f2240b54214', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52dd3c4419794d0fbecd536c5088c60f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'df14d61b-9762-4791-8375-7e8d13f38de1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26295213-1e12-4cdb-92a9-b65812bf362e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=55d951c1-1ce9-4d4a-979c-9be9aef7e283) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:28 np0005466030 podman[301282]: 2025-10-02 13:05:28.07100518 +0000 UTC m=+0.061659627 container create ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 09:05:28 np0005466030 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000b5.scope: Deactivated successfully.
Oct  2 09:05:28 np0005466030 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000b5.scope: Consumed 16.912s CPU time.
Oct  2 09:05:28 np0005466030 systemd[1]: Started libpod-conmon-ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2.scope.
Oct  2 09:05:28 np0005466030 systemd-machined[188247]: Machine qemu-86-instance-000000b5 terminated.
Oct  2 09:05:28 np0005466030 podman[301282]: 2025-10-02 13:05:28.041562656 +0000 UTC m=+0.032217133 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:05:28 np0005466030 systemd[1]: Started libcrun container.
Oct  2 09:05:28 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c7857d0ba775bc4a2874c70f85893a680df6dee09c2866dc2675cedacec2fde/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:05:28 np0005466030 podman[301282]: 2025-10-02 13:05:28.151419439 +0000 UTC m=+0.142073886 container init ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct  2 09:05:28 np0005466030 podman[301282]: 2025-10-02 13:05:28.157147456 +0000 UTC m=+0.147801903 container start ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 09:05:28 np0005466030 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[301303]: [NOTICE]   (301310) : New worker (301313) forked
Oct  2 09:05:28 np0005466030 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[301303]: [NOTICE]   (301310) : Loading success.
Oct  2 09:05:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.222 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 55d951c1-1ce9-4d4a-979c-9be9aef7e283 in datapath b07d0c6a-5988-4afb-b4ba-d4048578b224 unbound from our chassis#033[00m
Oct  2 09:05:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.223 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b07d0c6a-5988-4afb-b4ba-d4048578b224, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:05:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.224 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e076f151-9085-4ba2-a9cd-d241c9d1df6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.225 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 namespace which is not needed anymore#033[00m
Oct  2 09:05:28 np0005466030 NetworkManager[44960]: <info>  [1759410328.2704] manager: (tap55d951c1-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/356)
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:28 np0005466030 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[299883]: [NOTICE]   (299887) : haproxy version is 2.8.14-c23fe91
Oct  2 09:05:28 np0005466030 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[299883]: [NOTICE]   (299887) : path to executable is /usr/sbin/haproxy
Oct  2 09:05:28 np0005466030 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[299883]: [WARNING]  (299887) : Exiting Master process...
Oct  2 09:05:28 np0005466030 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[299883]: [WARNING]  (299887) : Exiting Master process...
Oct  2 09:05:28 np0005466030 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[299883]: [ALERT]    (299887) : Current worker (299889) exited with code 143 (Terminated)
Oct  2 09:05:28 np0005466030 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[299883]: [WARNING]  (299887) : All workers exited. Exiting... (0)
Oct  2 09:05:28 np0005466030 systemd[1]: libpod-90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2.scope: Deactivated successfully.
Oct  2 09:05:28 np0005466030 podman[301348]: 2025-10-02 13:05:28.362315781 +0000 UTC m=+0.049147587 container died 90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:05:28 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2-userdata-shm.mount: Deactivated successfully.
Oct  2 09:05:28 np0005466030 systemd[1]: var-lib-containers-storage-overlay-afce76e5b3e048a73313ef5d27cfa735799b4e55d4b12602e339b371cc625d3b-merged.mount: Deactivated successfully.
Oct  2 09:05:28 np0005466030 podman[301348]: 2025-10-02 13:05:28.402465539 +0000 UTC m=+0.089297345 container cleanup 90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:05:28 np0005466030 systemd[1]: libpod-conmon-90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2.scope: Deactivated successfully.
Oct  2 09:05:28 np0005466030 podman[301379]: 2025-10-02 13:05:28.467215471 +0000 UTC m=+0.044348029 container remove 90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  2 09:05:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.472 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ecaa2383-38c3-40ca-8794-c1859c85a37e]: (4, ('Thu Oct  2 01:05:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 (90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2)\n90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2\nThu Oct  2 01:05:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 (90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2)\n90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.474 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[205612de-adab-40b4-8519-32b631c520d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.477 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb07d0c6a-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:28 np0005466030 kernel: tapb07d0c6a-50: left promiscuous mode
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.509 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f9bca379-6cc6-4497-8b61-91a9fdeb5bc9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.519 2 DEBUG nova.compute.manager [req-072acbff-a8b9-4f70-8af1-370a6b498e2c req-eb19b526-4b62-4d6f-a65e-8e56871643c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-vif-unplugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.519 2 DEBUG oslo_concurrency.lockutils [req-072acbff-a8b9-4f70-8af1-370a6b498e2c req-eb19b526-4b62-4d6f-a65e-8e56871643c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.519 2 DEBUG oslo_concurrency.lockutils [req-072acbff-a8b9-4f70-8af1-370a6b498e2c req-eb19b526-4b62-4d6f-a65e-8e56871643c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.519 2 DEBUG oslo_concurrency.lockutils [req-072acbff-a8b9-4f70-8af1-370a6b498e2c req-eb19b526-4b62-4d6f-a65e-8e56871643c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.520 2 DEBUG nova.compute.manager [req-072acbff-a8b9-4f70-8af1-370a6b498e2c req-eb19b526-4b62-4d6f-a65e-8e56871643c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] No waiting events found dispatching network-vif-unplugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.520 2 WARNING nova.compute.manager [req-072acbff-a8b9-4f70-8af1-370a6b498e2c req-eb19b526-4b62-4d6f-a65e-8e56871643c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received unexpected event network-vif-unplugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 for instance with vm_state active and task_state shelving.#033[00m
Oct  2 09:05:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.531 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[868a4654-74f4-4175-b5b0-262ec228dd80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.532 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[174d6d45-4ad4-4fee-824c-66d62ec0a0d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.552 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6d53ccad-998c-4e12-abc7-901f85838963]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810516, 'reachable_time': 17181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301397, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:28 np0005466030 systemd[1]: run-netns-ovnmeta\x2db07d0c6a\x2d5988\x2d4afb\x2db4ba\x2dd4048578b224.mount: Deactivated successfully.
Oct  2 09:05:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.555 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:05:28 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.555 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9a0974-109c-4e8f-9979-9a142470872c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.582 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410328.5817065, f320bcaa-1dfe-4d91-bd4a-05ed389402a7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.582 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] VM Started (Lifecycle Event)#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.587 2 DEBUG nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.591 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.595 2 INFO nova.virt.libvirt.driver [-] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Instance spawned successfully.#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.596 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.618 2 INFO nova.virt.libvirt.driver [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.627 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.628 2 INFO nova.virt.libvirt.driver [-] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance destroyed successfully.#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.629 2 DEBUG nova.objects.instance [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'numa_topology' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.638 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.643 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.644 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.644 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.645 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.646 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.647 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.729 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.730 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410328.5818174, f320bcaa-1dfe-4d91-bd4a-05ed389402a7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.730 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.766 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.771 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410328.5897799, f320bcaa-1dfe-4d91-bd4a-05ed389402a7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.772 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.817 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.825 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.851 2 INFO nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Took 8.72 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.851 2 DEBUG nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.900 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:05:28 np0005466030 nova_compute[230518]: 2025-10-02 13:05:28.969 2 INFO nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Took 9.69 seconds to build instance.#033[00m
Oct  2 09:05:29 np0005466030 nova_compute[230518]: 2025-10-02 13:05:29.004 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:29 np0005466030 nova_compute[230518]: 2025-10-02 13:05:29.422 2 INFO nova.virt.libvirt.driver [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Beginning cold snapshot process#033[00m
Oct  2 09:05:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:05:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:29.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:05:29 np0005466030 nova_compute[230518]: 2025-10-02 13:05:29.626 2 DEBUG nova.virt.libvirt.imagebackend [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 09:05:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:29.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:29 np0005466030 nova_compute[230518]: 2025-10-02 13:05:29.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:29 np0005466030 nova_compute[230518]: 2025-10-02 13:05:29.928 2 DEBUG nova.compute.manager [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:29 np0005466030 nova_compute[230518]: 2025-10-02 13:05:29.929 2 DEBUG oslo_concurrency.lockutils [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:29 np0005466030 nova_compute[230518]: 2025-10-02 13:05:29.929 2 DEBUG oslo_concurrency.lockutils [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:29 np0005466030 nova_compute[230518]: 2025-10-02 13:05:29.930 2 DEBUG oslo_concurrency.lockutils [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:29 np0005466030 nova_compute[230518]: 2025-10-02 13:05:29.930 2 DEBUG nova.compute.manager [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] No waiting events found dispatching network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:05:29 np0005466030 nova_compute[230518]: 2025-10-02 13:05:29.930 2 WARNING nova.compute.manager [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received unexpected event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:05:30 np0005466030 nova_compute[230518]: 2025-10-02 13:05:30.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:05:30 np0005466030 nova_compute[230518]: 2025-10-02 13:05:30.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:05:30 np0005466030 nova_compute[230518]: 2025-10-02 13:05:30.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:05:30 np0005466030 nova_compute[230518]: 2025-10-02 13:05:30.084 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:05:30 np0005466030 nova_compute[230518]: 2025-10-02 13:05:30.085 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:05:30 np0005466030 nova_compute[230518]: 2025-10-02 13:05:30.086 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:05:30 np0005466030 nova_compute[230518]: 2025-10-02 13:05:30.087 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:30 np0005466030 nova_compute[230518]: 2025-10-02 13:05:30.147 2 DEBUG nova.storage.rbd_utils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] creating snapshot(6e91a9f1aeaf49a7b3f111e2432ffa25) on rbd image(ea034622-0a48-4de6-8d68-0f2240b54214_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 09:05:30 np0005466030 nova_compute[230518]: 2025-10-02 13:05:30.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:30 np0005466030 nova_compute[230518]: 2025-10-02 13:05:30.721 2 DEBUG nova.compute.manager [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:30 np0005466030 nova_compute[230518]: 2025-10-02 13:05:30.721 2 DEBUG oslo_concurrency.lockutils [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:30 np0005466030 nova_compute[230518]: 2025-10-02 13:05:30.721 2 DEBUG oslo_concurrency.lockutils [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:30 np0005466030 nova_compute[230518]: 2025-10-02 13:05:30.722 2 DEBUG oslo_concurrency.lockutils [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:30 np0005466030 nova_compute[230518]: 2025-10-02 13:05:30.722 2 DEBUG nova.compute.manager [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] No waiting events found dispatching network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:05:30 np0005466030 nova_compute[230518]: 2025-10-02 13:05:30.722 2 WARNING nova.compute.manager [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received unexpected event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct  2 09:05:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e354 e354: 3 total, 3 up, 3 in
Oct  2 09:05:31 np0005466030 nova_compute[230518]: 2025-10-02 13:05:31.324 2 DEBUG nova.storage.rbd_utils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] cloning vms/ea034622-0a48-4de6-8d68-0f2240b54214_disk@6e91a9f1aeaf49a7b3f111e2432ffa25 to images/3560df73-c585-4179-87ca-fb0ca65743ee clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 09:05:31 np0005466030 nova_compute[230518]: 2025-10-02 13:05:31.513 2 DEBUG nova.storage.rbd_utils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] flattening images/3560df73-c585-4179-87ca-fb0ca65743ee flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 09:05:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:31.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:31.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:32 np0005466030 nova_compute[230518]: 2025-10-02 13:05:32.160 2 DEBUG nova.storage.rbd_utils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] removing snapshot(6e91a9f1aeaf49a7b3f111e2432ffa25) on rbd image(ea034622-0a48-4de6-8d68-0f2240b54214_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 09:05:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e355 e355: 3 total, 3 up, 3 in
Oct  2 09:05:32 np0005466030 nova_compute[230518]: 2025-10-02 13:05:32.364 2 DEBUG nova.storage.rbd_utils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] creating snapshot(snap) on rbd image(3560df73-c585-4179-87ca-fb0ca65743ee) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 09:05:32 np0005466030 nova_compute[230518]: 2025-10-02 13:05:32.800 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updating instance_info_cache with network_info: [{"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:05:32 np0005466030 nova_compute[230518]: 2025-10-02 13:05:32.835 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:05:32 np0005466030 nova_compute[230518]: 2025-10-02 13:05:32.835 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:05:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e356 e356: 3 total, 3 up, 3 in
Oct  2 09:05:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:33.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:33.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:34 np0005466030 nova_compute[230518]: 2025-10-02 13:05:34.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:35 np0005466030 nova_compute[230518]: 2025-10-02 13:05:35.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:05:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:35.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:05:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:05:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:35.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:05:35 np0005466030 nova_compute[230518]: 2025-10-02 13:05:35.923 2 INFO nova.virt.libvirt.driver [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Snapshot image upload complete#033[00m
Oct  2 09:05:35 np0005466030 nova_compute[230518]: 2025-10-02 13:05:35.924 2 DEBUG nova.compute.manager [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:36 np0005466030 nova_compute[230518]: 2025-10-02 13:05:36.002 2 INFO nova.compute.manager [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Shelve offloading#033[00m
Oct  2 09:05:36 np0005466030 nova_compute[230518]: 2025-10-02 13:05:36.009 2 INFO nova.virt.libvirt.driver [-] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance destroyed successfully.#033[00m
Oct  2 09:05:36 np0005466030 nova_compute[230518]: 2025-10-02 13:05:36.011 2 DEBUG nova.compute.manager [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:36 np0005466030 nova_compute[230518]: 2025-10-02 13:05:36.014 2 DEBUG oslo_concurrency.lockutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:05:36 np0005466030 nova_compute[230518]: 2025-10-02 13:05:36.014 2 DEBUG oslo_concurrency.lockutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquired lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:05:36 np0005466030 nova_compute[230518]: 2025-10-02 13:05:36.014 2 DEBUG nova.network.neutron [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:05:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:37.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:37.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:39 np0005466030 nova_compute[230518]: 2025-10-02 13:05:39.258 2 DEBUG nova.network.neutron [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updating instance_info_cache with network_info: [{"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:05:39 np0005466030 nova_compute[230518]: 2025-10-02 13:05:39.322 2 DEBUG oslo_concurrency.lockutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Releasing lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:05:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e357 e357: 3 total, 3 up, 3 in
Oct  2 09:05:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:39.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:05:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 58K writes, 229K keys, 58K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.05 MB/s#012Cumulative WAL: 58K writes, 21K syncs, 2.72 writes per sync, written: 0.22 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 9446 writes, 37K keys, 9446 commit groups, 1.0 writes per commit group, ingest: 38.79 MB, 0.06 MB/s#012Interval WAL: 9446 writes, 3659 syncs, 2.58 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:05:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:39.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:39 np0005466030 nova_compute[230518]: 2025-10-02 13:05:39.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:40 np0005466030 nova_compute[230518]: 2025-10-02 13:05:40.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:41.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:41.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:41 np0005466030 nova_compute[230518]: 2025-10-02 13:05:41.801 2 INFO nova.virt.libvirt.driver [-] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance destroyed successfully.#033[00m
Oct  2 09:05:41 np0005466030 nova_compute[230518]: 2025-10-02 13:05:41.801 2 DEBUG nova.objects.instance [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'resources' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:41 np0005466030 nova_compute[230518]: 2025-10-02 13:05:41.822 2 DEBUG nova.virt.libvirt.vif [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:03:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-881712342',display_name='tempest-ServersNegativeTestJSON-server-881712342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-881712342',id=181,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:04:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='52dd3c4419794d0fbecd536c5088c60f',ramdisk_id='',reservation_id='r-3dfuwrrh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1205930452',owner_user_name='tempest-ServersNegativeTestJSON-1205930452-project-member',shelved_at='2025-10-02T13:05:35.924546',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='3560df73-c585-4179-87ca-fb0ca65743ee'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:05:29Z,user_data=None,user_id='5206d24fd75a48758994a57e7fd259f2',uuid=ea034622-0a48-4de6-8d68-0f2240b54214,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:05:41 np0005466030 nova_compute[230518]: 2025-10-02 13:05:41.822 2 DEBUG nova.network.os_vif_util [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converting VIF {"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:05:41 np0005466030 nova_compute[230518]: 2025-10-02 13:05:41.823 2 DEBUG nova.network.os_vif_util [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:05:41 np0005466030 nova_compute[230518]: 2025-10-02 13:05:41.823 2 DEBUG os_vif [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:05:41 np0005466030 nova_compute[230518]: 2025-10-02 13:05:41.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:41 np0005466030 nova_compute[230518]: 2025-10-02 13:05:41.825 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55d951c1-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:41 np0005466030 nova_compute[230518]: 2025-10-02 13:05:41.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:41 np0005466030 nova_compute[230518]: 2025-10-02 13:05:41.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:05:41 np0005466030 nova_compute[230518]: 2025-10-02 13:05:41.831 2 INFO os_vif [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c')#033[00m
Oct  2 09:05:41 np0005466030 nova_compute[230518]: 2025-10-02 13:05:41.997 2 DEBUG nova.compute.manager [req-26f90933-a423-4dbe-9f24-de501a2cb1a7 req-300c737a-a5cb-4e25-b59b-54a3c2a15934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-changed-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:41 np0005466030 nova_compute[230518]: 2025-10-02 13:05:41.997 2 DEBUG nova.compute.manager [req-26f90933-a423-4dbe-9f24-de501a2cb1a7 req-300c737a-a5cb-4e25-b59b-54a3c2a15934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Refreshing instance network info cache due to event network-changed-55d951c1-1ce9-4d4a-979c-9be9aef7e283. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:05:41 np0005466030 nova_compute[230518]: 2025-10-02 13:05:41.997 2 DEBUG oslo_concurrency.lockutils [req-26f90933-a423-4dbe-9f24-de501a2cb1a7 req-300c737a-a5cb-4e25-b59b-54a3c2a15934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:05:41 np0005466030 nova_compute[230518]: 2025-10-02 13:05:41.998 2 DEBUG oslo_concurrency.lockutils [req-26f90933-a423-4dbe-9f24-de501a2cb1a7 req-300c737a-a5cb-4e25-b59b-54a3c2a15934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:05:41 np0005466030 nova_compute[230518]: 2025-10-02 13:05:41.998 2 DEBUG nova.network.neutron [req-26f90933-a423-4dbe-9f24-de501a2cb1a7 req-300c737a-a5cb-4e25-b59b-54a3c2a15934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Refreshing network info cache for port 55d951c1-1ce9-4d4a-979c-9be9aef7e283 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:05:42 np0005466030 nova_compute[230518]: 2025-10-02 13:05:42.929 2 INFO nova.virt.libvirt.driver [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Deleting instance files /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214_del#033[00m
Oct  2 09:05:42 np0005466030 nova_compute[230518]: 2025-10-02 13:05:42.930 2 INFO nova.virt.libvirt.driver [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Deletion of /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214_del complete#033[00m
Oct  2 09:05:43 np0005466030 nova_compute[230518]: 2025-10-02 13:05:43.093 2 INFO nova.scheduler.client.report [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Deleted allocations for instance ea034622-0a48-4de6-8d68-0f2240b54214#033[00m
Oct  2 09:05:43 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:43Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ee:3f:1d 10.100.0.12
Oct  2 09:05:43 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:43Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ee:3f:1d 10.100.0.12
Oct  2 09:05:43 np0005466030 nova_compute[230518]: 2025-10-02 13:05:43.166 2 DEBUG oslo_concurrency.lockutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:43 np0005466030 nova_compute[230518]: 2025-10-02 13:05:43.166 2 DEBUG oslo_concurrency.lockutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:43 np0005466030 nova_compute[230518]: 2025-10-02 13:05:43.288 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410328.2867503, ea034622-0a48-4de6-8d68-0f2240b54214 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:05:43 np0005466030 nova_compute[230518]: 2025-10-02 13:05:43.288 2 INFO nova.compute.manager [-] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:05:43 np0005466030 nova_compute[230518]: 2025-10-02 13:05:43.315 2 DEBUG oslo_concurrency.processutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:05:43 np0005466030 nova_compute[230518]: 2025-10-02 13:05:43.353 2 DEBUG nova.compute.manager [None req-ede792f0-4128-4348-92e1-d3625ea500f8 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:05:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:05:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:43.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:05:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:05:43 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4083777070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:05:43 np0005466030 nova_compute[230518]: 2025-10-02 13:05:43.761 2 DEBUG oslo_concurrency.processutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:05:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:43.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:43 np0005466030 nova_compute[230518]: 2025-10-02 13:05:43.768 2 DEBUG nova.compute.provider_tree [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:05:43 np0005466030 nova_compute[230518]: 2025-10-02 13:05:43.810 2 DEBUG nova.scheduler.client.report [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:05:43 np0005466030 nova_compute[230518]: 2025-10-02 13:05:43.854 2 DEBUG oslo_concurrency.lockutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:43 np0005466030 nova_compute[230518]: 2025-10-02 13:05:43.935 2 DEBUG oslo_concurrency.lockutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 18.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:44 np0005466030 nova_compute[230518]: 2025-10-02 13:05:44.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:45.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:45 np0005466030 nova_compute[230518]: 2025-10-02 13:05:45.632 2 DEBUG nova.network.neutron [req-26f90933-a423-4dbe-9f24-de501a2cb1a7 req-300c737a-a5cb-4e25-b59b-54a3c2a15934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updated VIF entry in instance network info cache for port 55d951c1-1ce9-4d4a-979c-9be9aef7e283. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:05:45 np0005466030 nova_compute[230518]: 2025-10-02 13:05:45.632 2 DEBUG nova.network.neutron [req-26f90933-a423-4dbe-9f24-de501a2cb1a7 req-300c737a-a5cb-4e25-b59b-54a3c2a15934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updating instance_info_cache with network_info: [{"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": null, "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap55d951c1-1c", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:05:45 np0005466030 nova_compute[230518]: 2025-10-02 13:05:45.684 2 DEBUG oslo_concurrency.lockutils [req-26f90933-a423-4dbe-9f24-de501a2cb1a7 req-300c737a-a5cb-4e25-b59b-54a3c2a15934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:05:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:45.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:46 np0005466030 nova_compute[230518]: 2025-10-02 13:05:46.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:47.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:05:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:47.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:05:47 np0005466030 podman[301585]: 2025-10-02 13:05:47.809926221 +0000 UTC m=+0.059202701 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:05:47 np0005466030 podman[301584]: 2025-10-02 13:05:47.836626471 +0000 UTC m=+0.088493442 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 09:05:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:49.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:49.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:49 np0005466030 nova_compute[230518]: 2025-10-02 13:05:49.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:51.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:51.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:51 np0005466030 nova_compute[230518]: 2025-10-02 13:05:51.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.664 2 DEBUG oslo_concurrency.lockutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.664 2 DEBUG oslo_concurrency.lockutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.665 2 DEBUG oslo_concurrency.lockutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.665 2 DEBUG oslo_concurrency.lockutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.665 2 DEBUG oslo_concurrency.lockutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.666 2 INFO nova.compute.manager [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Terminating instance#033[00m
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.667 2 DEBUG nova.compute.manager [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:05:52 np0005466030 kernel: tap5b77d75e-cf (unregistering): left promiscuous mode
Oct  2 09:05:52 np0005466030 NetworkManager[44960]: <info>  [1759410352.7441] device (tap5b77d75e-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:05:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:52Z|00767|binding|INFO|Releasing lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 from this chassis (sb_readonly=0)
Oct  2 09:05:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:52Z|00768|binding|INFO|Setting lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 down in Southbound
Oct  2 09:05:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:52Z|00769|binding|INFO|Removing iface tap5b77d75e-cf ovn-installed in OVS
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:52.759 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3f:1d 10.100.0.12'], port_security=['fa:16:3e:ee:3f:1d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f320bcaa-1dfe-4d91-bd4a-05ed389402a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-962339a8-ad45-401e-ae58-50cd40858566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19365f54974d4109ae80bc13ac9ba55a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67af92b3-63f6-4f5d-8022-4679fd3c3d0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91847efc-0e01-4780-b433-994cc6662f15, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:05:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:52.760 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 in datapath 962339a8-ad45-401e-ae58-50cd40858566 unbound from our chassis#033[00m
Oct  2 09:05:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:52.761 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 962339a8-ad45-401e-ae58-50cd40858566, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:05:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:52.762 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d5a1a1-fd1f-4231-b1ee-6de233cc800a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:52.763 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 namespace which is not needed anymore#033[00m
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:52 np0005466030 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000bb.scope: Deactivated successfully.
Oct  2 09:05:52 np0005466030 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000bb.scope: Consumed 13.718s CPU time.
Oct  2 09:05:52 np0005466030 systemd-machined[188247]: Machine qemu-88-instance-000000bb terminated.
Oct  2 09:05:52 np0005466030 kernel: tap5b77d75e-cf: entered promiscuous mode
Oct  2 09:05:52 np0005466030 systemd-udevd[301634]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:05:52 np0005466030 NetworkManager[44960]: <info>  [1759410352.8846] manager: (tap5b77d75e-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/357)
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:52Z|00770|binding|INFO|Claiming lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 for this chassis.
Oct  2 09:05:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:52Z|00771|binding|INFO|5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3: Claiming fa:16:3e:ee:3f:1d 10.100.0.12
Oct  2 09:05:52 np0005466030 kernel: tap5b77d75e-cf (unregistering): left promiscuous mode
Oct  2 09:05:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:52.899 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3f:1d 10.100.0.12'], port_security=['fa:16:3e:ee:3f:1d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f320bcaa-1dfe-4d91-bd4a-05ed389402a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-962339a8-ad45-401e-ae58-50cd40858566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19365f54974d4109ae80bc13ac9ba55a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67af92b3-63f6-4f5d-8022-4679fd3c3d0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91847efc-0e01-4780-b433-994cc6662f15, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:05:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:52Z|00772|binding|INFO|Setting lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 ovn-installed in OVS
Oct  2 09:05:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:52Z|00773|binding|INFO|Setting lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 up in Southbound
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:52Z|00774|binding|INFO|Releasing lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 from this chassis (sb_readonly=1)
Oct  2 09:05:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:52Z|00775|binding|INFO|Removing iface tap5b77d75e-cf ovn-installed in OVS
Oct  2 09:05:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:52Z|00776|if_status|INFO|Dropped 4 log messages in last 1464 seconds (most recently, 1464 seconds ago) due to excessive rate
Oct  2 09:05:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:52Z|00777|if_status|INFO|Not setting lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 down as sb is readonly
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:52Z|00778|binding|INFO|Releasing lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 from this chassis (sb_readonly=1)
Oct  2 09:05:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:05:52Z|00779|binding|INFO|Setting lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 down in Southbound
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.915 2 INFO nova.virt.libvirt.driver [-] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Instance destroyed successfully.#033[00m
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.916 2 DEBUG nova.objects.instance [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lazy-loading 'resources' on Instance uuid f320bcaa-1dfe-4d91-bd4a-05ed389402a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:52.925 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3f:1d 10.100.0.12'], port_security=['fa:16:3e:ee:3f:1d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f320bcaa-1dfe-4d91-bd4a-05ed389402a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-962339a8-ad45-401e-ae58-50cd40858566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19365f54974d4109ae80bc13ac9ba55a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67af92b3-63f6-4f5d-8022-4679fd3c3d0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91847efc-0e01-4780-b433-994cc6662f15, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.939 2 DEBUG nova.virt.libvirt.vif [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:05:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1687487968',display_name='tempest-TestServerMultinode-server-1687487968',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-1687487968',id=187,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:05:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='19365f54974d4109ae80bc13ac9ba55a',ramdisk_id='',reservation_id='r-6z94ee26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-2060715482',owner_user_name='tempest-TestServerMultinode-2060715482-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:05:28Z,user_data=None,user_id='7fb7e45069d34870bc5f4fa70bd8c6de',uuid=f320bcaa-1dfe-4d91-bd4a-05ed389402a7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.940 2 DEBUG nova.network.os_vif_util [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converting VIF {"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:05:52 np0005466030 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[301303]: [NOTICE]   (301310) : haproxy version is 2.8.14-c23fe91
Oct  2 09:05:52 np0005466030 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[301303]: [NOTICE]   (301310) : path to executable is /usr/sbin/haproxy
Oct  2 09:05:52 np0005466030 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[301303]: [WARNING]  (301310) : Exiting Master process...
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.940 2 DEBUG nova.network.os_vif_util [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3f:1d,bridge_name='br-int',has_traffic_filtering=True,id=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b77d75e-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.941 2 DEBUG os_vif [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3f:1d,bridge_name='br-int',has_traffic_filtering=True,id=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b77d75e-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:05:52 np0005466030 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[301303]: [ALERT]    (301310) : Current worker (301313) exited with code 143 (Terminated)
Oct  2 09:05:52 np0005466030 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[301303]: [WARNING]  (301310) : All workers exited. Exiting... (0)
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:52 np0005466030 systemd[1]: libpod-ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2.scope: Deactivated successfully.
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.943 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b77d75e-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:52 np0005466030 nova_compute[230518]: 2025-10-02 13:05:52.949 2 INFO os_vif [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3f:1d,bridge_name='br-int',has_traffic_filtering=True,id=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b77d75e-cf')#033[00m
Oct  2 09:05:52 np0005466030 podman[301653]: 2025-10-02 13:05:52.951369182 +0000 UTC m=+0.099456161 container died ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:05:53 np0005466030 nova_compute[230518]: 2025-10-02 13:05:53.206 2 DEBUG nova.compute.manager [req-2a4f4d1c-dace-45a1-b4df-309c016d659b req-54bb6e2c-e8af-4cbe-ac2e-8a950cc18a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-unplugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:53 np0005466030 nova_compute[230518]: 2025-10-02 13:05:53.207 2 DEBUG oslo_concurrency.lockutils [req-2a4f4d1c-dace-45a1-b4df-309c016d659b req-54bb6e2c-e8af-4cbe-ac2e-8a950cc18a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:53 np0005466030 nova_compute[230518]: 2025-10-02 13:05:53.208 2 DEBUG oslo_concurrency.lockutils [req-2a4f4d1c-dace-45a1-b4df-309c016d659b req-54bb6e2c-e8af-4cbe-ac2e-8a950cc18a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:53 np0005466030 nova_compute[230518]: 2025-10-02 13:05:53.209 2 DEBUG oslo_concurrency.lockutils [req-2a4f4d1c-dace-45a1-b4df-309c016d659b req-54bb6e2c-e8af-4cbe-ac2e-8a950cc18a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:53 np0005466030 nova_compute[230518]: 2025-10-02 13:05:53.210 2 DEBUG nova.compute.manager [req-2a4f4d1c-dace-45a1-b4df-309c016d659b req-54bb6e2c-e8af-4cbe-ac2e-8a950cc18a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] No waiting events found dispatching network-vif-unplugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:05:53 np0005466030 nova_compute[230518]: 2025-10-02 13:05:53.211 2 DEBUG nova.compute.manager [req-2a4f4d1c-dace-45a1-b4df-309c016d659b req-54bb6e2c-e8af-4cbe-ac2e-8a950cc18a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-unplugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:05:53 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2-userdata-shm.mount: Deactivated successfully.
Oct  2 09:05:53 np0005466030 systemd[1]: var-lib-containers-storage-overlay-1c7857d0ba775bc4a2874c70f85893a680df6dee09c2866dc2675cedacec2fde-merged.mount: Deactivated successfully.
Oct  2 09:05:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:53.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:53 np0005466030 podman[301653]: 2025-10-02 13:05:53.737925741 +0000 UTC m=+0.886012740 container cleanup ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:05:53 np0005466030 systemd[1]: libpod-conmon-ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2.scope: Deactivated successfully.
Oct  2 09:05:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:05:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:53.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:05:54 np0005466030 podman[301706]: 2025-10-02 13:05:54.289440126 +0000 UTC m=+0.519280455 container remove ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:05:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.295 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6c479f57-9960-45e0-98e7-0b93b925a654]: (4, ('Thu Oct  2 01:05:52 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 (ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2)\nae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2\nThu Oct  2 01:05:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 (ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2)\nae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.297 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0213bb-d1a7-420e-ad60-26bdb90681f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.298 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap962339a8-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:05:54 np0005466030 nova_compute[230518]: 2025-10-02 13:05:54.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:54 np0005466030 kernel: tap962339a8-a0: left promiscuous mode
Oct  2 09:05:54 np0005466030 nova_compute[230518]: 2025-10-02 13:05:54.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.323 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2808f2-33ce-42cc-a360-3205ea90da9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.347 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[49c9eb82-b444-4137-b4d8-873d2560c43a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.349 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6c0b4fd0-ce19-48eb-b0c6-da5fca0fd1b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.377 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[402a32c3-0027-4c79-a365-6b911ffbc431]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 819099, 'reachable_time': 19728, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301722, 'error': None, 'target': 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:54 np0005466030 systemd[1]: run-netns-ovnmeta\x2d962339a8\x2dad45\x2d401e\x2dae58\x2d50cd40858566.mount: Deactivated successfully.
Oct  2 09:05:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.380 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:05:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.380 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[bab1575c-c8b8-403d-a7ac-3449517dcc0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.384 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 in datapath 962339a8-ad45-401e-ae58-50cd40858566 unbound from our chassis#033[00m
Oct  2 09:05:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.385 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 962339a8-ad45-401e-ae58-50cd40858566, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:05:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.386 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[80525358-b6b0-49bd-b888-9370cd93b678]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.387 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 in datapath 962339a8-ad45-401e-ae58-50cd40858566 unbound from our chassis#033[00m
Oct  2 09:05:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.389 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 962339a8-ad45-401e-ae58-50cd40858566, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:05:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.389 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d711d0-486a-48f6-bcfd-f2be52ba64bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:05:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:54 np0005466030 nova_compute[230518]: 2025-10-02 13:05:54.659 2 INFO nova.virt.libvirt.driver [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Deleting instance files /var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7_del#033[00m
Oct  2 09:05:54 np0005466030 nova_compute[230518]: 2025-10-02 13:05:54.660 2 INFO nova.virt.libvirt.driver [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Deletion of /var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7_del complete#033[00m
Oct  2 09:05:54 np0005466030 nova_compute[230518]: 2025-10-02 13:05:54.774 2 INFO nova.compute.manager [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Took 2.11 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:05:54 np0005466030 nova_compute[230518]: 2025-10-02 13:05:54.775 2 DEBUG oslo.service.loopingcall [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:05:54 np0005466030 nova_compute[230518]: 2025-10-02 13:05:54.775 2 DEBUG nova.compute.manager [-] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:05:54 np0005466030 nova_compute[230518]: 2025-10-02 13:05:54.776 2 DEBUG nova.network.neutron [-] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:05:54 np0005466030 nova_compute[230518]: 2025-10-02 13:05:54.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:05:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:55.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:05:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:55.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.122 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.123 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.123 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.123 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.124 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] No waiting events found dispatching network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.124 2 WARNING nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received unexpected event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.124 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.124 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.125 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.125 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.125 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] No waiting events found dispatching network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.125 2 WARNING nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received unexpected event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.126 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.127 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.127 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.127 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.128 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] No waiting events found dispatching network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.128 2 WARNING nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received unexpected event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.128 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-unplugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.128 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.129 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.129 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.129 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] No waiting events found dispatching network-vif-unplugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.130 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-unplugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.130 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.131 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.131 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.131 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.132 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] No waiting events found dispatching network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:05:56 np0005466030 nova_compute[230518]: 2025-10-02 13:05:56.132 2 WARNING nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received unexpected event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 for instance with vm_state active and task_state deleting.
Oct  2 09:05:57 np0005466030 nova_compute[230518]: 2025-10-02 13:05:57.268 2 DEBUG nova.network.neutron [-] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:05:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:57.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:57 np0005466030 nova_compute[230518]: 2025-10-02 13:05:57.770 2 INFO nova.compute.manager [-] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Took 2.99 seconds to deallocate network for instance.
Oct  2 09:05:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:05:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:57.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:05:57 np0005466030 podman[301724]: 2025-10-02 13:05:57.807140164 +0000 UTC m=+0.055798825 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:05:57 np0005466030 podman[301725]: 2025-10-02 13:05:57.808349032 +0000 UTC m=+0.055859467 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:05:57 np0005466030 nova_compute[230518]: 2025-10-02 13:05:57.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:05:58 np0005466030 nova_compute[230518]: 2025-10-02 13:05:58.126 2 DEBUG oslo_concurrency.lockutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:05:58 np0005466030 nova_compute[230518]: 2025-10-02 13:05:58.126 2 DEBUG oslo_concurrency.lockutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:05:58 np0005466030 nova_compute[230518]: 2025-10-02 13:05:58.294 2 DEBUG oslo_concurrency.processutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:05:58 np0005466030 nova_compute[230518]: 2025-10-02 13:05:58.349 2 DEBUG nova.compute.manager [req-cc4ad190-bb03-4abb-872b-a2c84be357ab req-2f67c1ae-91b7-4dfb-9b66-27bca9a577a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-deleted-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:05:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:05:58 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2926308603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:05:58 np0005466030 nova_compute[230518]: 2025-10-02 13:05:58.748 2 DEBUG oslo_concurrency.processutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:05:58 np0005466030 nova_compute[230518]: 2025-10-02 13:05:58.754 2 DEBUG nova.compute.provider_tree [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:05:58 np0005466030 nova_compute[230518]: 2025-10-02 13:05:58.794 2 DEBUG nova.scheduler.client.report [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:05:59 np0005466030 nova_compute[230518]: 2025-10-02 13:05:59.086 2 DEBUG oslo_concurrency.lockutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:05:59 np0005466030 nova_compute[230518]: 2025-10-02 13:05:59.219 2 INFO nova.scheduler.client.report [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Deleted allocations for instance f320bcaa-1dfe-4d91-bd4a-05ed389402a7
Oct  2 09:05:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:05:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:05:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:59.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:05:59 np0005466030 nova_compute[230518]: 2025-10-02 13:05:59.590 2 DEBUG oslo_concurrency.lockutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:05:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:05:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:05:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:59.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:05:59 np0005466030 nova_compute[230518]: 2025-10-02 13:05:59.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:06:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:01.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:06:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:01.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:02 np0005466030 nova_compute[230518]: 2025-10-02 13:06:02.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:03.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:03.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:04 np0005466030 nova_compute[230518]: 2025-10-02 13:06:04.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:04 np0005466030 nova_compute[230518]: 2025-10-02 13:06:04.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:05.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:06:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:05.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:06:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:07.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:06:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:07.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:06:07 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:06:07 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:06:07 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:06:07 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:06:07 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:06:07 np0005466030 nova_compute[230518]: 2025-10-02 13:06:07.914 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410352.9119, f320bcaa-1dfe-4d91-bd4a-05ed389402a7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:06:07 np0005466030 nova_compute[230518]: 2025-10-02 13:06:07.914 2 INFO nova.compute.manager [-] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] VM Stopped (Lifecycle Event)
Oct  2 09:06:07 np0005466030 nova_compute[230518]: 2025-10-02 13:06:07.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:08 np0005466030 nova_compute[230518]: 2025-10-02 13:06:08.043 2 DEBUG nova.compute.manager [None req-2785e55c-ee8e-4f2f-9946-7f6813ccf19a - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:06:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:09.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:09.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:09 np0005466030 nova_compute[230518]: 2025-10-02 13:06:09.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:06:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.0 total, 600.0 interval
Cumulative writes: 13K writes, 67K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.13 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1568 writes, 8159 keys, 1568 commit groups, 1.0 writes per commit group, ingest: 15.81 MB, 0.03 MB/s
Interval WAL: 1569 writes, 1569 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     56.3      1.45              0.24        42    0.035       0      0       0.0       0.0
  L6      1/0   10.21 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0    116.1     98.9      4.14              1.22        41    0.101    279K    22K       0.0       0.0
 Sum      1/0   10.21 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0     86.0     87.8      5.60              1.46        83    0.067    279K    22K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.7     98.6     98.6      0.92              0.31        14    0.066     64K   3655       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    116.1     98.9      4.14              1.22        41    0.101    279K    22K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     56.4      1.45              0.24        41    0.035       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 4800.0 total, 600.0 interval
Flush(GB): cumulative 0.080, interval 0.010
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.48 GB write, 0.10 MB/s write, 0.47 GB read, 0.10 MB/s read, 5.6 seconds
Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 0.9 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 52.02 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000332 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3003,49.95 MB,16.4312%) FilterBlock(83,783.11 KB,0.251564%) IndexBlock(83,1.30 MB,0.427567%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct  2 09:06:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:11.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:11.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:12 np0005466030 nova_compute[230518]: 2025-10-02 13:06:12.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:13.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:13 np0005466030 nova_compute[230518]: 2025-10-02 13:06:13.693 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:06:13 np0005466030 nova_compute[230518]: 2025-10-02 13:06:13.694 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:06:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:13.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:13 np0005466030 nova_compute[230518]: 2025-10-02 13:06:13.828 2 DEBUG nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 09:06:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e358 e358: 3 total, 3 up, 3 in
Oct  2 09:06:14 np0005466030 nova_compute[230518]: 2025-10-02 13:06:14.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:06:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:14 np0005466030 nova_compute[230518]: 2025-10-02 13:06:14.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:06:15 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:06:15 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:06:15 np0005466030 nova_compute[230518]: 2025-10-02 13:06:15.262 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:06:15 np0005466030 nova_compute[230518]: 2025-10-02 13:06:15.263 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:06:15 np0005466030 nova_compute[230518]: 2025-10-02 13:06:15.275 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 09:06:15 np0005466030 nova_compute[230518]: 2025-10-02 13:06:15.275 2 INFO nova.compute.claims [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Claim successful on node compute-1.ctlplane.example.com
Oct  2 09:06:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:15.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:15 np0005466030 nova_compute[230518]: 2025-10-02 13:06:15.638 2 DEBUG nova.scheduler.client.report [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct  2 09:06:15 np0005466030 nova_compute[230518]: 2025-10-02 13:06:15.665 2 DEBUG nova.scheduler.client.report [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct  2 09:06:15 np0005466030 nova_compute[230518]: 2025-10-02 13:06:15.665 2 DEBUG nova.compute.provider_tree [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  2 09:06:15 np0005466030 nova_compute[230518]: 2025-10-02 13:06:15.681 2 DEBUG nova.scheduler.client.report [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  2 09:06:15 np0005466030 nova_compute[230518]: 2025-10-02 13:06:15.705 2 DEBUG nova.scheduler.client.report [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  2 09:06:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:06:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:15.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:06:15 np0005466030 nova_compute[230518]: 2025-10-02 13:06:15.895 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:06:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:06:16 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4164908810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:06:16 np0005466030 nova_compute[230518]: 2025-10-02 13:06:16.339 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:06:16 np0005466030 nova_compute[230518]: 2025-10-02 13:06:16.346 2 DEBUG nova.compute.provider_tree [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:06:16 np0005466030 nova_compute[230518]: 2025-10-02 13:06:16.375 2 DEBUG nova.scheduler.client.report [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:06:16 np0005466030 nova_compute[230518]: 2025-10-02 13:06:16.440 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:16 np0005466030 nova_compute[230518]: 2025-10-02 13:06:16.441 2 DEBUG nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:06:17 np0005466030 nova_compute[230518]: 2025-10-02 13:06:17.282 2 DEBUG nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:06:17 np0005466030 nova_compute[230518]: 2025-10-02 13:06:17.283 2 DEBUG nova.network.neutron [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:06:17 np0005466030 nova_compute[230518]: 2025-10-02 13:06:17.589 2 INFO nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:06:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:06:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:17.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:06:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:17.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:17 np0005466030 nova_compute[230518]: 2025-10-02 13:06:17.865 2 DEBUG nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:06:17 np0005466030 nova_compute[230518]: 2025-10-02 13:06:17.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.053 2 DEBUG nova.policy [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '362b536431b64b15b67740060af57e9c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e911de934ec043d1bd942c8aed562d04', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.215 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.236 2 DEBUG nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.237 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.238 2 INFO nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Creating image(s)#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.263 2 DEBUG nova.storage.rbd_utils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.290 2 DEBUG nova.storage.rbd_utils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.316 2 DEBUG nova.storage.rbd_utils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.320 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.352 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.353 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.353 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.354 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.354 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.391 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.392 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.392 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.393 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.423 2 DEBUG nova.storage.rbd_utils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.427 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:06:18 np0005466030 podman[302104]: 2025-10-02 13:06:18.800013274 +0000 UTC m=+0.048964622 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 09:06:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:06:18 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1150856767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.854 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:06:18 np0005466030 podman[302103]: 2025-10-02 13:06:18.858554773 +0000 UTC m=+0.108378038 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:06:18 np0005466030 nova_compute[230518]: 2025-10-02 13:06:18.974 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:06:19 np0005466030 nova_compute[230518]: 2025-10-02 13:06:19.041 2 DEBUG nova.storage.rbd_utils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] resizing rbd image aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:06:19 np0005466030 nova_compute[230518]: 2025-10-02 13:06:19.109 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:06:19 np0005466030 nova_compute[230518]: 2025-10-02 13:06:19.110 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4199MB free_disk=20.921974182128906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:06:19 np0005466030 nova_compute[230518]: 2025-10-02 13:06:19.110 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:19 np0005466030 nova_compute[230518]: 2025-10-02 13:06:19.111 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:19 np0005466030 nova_compute[230518]: 2025-10-02 13:06:19.161 2 DEBUG nova.objects.instance [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'migration_context' on Instance uuid aaa891aa-5701-4706-b86f-6216b8cf4c6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:06:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 e359: 3 total, 3 up, 3 in
Oct  2 09:06:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:19.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:19 np0005466030 nova_compute[230518]: 2025-10-02 13:06:19.807 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:06:19 np0005466030 nova_compute[230518]: 2025-10-02 13:06:19.808 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Ensure instance console log exists: /var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:06:19 np0005466030 nova_compute[230518]: 2025-10-02 13:06:19.808 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:19 np0005466030 nova_compute[230518]: 2025-10-02 13:06:19.809 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:19 np0005466030 nova_compute[230518]: 2025-10-02 13:06:19.809 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:06:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:19.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:06:19 np0005466030 nova_compute[230518]: 2025-10-02 13:06:19.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:20 np0005466030 nova_compute[230518]: 2025-10-02 13:06:20.192 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance aaa891aa-5701-4706-b86f-6216b8cf4c6d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:06:20 np0005466030 nova_compute[230518]: 2025-10-02 13:06:20.193 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:06:20 np0005466030 nova_compute[230518]: 2025-10-02 13:06:20.193 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:06:20 np0005466030 nova_compute[230518]: 2025-10-02 13:06:20.252 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:06:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:06:20 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/85858181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:06:20 np0005466030 nova_compute[230518]: 2025-10-02 13:06:20.679 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:06:20 np0005466030 nova_compute[230518]: 2025-10-02 13:06:20.685 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:06:20 np0005466030 nova_compute[230518]: 2025-10-02 13:06:20.710 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:06:20 np0005466030 nova_compute[230518]: 2025-10-02 13:06:20.754 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:06:20 np0005466030 nova_compute[230518]: 2025-10-02 13:06:20.754 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:21 np0005466030 nova_compute[230518]: 2025-10-02 13:06:21.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:21 np0005466030 nova_compute[230518]: 2025-10-02 13:06:21.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:06:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:06:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:21.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:06:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:21.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:22 np0005466030 nova_compute[230518]: 2025-10-02 13:06:22.299 2 DEBUG nova.network.neutron [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Successfully created port: fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:06:22 np0005466030 nova_compute[230518]: 2025-10-02 13:06:22.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:23.074 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:06:23 np0005466030 nova_compute[230518]: 2025-10-02 13:06:23.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:23.075 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:06:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:23.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:23.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:24 np0005466030 nova_compute[230518]: 2025-10-02 13:06:24.587 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:24 np0005466030 nova_compute[230518]: 2025-10-02 13:06:24.587 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:24 np0005466030 nova_compute[230518]: 2025-10-02 13:06:24.588 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:06:24 np0005466030 nova_compute[230518]: 2025-10-02 13:06:24.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:25 np0005466030 nova_compute[230518]: 2025-10-02 13:06:25.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:25.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:06:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:25.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:06:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:25.962 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:25.962 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:25.962 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:26 np0005466030 nova_compute[230518]: 2025-10-02 13:06:26.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:26.077 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:06:26 np0005466030 nova_compute[230518]: 2025-10-02 13:06:26.262 2 DEBUG nova.network.neutron [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Successfully updated port: fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:06:26 np0005466030 nova_compute[230518]: 2025-10-02 13:06:26.323 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:06:26 np0005466030 nova_compute[230518]: 2025-10-02 13:06:26.323 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquired lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:06:26 np0005466030 nova_compute[230518]: 2025-10-02 13:06:26.323 2 DEBUG nova.network.neutron [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:06:26 np0005466030 nova_compute[230518]: 2025-10-02 13:06:26.893 2 DEBUG nova.compute.manager [req-19b846b3-c4f9-4468-8130-d22698dce083 req-22356ac8-707c-4fa9-b364-4c9bbbd5e07e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received event network-changed-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:06:26 np0005466030 nova_compute[230518]: 2025-10-02 13:06:26.894 2 DEBUG nova.compute.manager [req-19b846b3-c4f9-4468-8130-d22698dce083 req-22356ac8-707c-4fa9-b364-4c9bbbd5e07e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Refreshing instance network info cache due to event network-changed-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:06:26 np0005466030 nova_compute[230518]: 2025-10-02 13:06:26.895 2 DEBUG oslo_concurrency.lockutils [req-19b846b3-c4f9-4468-8130-d22698dce083 req-22356ac8-707c-4fa9-b364-4c9bbbd5e07e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:06:27 np0005466030 nova_compute[230518]: 2025-10-02 13:06:27.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:27 np0005466030 nova_compute[230518]: 2025-10-02 13:06:27.212 2 DEBUG nova.network.neutron [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:06:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:06:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:27.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:06:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:27.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:27 np0005466030 nova_compute[230518]: 2025-10-02 13:06:27.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:28 np0005466030 nova_compute[230518]: 2025-10-02 13:06:28.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:28 np0005466030 podman[302243]: 2025-10-02 13:06:28.804105143 +0000 UTC m=+0.060305475 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  2 09:06:28 np0005466030 podman[302244]: 2025-10-02 13:06:28.819974465 +0000 UTC m=+0.065961490 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.048 2 DEBUG nova.network.neutron [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updating instance_info_cache with network_info: [{"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:29.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:29.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.926 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Releasing lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.926 2 DEBUG nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Instance network_info: |[{"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.927 2 DEBUG oslo_concurrency.lockutils [req-19b846b3-c4f9-4468-8130-d22698dce083 req-22356ac8-707c-4fa9-b364-4c9bbbd5e07e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.927 2 DEBUG nova.network.neutron [req-19b846b3-c4f9-4468-8130-d22698dce083 req-22356ac8-707c-4fa9-b364-4c9bbbd5e07e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Refreshing network info cache for port fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.931 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Start _get_guest_xml network_info=[{"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.936 2 WARNING nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.943 2 DEBUG nova.virt.libvirt.host [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.944 2 DEBUG nova.virt.libvirt.host [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.948 2 DEBUG nova.virt.libvirt.host [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.949 2 DEBUG nova.virt.libvirt.host [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.950 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.950 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.950 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.950 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.951 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.951 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.951 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.951 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.951 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.951 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.952 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.952 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:06:29 np0005466030 nova_compute[230518]: 2025-10-02 13:06:29.954 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:06:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:06:30 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3502773686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:06:30 np0005466030 nova_compute[230518]: 2025-10-02 13:06:30.381 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:06:30 np0005466030 nova_compute[230518]: 2025-10-02 13:06:30.428 2 DEBUG nova.storage.rbd_utils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:06:30 np0005466030 nova_compute[230518]: 2025-10-02 13:06:30.432 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:06:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:06:30 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2494029457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.068 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.071 2 DEBUG nova.virt.libvirt.vif [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:06:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1820466426',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1820466426',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=191,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNdbTOu/iOjmmf1Z2Hg0rSsDt//p7Ch9xVqSyeto6UZ1iRgEh5F6Sri7ZZAdZ8QNt0gViIYuv1XXRkCjzWAk0XpaEE5lLQuYVE2mmjrf+0lOKB7Fd79GB/2z/StvvrkXAQ==',key_name='tempest-TestSecurityGroupsBasicOps-373143354',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-aaf9gv0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:06:18Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=aaa891aa-5701-4706-b86f-6216b8cf4c6d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.072 2 DEBUG nova.network.os_vif_util [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.073 2 DEBUG nova.network.os_vif_util [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:11:63,bridge_name='br-int',has_traffic_filtering=True,id=fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd38dd09-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.076 2 DEBUG nova.objects.instance [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'pci_devices' on Instance uuid aaa891aa-5701-4706-b86f-6216b8cf4c6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.107 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:06:31 np0005466030 nova_compute[230518]:  <uuid>aaa891aa-5701-4706-b86f-6216b8cf4c6d</uuid>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:  <name>instance-000000bf</name>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1820466426</nova:name>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:06:29</nova:creationTime>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:06:31 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:        <nova:user uuid="362b536431b64b15b67740060af57e9c">tempest-TestSecurityGroupsBasicOps-2067500093-project-member</nova:user>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:        <nova:project uuid="e911de934ec043d1bd942c8aed562d04">tempest-TestSecurityGroupsBasicOps-2067500093</nova:project>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:        <nova:port uuid="fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769">
Oct  2 09:06:31 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <entry name="serial">aaa891aa-5701-4706-b86f-6216b8cf4c6d</entry>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <entry name="uuid">aaa891aa-5701-4706-b86f-6216b8cf4c6d</entry>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk">
Oct  2 09:06:31 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:06:31 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk.config">
Oct  2 09:06:31 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:06:31 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:40:11:63"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <target dev="tapfd38dd09-d0"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d/console.log" append="off"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:06:31 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:06:31 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:06:31 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:06:31 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.110 2 DEBUG nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Preparing to wait for external event network-vif-plugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.110 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.111 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.111 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.113 2 DEBUG nova.virt.libvirt.vif [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:06:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1820466426',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1820466426',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=191,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNdbTOu/iOjmmf1Z2Hg0rSsDt//p7Ch9xVqSyeto6UZ1iRgEh5F6Sri7ZZAdZ8QNt0gViIYuv1XXRkCjzWAk0XpaEE5lLQuYVE2mmjrf+0lOKB7Fd79GB/2z/StvvrkXAQ==',key_name='tempest-TestSecurityGroupsBasicOps-373143354',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-aaf9gv0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:06:18Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=aaa891aa-5701-4706-b86f-6216b8cf4c6d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.113 2 DEBUG nova.network.os_vif_util [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.114 2 DEBUG nova.network.os_vif_util [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:11:63,bridge_name='br-int',has_traffic_filtering=True,id=fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd38dd09-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.115 2 DEBUG os_vif [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:11:63,bridge_name='br-int',has_traffic_filtering=True,id=fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd38dd09-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.117 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.118 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.123 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd38dd09-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.123 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd38dd09-d0, col_values=(('external_ids', {'iface-id': 'fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:11:63', 'vm-uuid': 'aaa891aa-5701-4706-b86f-6216b8cf4c6d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:31 np0005466030 NetworkManager[44960]: <info>  [1759410391.1265] manager: (tapfd38dd09-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/358)
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.134 2 INFO os_vif [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:11:63,bridge_name='br-int',has_traffic_filtering=True,id=fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd38dd09-d0')#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.211 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.211 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.211 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No VIF found with MAC fa:16:3e:40:11:63, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.212 2 INFO nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Using config drive#033[00m
Oct  2 09:06:31 np0005466030 nova_compute[230518]: 2025-10-02 13:06:31.241 2 DEBUG nova.storage.rbd_utils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:06:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:31.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:06:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:31.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:06:32 np0005466030 nova_compute[230518]: 2025-10-02 13:06:32.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:32 np0005466030 nova_compute[230518]: 2025-10-02 13:06:32.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:06:32 np0005466030 nova_compute[230518]: 2025-10-02 13:06:32.101 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:06:32 np0005466030 nova_compute[230518]: 2025-10-02 13:06:32.507 2 INFO nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Creating config drive at /var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d/disk.config#033[00m
Oct  2 09:06:32 np0005466030 nova_compute[230518]: 2025-10-02 13:06:32.513 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg2d2lhl9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:06:32 np0005466030 nova_compute[230518]: 2025-10-02 13:06:32.656 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg2d2lhl9" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:06:32 np0005466030 nova_compute[230518]: 2025-10-02 13:06:32.708 2 DEBUG nova.storage.rbd_utils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:06:32 np0005466030 nova_compute[230518]: 2025-10-02 13:06:32.714 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d/disk.config aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:06:32 np0005466030 nova_compute[230518]: 2025-10-02 13:06:32.947 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d/disk.config aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:06:32 np0005466030 nova_compute[230518]: 2025-10-02 13:06:32.949 2 INFO nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Deleting local config drive /var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d/disk.config because it was imported into RBD.#033[00m
Oct  2 09:06:33 np0005466030 kernel: tapfd38dd09-d0: entered promiscuous mode
Oct  2 09:06:33 np0005466030 NetworkManager[44960]: <info>  [1759410393.0182] manager: (tapfd38dd09-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/359)
Oct  2 09:06:33 np0005466030 ovn_controller[129257]: 2025-10-02T13:06:33Z|00780|binding|INFO|Claiming lport fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 for this chassis.
Oct  2 09:06:33 np0005466030 ovn_controller[129257]: 2025-10-02T13:06:33Z|00781|binding|INFO|fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769: Claiming fa:16:3e:40:11:63 10.100.0.10
Oct  2 09:06:33 np0005466030 nova_compute[230518]: 2025-10-02 13:06:33.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.045 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:11:63 10.100.0.10'], port_security=['fa:16:3e:40:11:63 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'aaa891aa-5701-4706-b86f-6216b8cf4c6d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de30c3c-d440-4dcd-8562-bb1990277f07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c17ab1a8-d19d-4728-b5e8-5f12f979e5d3 f03b5452-680c-4498-87d9-e083abe84e44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4012d9db-84cc-44d4-8e0c-304e52c3ea33, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.046 138374 INFO neutron.agent.ovn.metadata.agent [-] Port fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 in datapath 0de30c3c-d440-4dcd-8562-bb1990277f07 bound to our chassis#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.048 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0de30c3c-d440-4dcd-8562-bb1990277f07#033[00m
Oct  2 09:06:33 np0005466030 systemd-udevd[302416]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:06:33 np0005466030 NetworkManager[44960]: <info>  [1759410393.0644] device (tapfd38dd09-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:06:33 np0005466030 NetworkManager[44960]: <info>  [1759410393.0656] device (tapfd38dd09-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.065 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ea17c51e-3638-4988-823f-20164cb87795]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.066 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0de30c3c-d1 in ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.068 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0de30c3c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.068 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0204542c-2aa2-4870-8ed5-f0455c5d7e99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:06:33 np0005466030 systemd-machined[188247]: New machine qemu-89-instance-000000bf.
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.068 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[af1c7d94-63aa-4a2c-b1ac-ce4099172f8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.081 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[27ae6a47-31cb-4ede-8601-a9c04232c908]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.108 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[73c8a49b-3169-459e-8e2a-d445d1c1a4da]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:06:33 np0005466030 nova_compute[230518]: 2025-10-02 13:06:33.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:33 np0005466030 systemd[1]: Started Virtual Machine qemu-89-instance-000000bf.
Oct  2 09:06:33 np0005466030 ovn_controller[129257]: 2025-10-02T13:06:33Z|00782|binding|INFO|Setting lport fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 ovn-installed in OVS
Oct  2 09:06:33 np0005466030 ovn_controller[129257]: 2025-10-02T13:06:33Z|00783|binding|INFO|Setting lport fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 up in Southbound
Oct  2 09:06:33 np0005466030 nova_compute[230518]: 2025-10-02 13:06:33.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.147 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1b5d19-7def-4a23-83d3-cc93f2e28e9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.152 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fc98b39f-c8b9-4260-ac4c-1563f53a9598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:06:33 np0005466030 NetworkManager[44960]: <info>  [1759410393.1547] manager: (tap0de30c3c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/360)
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.185 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8f8d5ef9-92af-446e-a455-1c1dc6b3111b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.188 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[1dee8808-0c78-41e3-a924-a6faa841d083]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:06:33 np0005466030 NetworkManager[44960]: <info>  [1759410393.2084] device (tap0de30c3c-d0): carrier: link connected
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.213 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[cdfe93f2-967e-4584-b417-8ea0d05b9d4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.228 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e64f5f87-b7db-4ed9-893e-106bd3028bd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de30c3c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:84:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 825676, 'reachable_time': 36182, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302450, 'error': None, 'target': 'ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.241 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a79c32e1-8aad-46b5-9185-859de9e88667]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedf:84cf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 825676, 'tstamp': 825676}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302451, 'error': None, 'target': 'ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.254 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[747c24f4-e094-4c14-bfbd-b315be7e949b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de30c3c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:84:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 825676, 'reachable_time': 36182, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302452, 'error': None, 'target': 'ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.282 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[107d157f-3e9c-492c-b908-3d1978fa1432]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.334 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ba46c963-cbbe-43ad-a2a6-b01331e1d657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.335 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de30c3c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.336 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.336 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0de30c3c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:06:33 np0005466030 nova_compute[230518]: 2025-10-02 13:06:33.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:33 np0005466030 NetworkManager[44960]: <info>  [1759410393.3432] manager: (tap0de30c3c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Oct  2 09:06:33 np0005466030 kernel: tap0de30c3c-d0: entered promiscuous mode
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.347 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0de30c3c-d0, col_values=(('external_ids', {'iface-id': 'c46b8ee2-3741-41ad-a412-6a121aeea4c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:06:33 np0005466030 ovn_controller[129257]: 2025-10-02T13:06:33Z|00784|binding|INFO|Releasing lport c46b8ee2-3741-41ad-a412-6a121aeea4c6 from this chassis (sb_readonly=0)
Oct  2 09:06:33 np0005466030 nova_compute[230518]: 2025-10-02 13:06:33.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:33 np0005466030 nova_compute[230518]: 2025-10-02 13:06:33.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.368 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0de30c3c-d440-4dcd-8562-bb1990277f07.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0de30c3c-d440-4dcd-8562-bb1990277f07.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.369 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6d54aaa2-e172-4047-963b-6a0cb26db894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.370 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-0de30c3c-d440-4dcd-8562-bb1990277f07
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/0de30c3c-d440-4dcd-8562-bb1990277f07.pid.haproxy
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 0de30c3c-d440-4dcd-8562-bb1990277f07
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:06:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.370 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07', 'env', 'PROCESS_TAG=haproxy-0de30c3c-d440-4dcd-8562-bb1990277f07', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0de30c3c-d440-4dcd-8562-bb1990277f07.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:06:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:06:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:33.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:06:33 np0005466030 podman[302526]: 2025-10-02 13:06:33.768603335 +0000 UTC m=+0.051837732 container create 7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  2 09:06:33 np0005466030 systemd[1]: Started libpod-conmon-7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708.scope.
Oct  2 09:06:33 np0005466030 podman[302526]: 2025-10-02 13:06:33.738302213 +0000 UTC m=+0.021536620 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:06:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:06:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:33.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:06:33 np0005466030 systemd[1]: Started libcrun container.
Oct  2 09:06:33 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35f09704061bc163420de8e40766fdfa5b7dc927df56828247008e161b62606/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:06:33 np0005466030 podman[302526]: 2025-10-02 13:06:33.886761676 +0000 UTC m=+0.169996133 container init 7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  2 09:06:33 np0005466030 podman[302526]: 2025-10-02 13:06:33.89268664 +0000 UTC m=+0.175921037 container start 7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:06:33 np0005466030 nova_compute[230518]: 2025-10-02 13:06:33.905 2 DEBUG nova.compute.manager [req-6bbbe992-dff2-416a-874c-96a58afc6147 req-9fbd5a8a-c55e-452e-9a91-55809f0a05fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received event network-vif-plugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:06:33 np0005466030 nova_compute[230518]: 2025-10-02 13:06:33.906 2 DEBUG oslo_concurrency.lockutils [req-6bbbe992-dff2-416a-874c-96a58afc6147 req-9fbd5a8a-c55e-452e-9a91-55809f0a05fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:33 np0005466030 nova_compute[230518]: 2025-10-02 13:06:33.906 2 DEBUG oslo_concurrency.lockutils [req-6bbbe992-dff2-416a-874c-96a58afc6147 req-9fbd5a8a-c55e-452e-9a91-55809f0a05fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:33 np0005466030 nova_compute[230518]: 2025-10-02 13:06:33.907 2 DEBUG oslo_concurrency.lockutils [req-6bbbe992-dff2-416a-874c-96a58afc6147 req-9fbd5a8a-c55e-452e-9a91-55809f0a05fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:33 np0005466030 nova_compute[230518]: 2025-10-02 13:06:33.908 2 DEBUG nova.compute.manager [req-6bbbe992-dff2-416a-874c-96a58afc6147 req-9fbd5a8a-c55e-452e-9a91-55809f0a05fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Processing event network-vif-plugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:06:33 np0005466030 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[302541]: [NOTICE]   (302545) : New worker (302547) forked
Oct  2 09:06:33 np0005466030 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[302541]: [NOTICE]   (302545) : Loading success.
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.018 2 DEBUG nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.020 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410394.0175622, aaa891aa-5701-4706-b86f-6216b8cf4c6d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.021 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] VM Started (Lifecycle Event)#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.026 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.033 2 INFO nova.virt.libvirt.driver [-] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Instance spawned successfully.#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.034 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.037 2 DEBUG nova.network.neutron [req-19b846b3-c4f9-4468-8130-d22698dce083 req-22356ac8-707c-4fa9-b364-4c9bbbd5e07e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updated VIF entry in instance network info cache for port fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.038 2 DEBUG nova.network.neutron [req-19b846b3-c4f9-4468-8130-d22698dce083 req-22356ac8-707c-4fa9-b364-4c9bbbd5e07e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updating instance_info_cache with network_info: [{"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.423 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.428 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:06:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.484 2 DEBUG oslo_concurrency.lockutils [req-19b846b3-c4f9-4468-8130-d22698dce083 req-22356ac8-707c-4fa9-b364-4c9bbbd5e07e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.491 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.492 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.493 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.493 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.494 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.494 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.622 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.623 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410394.0178325, aaa891aa-5701-4706-b86f-6216b8cf4c6d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.624 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.957 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.961 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410394.025119, aaa891aa-5701-4706-b86f-6216b8cf4c6d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:06:34 np0005466030 nova_compute[230518]: 2025-10-02 13:06:34.962 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:06:35 np0005466030 nova_compute[230518]: 2025-10-02 13:06:35.014 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:06:35 np0005466030 nova_compute[230518]: 2025-10-02 13:06:35.017 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:06:35 np0005466030 nova_compute[230518]: 2025-10-02 13:06:35.090 2 INFO nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Took 16.85 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:06:35 np0005466030 nova_compute[230518]: 2025-10-02 13:06:35.090 2 DEBUG nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:06:35 np0005466030 nova_compute[230518]: 2025-10-02 13:06:35.093 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:06:35 np0005466030 nova_compute[230518]: 2025-10-02 13:06:35.344 2 INFO nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Took 21.13 seconds to build instance.#033[00m
Oct  2 09:06:35 np0005466030 nova_compute[230518]: 2025-10-02 13:06:35.407 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:35.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:35.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:36 np0005466030 nova_compute[230518]: 2025-10-02 13:06:36.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:36 np0005466030 nova_compute[230518]: 2025-10-02 13:06:36.457 2 DEBUG nova.compute.manager [req-e11abf1c-1e4e-419e-93c8-98cc23776cd0 req-750d0106-ea8a-4c63-bf91-027f099352c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received event network-vif-plugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:06:36 np0005466030 nova_compute[230518]: 2025-10-02 13:06:36.457 2 DEBUG oslo_concurrency.lockutils [req-e11abf1c-1e4e-419e-93c8-98cc23776cd0 req-750d0106-ea8a-4c63-bf91-027f099352c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:36 np0005466030 nova_compute[230518]: 2025-10-02 13:06:36.458 2 DEBUG oslo_concurrency.lockutils [req-e11abf1c-1e4e-419e-93c8-98cc23776cd0 req-750d0106-ea8a-4c63-bf91-027f099352c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:36 np0005466030 nova_compute[230518]: 2025-10-02 13:06:36.458 2 DEBUG oslo_concurrency.lockutils [req-e11abf1c-1e4e-419e-93c8-98cc23776cd0 req-750d0106-ea8a-4c63-bf91-027f099352c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:36 np0005466030 nova_compute[230518]: 2025-10-02 13:06:36.458 2 DEBUG nova.compute.manager [req-e11abf1c-1e4e-419e-93c8-98cc23776cd0 req-750d0106-ea8a-4c63-bf91-027f099352c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] No waiting events found dispatching network-vif-plugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:06:36 np0005466030 nova_compute[230518]: 2025-10-02 13:06:36.459 2 WARNING nova.compute.manager [req-e11abf1c-1e4e-419e-93c8-98cc23776cd0 req-750d0106-ea8a-4c63-bf91-027f099352c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received unexpected event network-vif-plugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:06:37 np0005466030 nova_compute[230518]: 2025-10-02 13:06:37.093 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:37.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:06:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:37.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:06:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:39.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:39 np0005466030 nova_compute[230518]: 2025-10-02 13:06:39.700 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:39 np0005466030 nova_compute[230518]: 2025-10-02 13:06:39.737 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Triggering sync for uuid aaa891aa-5701-4706-b86f-6216b8cf4c6d _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  2 09:06:39 np0005466030 nova_compute[230518]: 2025-10-02 13:06:39.738 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:06:39 np0005466030 nova_compute[230518]: 2025-10-02 13:06:39.738 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:06:39 np0005466030 nova_compute[230518]: 2025-10-02 13:06:39.844 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:06:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:39.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:39 np0005466030 nova_compute[230518]: 2025-10-02 13:06:39.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:41 np0005466030 nova_compute[230518]: 2025-10-02 13:06:41.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:41 np0005466030 nova_compute[230518]: 2025-10-02 13:06:41.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:41 np0005466030 NetworkManager[44960]: <info>  [1759410401.5035] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Oct  2 09:06:41 np0005466030 NetworkManager[44960]: <info>  [1759410401.5050] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/363)
Oct  2 09:06:41 np0005466030 ovn_controller[129257]: 2025-10-02T13:06:41Z|00785|binding|INFO|Releasing lport c46b8ee2-3741-41ad-a412-6a121aeea4c6 from this chassis (sb_readonly=0)
Oct  2 09:06:41 np0005466030 nova_compute[230518]: 2025-10-02 13:06:41.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:41 np0005466030 nova_compute[230518]: 2025-10-02 13:06:41.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:41.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:06:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:41.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:06:42 np0005466030 nova_compute[230518]: 2025-10-02 13:06:42.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:43.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:43.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:44 np0005466030 nova_compute[230518]: 2025-10-02 13:06:44.771 2 DEBUG nova.compute.manager [req-5e7604b1-0a26-40d7-b0ce-d33bd32e0f4b req-a7c81649-318c-458d-b685-bc9d264749e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received event network-changed-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:06:44 np0005466030 nova_compute[230518]: 2025-10-02 13:06:44.771 2 DEBUG nova.compute.manager [req-5e7604b1-0a26-40d7-b0ce-d33bd32e0f4b req-a7c81649-318c-458d-b685-bc9d264749e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Refreshing instance network info cache due to event network-changed-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:06:44 np0005466030 nova_compute[230518]: 2025-10-02 13:06:44.771 2 DEBUG oslo_concurrency.lockutils [req-5e7604b1-0a26-40d7-b0ce-d33bd32e0f4b req-a7c81649-318c-458d-b685-bc9d264749e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:06:44 np0005466030 nova_compute[230518]: 2025-10-02 13:06:44.772 2 DEBUG oslo_concurrency.lockutils [req-5e7604b1-0a26-40d7-b0ce-d33bd32e0f4b req-a7c81649-318c-458d-b685-bc9d264749e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:06:44 np0005466030 nova_compute[230518]: 2025-10-02 13:06:44.772 2 DEBUG nova.network.neutron [req-5e7604b1-0a26-40d7-b0ce-d33bd32e0f4b req-a7c81649-318c-458d-b685-bc9d264749e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Refreshing network info cache for port fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:06:44 np0005466030 nova_compute[230518]: 2025-10-02 13:06:44.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:45.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:45.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:46 np0005466030 nova_compute[230518]: 2025-10-02 13:06:46.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:47.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:47.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:49.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:49 np0005466030 podman[302559]: 2025-10-02 13:06:49.849229468 +0000 UTC m=+0.095075795 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:06:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:49.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:49 np0005466030 podman[302558]: 2025-10-02 13:06:49.884840134 +0000 UTC m=+0.122616960 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:06:49 np0005466030 nova_compute[230518]: 2025-10-02 13:06:49.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:51 np0005466030 nova_compute[230518]: 2025-10-02 13:06:51.118 2 DEBUG nova.network.neutron [req-5e7604b1-0a26-40d7-b0ce-d33bd32e0f4b req-a7c81649-318c-458d-b685-bc9d264749e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updated VIF entry in instance network info cache for port fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:06:51 np0005466030 nova_compute[230518]: 2025-10-02 13:06:51.119 2 DEBUG nova.network.neutron [req-5e7604b1-0a26-40d7-b0ce-d33bd32e0f4b req-a7c81649-318c-458d-b685-bc9d264749e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updating instance_info_cache with network_info: [{"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:06:51 np0005466030 nova_compute[230518]: 2025-10-02 13:06:51.175 2 DEBUG oslo_concurrency.lockutils [req-5e7604b1-0a26-40d7-b0ce-d33bd32e0f4b req-a7c81649-318c-458d-b685-bc9d264749e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:06:51 np0005466030 nova_compute[230518]: 2025-10-02 13:06:51.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:06:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:51.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:06:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:06:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:51.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:06:52 np0005466030 nova_compute[230518]: 2025-10-02 13:06:52.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:53 np0005466030 ovn_controller[129257]: 2025-10-02T13:06:53Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:40:11:63 10.100.0.10
Oct  2 09:06:53 np0005466030 ovn_controller[129257]: 2025-10-02T13:06:53Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:40:11:63 10.100.0.10
Oct  2 09:06:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:06:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:53.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:06:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:06:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:53.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:06:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:54 np0005466030 nova_compute[230518]: 2025-10-02 13:06:54.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:55.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:55.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:56 np0005466030 nova_compute[230518]: 2025-10-02 13:06:56.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:06:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:06:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:57.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:06:57 np0005466030 nova_compute[230518]: 2025-10-02 13:06:57.721 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:06:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:06:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:57.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:06:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:06:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:59.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:06:59 np0005466030 podman[302606]: 2025-10-02 13:06:59.813028424 +0000 UTC m=+0.060792439 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:06:59 np0005466030 podman[302605]: 2025-10-02 13:06:59.817251556 +0000 UTC m=+0.066629111 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:06:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:06:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:06:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:59.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:00 np0005466030 nova_compute[230518]: 2025-10-02 13:06:59.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:01 np0005466030 nova_compute[230518]: 2025-10-02 13:07:01.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:01.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:01.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:03 np0005466030 ovn_controller[129257]: 2025-10-02T13:07:03Z|00786|binding|INFO|Releasing lport c46b8ee2-3741-41ad-a412-6a121aeea4c6 from this chassis (sb_readonly=0)
Oct  2 09:07:03 np0005466030 nova_compute[230518]: 2025-10-02 13:07:03.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:03.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:03.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:05 np0005466030 nova_compute[230518]: 2025-10-02 13:07:05.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:07:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1414850093' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:07:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:07:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1414850093' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:07:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:07:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:05.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:07:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:07:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:05.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:07:06 np0005466030 nova_compute[230518]: 2025-10-02 13:07:06.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:07.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:07.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:08 np0005466030 nova_compute[230518]: 2025-10-02 13:07:08.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:08 np0005466030 nova_compute[230518]: 2025-10-02 13:07:08.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:07:08 np0005466030 nova_compute[230518]: 2025-10-02 13:07:08.074 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:07:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:07:09 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4290706919' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:07:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:07:09 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4290706919' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:07:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:09.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:09.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:10 np0005466030 nova_compute[230518]: 2025-10-02 13:07:10.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:07:10 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2641446130' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:07:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:07:10 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2641446130' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:07:11 np0005466030 nova_compute[230518]: 2025-10-02 13:07:11.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:11.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:07:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:11.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #136. Immutable memtables: 0.
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.353689) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 136
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433353738, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2404, "num_deletes": 253, "total_data_size": 5714326, "memory_usage": 5788160, "flush_reason": "Manual Compaction"}
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #137: started
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433405695, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 137, "file_size": 3749607, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65981, "largest_seqno": 68380, "table_properties": {"data_size": 3739781, "index_size": 6191, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20826, "raw_average_key_size": 20, "raw_value_size": 3719986, "raw_average_value_size": 3697, "num_data_blocks": 269, "num_entries": 1006, "num_filter_entries": 1006, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410224, "oldest_key_time": 1759410224, "file_creation_time": 1759410433, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 52048 microseconds, and 10608 cpu microseconds.
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.405736) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #137: 3749607 bytes OK
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.405755) [db/memtable_list.cc:519] [default] Level-0 commit table #137 started
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.453605) [db/memtable_list.cc:722] [default] Level-0 commit table #137: memtable #1 done
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.453658) EVENT_LOG_v1 {"time_micros": 1759410433453647, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.453685) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 5703593, prev total WAL file size 5703593, number of live WAL files 2.
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000133.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.456765) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [137(3661KB)], [135(10MB)]
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433456853, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [137], "files_L6": [135], "score": -1, "input_data_size": 14457707, "oldest_snapshot_seqno": -1}
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #138: 9083 keys, 12539598 bytes, temperature: kUnknown
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433634786, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 138, "file_size": 12539598, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12479845, "index_size": 35960, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 238680, "raw_average_key_size": 26, "raw_value_size": 12319408, "raw_average_value_size": 1356, "num_data_blocks": 1377, "num_entries": 9083, "num_filter_entries": 9083, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410433, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 138, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.635010) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 12539598 bytes
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.648481) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 81.2 rd, 70.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.2 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 9608, records dropped: 525 output_compression: NoCompression
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.648506) EVENT_LOG_v1 {"time_micros": 1759410433648496, "job": 86, "event": "compaction_finished", "compaction_time_micros": 177988, "compaction_time_cpu_micros": 53948, "output_level": 6, "num_output_files": 1, "total_output_size": 12539598, "num_input_records": 9608, "num_output_records": 9083, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433649183, "job": 86, "event": "table_file_deletion", "file_number": 137}
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000135.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433650927, "job": 86, "event": "table_file_deletion", "file_number": 135}
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.455950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.650968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.650972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.650974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.650975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:07:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.650976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:07:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:13.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:13.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:15 np0005466030 nova_compute[230518]: 2025-10-02 13:07:15.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:07:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:15.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:07:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:15.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:16 np0005466030 nova_compute[230518]: 2025-10-02 13:07:16.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:07:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:07:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:07:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:17.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:17.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:07:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:19.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:07:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:07:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:19.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:07:20 np0005466030 nova_compute[230518]: 2025-10-02 13:07:20.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:20 np0005466030 nova_compute[230518]: 2025-10-02 13:07:20.075 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:20 np0005466030 nova_compute[230518]: 2025-10-02 13:07:20.145 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:20 np0005466030 nova_compute[230518]: 2025-10-02 13:07:20.145 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:20 np0005466030 nova_compute[230518]: 2025-10-02 13:07:20.146 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:20 np0005466030 nova_compute[230518]: 2025-10-02 13:07:20.146 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:07:20 np0005466030 nova_compute[230518]: 2025-10-02 13:07:20.146 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:07:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:07:20 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/512112853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:07:20 np0005466030 nova_compute[230518]: 2025-10-02 13:07:20.585 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:07:20 np0005466030 nova_compute[230518]: 2025-10-02 13:07:20.665 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:07:20 np0005466030 nova_compute[230518]: 2025-10-02 13:07:20.666 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:07:20 np0005466030 podman[302801]: 2025-10-02 13:07:20.704349465 +0000 UTC m=+0.066560259 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:07:20 np0005466030 podman[302800]: 2025-10-02 13:07:20.732071106 +0000 UTC m=+0.095681713 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Oct  2 09:07:20 np0005466030 nova_compute[230518]: 2025-10-02 13:07:20.831 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:07:20 np0005466030 nova_compute[230518]: 2025-10-02 13:07:20.832 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4106MB free_disk=20.921966552734375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:07:20 np0005466030 nova_compute[230518]: 2025-10-02 13:07:20.832 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:20 np0005466030 nova_compute[230518]: 2025-10-02 13:07:20.832 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:20 np0005466030 nova_compute[230518]: 2025-10-02 13:07:20.955 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance aaa891aa-5701-4706-b86f-6216b8cf4c6d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:07:20 np0005466030 nova_compute[230518]: 2025-10-02 13:07:20.956 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:07:20 np0005466030 nova_compute[230518]: 2025-10-02 13:07:20.956 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:07:21 np0005466030 nova_compute[230518]: 2025-10-02 13:07:21.044 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:07:21 np0005466030 nova_compute[230518]: 2025-10-02 13:07:21.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:07:21 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1813418940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:07:21 np0005466030 nova_compute[230518]: 2025-10-02 13:07:21.469 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:07:21 np0005466030 nova_compute[230518]: 2025-10-02 13:07:21.474 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:07:21 np0005466030 nova_compute[230518]: 2025-10-02 13:07:21.520 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:07:21 np0005466030 nova_compute[230518]: 2025-10-02 13:07:21.558 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:07:21 np0005466030 nova_compute[230518]: 2025-10-02 13:07:21.559 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:21.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:21.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:22 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:07:22 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:07:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:23.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:07:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:23.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:07:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:25 np0005466030 nova_compute[230518]: 2025-10-02 13:07:25.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:25 np0005466030 nova_compute[230518]: 2025-10-02 13:07:25.536 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:25 np0005466030 nova_compute[230518]: 2025-10-02 13:07:25.537 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:07:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:25.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:07:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:25.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:07:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:25.963 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:25.963 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:25.963 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:26 np0005466030 nova_compute[230518]: 2025-10-02 13:07:26.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:26 np0005466030 ovn_controller[129257]: 2025-10-02T13:07:26Z|00787|binding|INFO|Releasing lport c46b8ee2-3741-41ad-a412-6a121aeea4c6 from this chassis (sb_readonly=0)
Oct  2 09:07:26 np0005466030 nova_compute[230518]: 2025-10-02 13:07:26.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:26 np0005466030 nova_compute[230518]: 2025-10-02 13:07:26.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:27.042 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:07:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:27.043 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:07:27 np0005466030 nova_compute[230518]: 2025-10-02 13:07:27.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:27 np0005466030 nova_compute[230518]: 2025-10-02 13:07:27.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:07:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:27.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:07:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:27.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:28 np0005466030 nova_compute[230518]: 2025-10-02 13:07:28.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:28 np0005466030 nova_compute[230518]: 2025-10-02 13:07:28.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:29 np0005466030 nova_compute[230518]: 2025-10-02 13:07:29.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:29.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:29.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:30 np0005466030 nova_compute[230518]: 2025-10-02 13:07:30.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:30 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:30.045 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:07:30 np0005466030 nova_compute[230518]: 2025-10-02 13:07:30.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:30 np0005466030 nova_compute[230518]: 2025-10-02 13:07:30.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:30 np0005466030 podman[302914]: 2025-10-02 13:07:30.813606437 +0000 UTC m=+0.068068218 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:07:30 np0005466030 podman[302915]: 2025-10-02 13:07:30.823673133 +0000 UTC m=+0.074554112 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:07:31 np0005466030 nova_compute[230518]: 2025-10-02 13:07:31.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:31.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:31.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:32 np0005466030 nova_compute[230518]: 2025-10-02 13:07:32.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:07:32 np0005466030 nova_compute[230518]: 2025-10-02 13:07:32.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:07:32 np0005466030 nova_compute[230518]: 2025-10-02 13:07:32.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:07:32 np0005466030 nova_compute[230518]: 2025-10-02 13:07:32.629 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:07:32 np0005466030 nova_compute[230518]: 2025-10-02 13:07:32.630 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:07:32 np0005466030 nova_compute[230518]: 2025-10-02 13:07:32.630 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:07:32 np0005466030 nova_compute[230518]: 2025-10-02 13:07:32.630 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid aaa891aa-5701-4706-b86f-6216b8cf4c6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:07:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:33.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:33.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:34 np0005466030 nova_compute[230518]: 2025-10-02 13:07:34.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:35 np0005466030 nova_compute[230518]: 2025-10-02 13:07:35.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:35 np0005466030 nova_compute[230518]: 2025-10-02 13:07:35.531 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updating instance_info_cache with network_info: [{"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:07:35 np0005466030 nova_compute[230518]: 2025-10-02 13:07:35.552 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:07:35 np0005466030 nova_compute[230518]: 2025-10-02 13:07:35.553 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:07:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:35.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:35.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:36 np0005466030 nova_compute[230518]: 2025-10-02 13:07:36.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:37.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:37.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:39.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:39.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:40 np0005466030 nova_compute[230518]: 2025-10-02 13:07:40.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:41 np0005466030 nova_compute[230518]: 2025-10-02 13:07:41.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:07:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:41.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:07:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:41.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:42 np0005466030 nova_compute[230518]: 2025-10-02 13:07:42.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:43.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:07:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:43.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:07:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:45 np0005466030 nova_compute[230518]: 2025-10-02 13:07:45.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:45 np0005466030 nova_compute[230518]: 2025-10-02 13:07:45.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:45.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:45.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:46 np0005466030 nova_compute[230518]: 2025-10-02 13:07:46.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:47.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:47 np0005466030 nova_compute[230518]: 2025-10-02 13:07:47.912 2 DEBUG nova.compute.manager [req-8c0535cb-e3eb-4de5-ad36-48b762a27f35 req-e1b3e9b9-91a8-44b0-b920-09979f69e154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received event network-changed-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:07:47 np0005466030 nova_compute[230518]: 2025-10-02 13:07:47.913 2 DEBUG nova.compute.manager [req-8c0535cb-e3eb-4de5-ad36-48b762a27f35 req-e1b3e9b9-91a8-44b0-b920-09979f69e154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Refreshing instance network info cache due to event network-changed-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:07:47 np0005466030 nova_compute[230518]: 2025-10-02 13:07:47.913 2 DEBUG oslo_concurrency.lockutils [req-8c0535cb-e3eb-4de5-ad36-48b762a27f35 req-e1b3e9b9-91a8-44b0-b920-09979f69e154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:07:47 np0005466030 nova_compute[230518]: 2025-10-02 13:07:47.913 2 DEBUG oslo_concurrency.lockutils [req-8c0535cb-e3eb-4de5-ad36-48b762a27f35 req-e1b3e9b9-91a8-44b0-b920-09979f69e154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:07:47 np0005466030 nova_compute[230518]: 2025-10-02 13:07:47.914 2 DEBUG nova.network.neutron [req-8c0535cb-e3eb-4de5-ad36-48b762a27f35 req-e1b3e9b9-91a8-44b0-b920-09979f69e154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Refreshing network info cache for port fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:07:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:47.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.043 2 DEBUG oslo_concurrency.lockutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.044 2 DEBUG oslo_concurrency.lockutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.044 2 DEBUG oslo_concurrency.lockutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.045 2 DEBUG oslo_concurrency.lockutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.045 2 DEBUG oslo_concurrency.lockutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.046 2 INFO nova.compute.manager [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Terminating instance#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.047 2 DEBUG nova.compute.manager [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:48 np0005466030 kernel: tapfd38dd09-d0 (unregistering): left promiscuous mode
Oct  2 09:07:48 np0005466030 NetworkManager[44960]: <info>  [1759410468.4255] device (tapfd38dd09-d0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:48 np0005466030 ovn_controller[129257]: 2025-10-02T13:07:48Z|00788|binding|INFO|Releasing lport fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 from this chassis (sb_readonly=0)
Oct  2 09:07:48 np0005466030 ovn_controller[129257]: 2025-10-02T13:07:48Z|00789|binding|INFO|Setting lport fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 down in Southbound
Oct  2 09:07:48 np0005466030 ovn_controller[129257]: 2025-10-02T13:07:48Z|00790|binding|INFO|Removing iface tapfd38dd09-d0 ovn-installed in OVS
Oct  2 09:07:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.446 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:11:63 10.100.0.10'], port_security=['fa:16:3e:40:11:63 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'aaa891aa-5701-4706-b86f-6216b8cf4c6d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de30c3c-d440-4dcd-8562-bb1990277f07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c17ab1a8-d19d-4728-b5e8-5f12f979e5d3 f03b5452-680c-4498-87d9-e083abe84e44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4012d9db-84cc-44d4-8e0c-304e52c3ea33, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:07:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.448 138374 INFO neutron.agent.ovn.metadata.agent [-] Port fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 in datapath 0de30c3c-d440-4dcd-8562-bb1990277f07 unbound from our chassis#033[00m
Oct  2 09:07:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.450 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0de30c3c-d440-4dcd-8562-bb1990277f07, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:07:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.452 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[268c48f1-8d38-4360-9cef-16df8bb74bed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.453 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07 namespace which is not needed anymore#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:48 np0005466030 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000bf.scope: Deactivated successfully.
Oct  2 09:07:48 np0005466030 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000bf.scope: Consumed 15.995s CPU time.
Oct  2 09:07:48 np0005466030 systemd-machined[188247]: Machine qemu-89-instance-000000bf terminated.
Oct  2 09:07:48 np0005466030 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[302541]: [NOTICE]   (302545) : haproxy version is 2.8.14-c23fe91
Oct  2 09:07:48 np0005466030 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[302541]: [NOTICE]   (302545) : path to executable is /usr/sbin/haproxy
Oct  2 09:07:48 np0005466030 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[302541]: [ALERT]    (302545) : Current worker (302547) exited with code 143 (Terminated)
Oct  2 09:07:48 np0005466030 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[302541]: [WARNING]  (302545) : All workers exited. Exiting... (0)
Oct  2 09:07:48 np0005466030 systemd[1]: libpod-7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708.scope: Deactivated successfully.
Oct  2 09:07:48 np0005466030 podman[302981]: 2025-10-02 13:07:48.65892518 +0000 UTC m=+0.063098162 container died 7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.699 2 INFO nova.virt.libvirt.driver [-] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Instance destroyed successfully.#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.700 2 DEBUG nova.objects.instance [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'resources' on Instance uuid aaa891aa-5701-4706-b86f-6216b8cf4c6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:07:48 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708-userdata-shm.mount: Deactivated successfully.
Oct  2 09:07:48 np0005466030 systemd[1]: var-lib-containers-storage-overlay-b35f09704061bc163420de8e40766fdfa5b7dc927df56828247008e161b62606-merged.mount: Deactivated successfully.
Oct  2 09:07:48 np0005466030 podman[302981]: 2025-10-02 13:07:48.717533681 +0000 UTC m=+0.121706663 container cleanup 7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 09:07:48 np0005466030 systemd[1]: libpod-conmon-7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708.scope: Deactivated successfully.
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.743 2 DEBUG nova.virt.libvirt.vif [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:06:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1820466426',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1820466426',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=191,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNdbTOu/iOjmmf1Z2Hg0rSsDt//p7Ch9xVqSyeto6UZ1iRgEh5F6Sri7ZZAdZ8QNt0gViIYuv1XXRkCjzWAk0XpaEE5lLQuYVE2mmjrf+0lOKB7Fd79GB/2z/StvvrkXAQ==',key_name='tempest-TestSecurityGroupsBasicOps-373143354',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:06:35Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-aaf9gv0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:06:35Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=aaa891aa-5701-4706-b86f-6216b8cf4c6d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.744 2 DEBUG nova.network.os_vif_util [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.745 2 DEBUG nova.network.os_vif_util [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:11:63,bridge_name='br-int',has_traffic_filtering=True,id=fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd38dd09-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.746 2 DEBUG os_vif [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:11:63,bridge_name='br-int',has_traffic_filtering=True,id=fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd38dd09-d0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.750 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd38dd09-d0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.761 2 INFO os_vif [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:11:63,bridge_name='br-int',has_traffic_filtering=True,id=fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd38dd09-d0')#033[00m
Oct  2 09:07:48 np0005466030 podman[303019]: 2025-10-02 13:07:48.796529461 +0000 UTC m=+0.047724259 container remove 7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:07:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.803 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[40411305-7687-4ca1-b6a7-75949cf84dc4]: (4, ('Thu Oct  2 01:07:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07 (7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708)\n7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708\nThu Oct  2 01:07:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07 (7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708)\n7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.805 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[10087e24-dc0f-4c77-b575-8422041c103a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.806 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de30c3c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:48 np0005466030 kernel: tap0de30c3c-d0: left promiscuous mode
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.824 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b1fad489-6fa3-4f78-a0cd-fa4e381fea64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.849 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[518642b4-b955-4374-99b0-48c4027d20ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.851 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[46739500-18ab-4474-bd87-4d038278331c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.871 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6eff55-125b-45ee-a28e-914106428820]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 825670, 'reachable_time': 18162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303052, 'error': None, 'target': 'ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.873 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:07:48 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.873 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[6aaae421-2573-4e1a-b17c-ad12607973c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:07:48 np0005466030 systemd[1]: run-netns-ovnmeta\x2d0de30c3c\x2dd440\x2d4dcd\x2d8562\x2dbb1990277f07.mount: Deactivated successfully.
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.951 2 DEBUG nova.compute.manager [req-048a1ec1-8ec1-4f5a-b506-c2fa418589b6 req-11e16413-6b78-43b8-8110-f94adb79be10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received event network-vif-unplugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.952 2 DEBUG oslo_concurrency.lockutils [req-048a1ec1-8ec1-4f5a-b506-c2fa418589b6 req-11e16413-6b78-43b8-8110-f94adb79be10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.953 2 DEBUG oslo_concurrency.lockutils [req-048a1ec1-8ec1-4f5a-b506-c2fa418589b6 req-11e16413-6b78-43b8-8110-f94adb79be10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.953 2 DEBUG oslo_concurrency.lockutils [req-048a1ec1-8ec1-4f5a-b506-c2fa418589b6 req-11e16413-6b78-43b8-8110-f94adb79be10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.953 2 DEBUG nova.compute.manager [req-048a1ec1-8ec1-4f5a-b506-c2fa418589b6 req-11e16413-6b78-43b8-8110-f94adb79be10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] No waiting events found dispatching network-vif-unplugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:07:48 np0005466030 nova_compute[230518]: 2025-10-02 13:07:48.953 2 DEBUG nova.compute.manager [req-048a1ec1-8ec1-4f5a-b506-c2fa418589b6 req-11e16413-6b78-43b8-8110-f94adb79be10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received event network-vif-unplugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:07:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:49.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:07:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:49.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:07:50 np0005466030 nova_compute[230518]: 2025-10-02 13:07:50.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:50 np0005466030 nova_compute[230518]: 2025-10-02 13:07:50.163 2 DEBUG nova.network.neutron [req-8c0535cb-e3eb-4de5-ad36-48b762a27f35 req-e1b3e9b9-91a8-44b0-b920-09979f69e154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updated VIF entry in instance network info cache for port fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:07:50 np0005466030 nova_compute[230518]: 2025-10-02 13:07:50.163 2 DEBUG nova.network.neutron [req-8c0535cb-e3eb-4de5-ad36-48b762a27f35 req-e1b3e9b9-91a8-44b0-b920-09979f69e154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updating instance_info_cache with network_info: [{"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:07:50 np0005466030 nova_compute[230518]: 2025-10-02 13:07:50.198 2 DEBUG oslo_concurrency.lockutils [req-8c0535cb-e3eb-4de5-ad36-48b762a27f35 req-e1b3e9b9-91a8-44b0-b920-09979f69e154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:07:50 np0005466030 nova_compute[230518]: 2025-10-02 13:07:50.500 2 INFO nova.virt.libvirt.driver [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Deleting instance files /var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d_del#033[00m
Oct  2 09:07:50 np0005466030 nova_compute[230518]: 2025-10-02 13:07:50.501 2 INFO nova.virt.libvirt.driver [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Deletion of /var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d_del complete#033[00m
Oct  2 09:07:50 np0005466030 nova_compute[230518]: 2025-10-02 13:07:50.557 2 INFO nova.compute.manager [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Took 2.51 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:07:50 np0005466030 nova_compute[230518]: 2025-10-02 13:07:50.558 2 DEBUG oslo.service.loopingcall [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:07:50 np0005466030 nova_compute[230518]: 2025-10-02 13:07:50.558 2 DEBUG nova.compute.manager [-] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:07:50 np0005466030 nova_compute[230518]: 2025-10-02 13:07:50.558 2 DEBUG nova.network.neutron [-] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:07:50 np0005466030 podman[303054]: 2025-10-02 13:07:50.788013686 +0000 UTC m=+0.046149621 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:07:50 np0005466030 podman[303073]: 2025-10-02 13:07:50.879954002 +0000 UTC m=+0.067198351 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:07:51 np0005466030 nova_compute[230518]: 2025-10-02 13:07:51.151 2 DEBUG nova.compute.manager [req-9f22e1fa-ef3c-480d-a983-bc019453af83 req-fe7f2801-63bd-41fd-844b-dc7b0a6dc7f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received event network-vif-plugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:07:51 np0005466030 nova_compute[230518]: 2025-10-02 13:07:51.151 2 DEBUG oslo_concurrency.lockutils [req-9f22e1fa-ef3c-480d-a983-bc019453af83 req-fe7f2801-63bd-41fd-844b-dc7b0a6dc7f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:51 np0005466030 nova_compute[230518]: 2025-10-02 13:07:51.152 2 DEBUG oslo_concurrency.lockutils [req-9f22e1fa-ef3c-480d-a983-bc019453af83 req-fe7f2801-63bd-41fd-844b-dc7b0a6dc7f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:51 np0005466030 nova_compute[230518]: 2025-10-02 13:07:51.152 2 DEBUG oslo_concurrency.lockutils [req-9f22e1fa-ef3c-480d-a983-bc019453af83 req-fe7f2801-63bd-41fd-844b-dc7b0a6dc7f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:51 np0005466030 nova_compute[230518]: 2025-10-02 13:07:51.152 2 DEBUG nova.compute.manager [req-9f22e1fa-ef3c-480d-a983-bc019453af83 req-fe7f2801-63bd-41fd-844b-dc7b0a6dc7f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] No waiting events found dispatching network-vif-plugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:07:51 np0005466030 nova_compute[230518]: 2025-10-02 13:07:51.152 2 WARNING nova.compute.manager [req-9f22e1fa-ef3c-480d-a983-bc019453af83 req-fe7f2801-63bd-41fd-844b-dc7b0a6dc7f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received unexpected event network-vif-plugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:07:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:51.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:51.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:52 np0005466030 nova_compute[230518]: 2025-10-02 13:07:52.676 2 DEBUG nova.network.neutron [-] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:07:52 np0005466030 nova_compute[230518]: 2025-10-02 13:07:52.701 2 INFO nova.compute.manager [-] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Took 2.14 seconds to deallocate network for instance.#033[00m
Oct  2 09:07:52 np0005466030 nova_compute[230518]: 2025-10-02 13:07:52.758 2 DEBUG oslo_concurrency.lockutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:07:52 np0005466030 nova_compute[230518]: 2025-10-02 13:07:52.758 2 DEBUG oslo_concurrency.lockutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:07:52 np0005466030 nova_compute[230518]: 2025-10-02 13:07:52.767 2 DEBUG nova.compute.manager [req-82743856-74a0-423c-916c-f7338ce6e985 req-7c663f5b-5872-4a3f-b2e0-e3a63b7068f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received event network-vif-deleted-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:07:52 np0005466030 nova_compute[230518]: 2025-10-02 13:07:52.818 2 DEBUG oslo_concurrency.processutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:07:53 np0005466030 nova_compute[230518]: 2025-10-02 13:07:53.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:07:53 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4249213821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:07:53 np0005466030 nova_compute[230518]: 2025-10-02 13:07:53.290 2 DEBUG oslo_concurrency.processutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:07:53 np0005466030 nova_compute[230518]: 2025-10-02 13:07:53.296 2 DEBUG nova.compute.provider_tree [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:07:53 np0005466030 nova_compute[230518]: 2025-10-02 13:07:53.332 2 DEBUG nova.scheduler.client.report [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:07:53 np0005466030 nova_compute[230518]: 2025-10-02 13:07:53.397 2 DEBUG oslo_concurrency.lockutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:53 np0005466030 nova_compute[230518]: 2025-10-02 13:07:53.444 2 INFO nova.scheduler.client.report [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Deleted allocations for instance aaa891aa-5701-4706-b86f-6216b8cf4c6d#033[00m
Oct  2 09:07:53 np0005466030 nova_compute[230518]: 2025-10-02 13:07:53.565 2 DEBUG oslo_concurrency.lockutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:07:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:53.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:53 np0005466030 nova_compute[230518]: 2025-10-02 13:07:53.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:53.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:55 np0005466030 nova_compute[230518]: 2025-10-02 13:07:55.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:07:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:55.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:07:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:55.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:07:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:57.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:07:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:07:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:57.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:07:58 np0005466030 nova_compute[230518]: 2025-10-02 13:07:58.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:07:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:07:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:07:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:59.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:07:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:07:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:07:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:59.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:00 np0005466030 nova_compute[230518]: 2025-10-02 13:08:00.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:01.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:01 np0005466030 podman[303122]: 2025-10-02 13:08:01.844506494 +0000 UTC m=+0.085849997 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct  2 09:08:01 np0005466030 podman[303123]: 2025-10-02 13:08:01.84405474 +0000 UTC m=+0.078277189 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:08:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:01.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:03 np0005466030 nova_compute[230518]: 2025-10-02 13:08:03.149 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:03 np0005466030 nova_compute[230518]: 2025-10-02 13:08:03.149 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:03 np0005466030 nova_compute[230518]: 2025-10-02 13:08:03.173 2 DEBUG nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:08:03 np0005466030 nova_compute[230518]: 2025-10-02 13:08:03.276 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:03 np0005466030 nova_compute[230518]: 2025-10-02 13:08:03.277 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:03 np0005466030 nova_compute[230518]: 2025-10-02 13:08:03.286 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:08:03 np0005466030 nova_compute[230518]: 2025-10-02 13:08:03.286 2 INFO nova.compute.claims [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:08:03 np0005466030 nova_compute[230518]: 2025-10-02 13:08:03.425 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:03 np0005466030 nova_compute[230518]: 2025-10-02 13:08:03.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:03 np0005466030 nova_compute[230518]: 2025-10-02 13:08:03.695 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410468.6932428, aaa891aa-5701-4706-b86f-6216b8cf4c6d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:08:03 np0005466030 nova_compute[230518]: 2025-10-02 13:08:03.696 2 INFO nova.compute.manager [-] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:08:03 np0005466030 nova_compute[230518]: 2025-10-02 13:08:03.719 2 DEBUG nova.compute.manager [None req-472b5e42-9a33-4dbc-b3ef-77d70ab338cb - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:08:03 np0005466030 nova_compute[230518]: 2025-10-02 13:08:03.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:03.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:03 np0005466030 nova_compute[230518]: 2025-10-02 13:08:03.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:08:03 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/90573178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:08:03 np0005466030 nova_compute[230518]: 2025-10-02 13:08:03.883 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:03 np0005466030 nova_compute[230518]: 2025-10-02 13:08:03.888 2 DEBUG nova.compute.provider_tree [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:08:03 np0005466030 nova_compute[230518]: 2025-10-02 13:08:03.953 2 DEBUG nova.scheduler.client.report [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:08:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:03.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:03 np0005466030 nova_compute[230518]: 2025-10-02 13:08:03.985 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:03 np0005466030 nova_compute[230518]: 2025-10-02 13:08:03.985 2 DEBUG nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.120 2 DEBUG nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.120 2 DEBUG nova.network.neutron [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.173 2 INFO nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.255 2 DEBUG nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.380 2 DEBUG nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.381 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.381 2 INFO nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Creating image(s)#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.404 2 DEBUG nova.storage.rbd_utils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.433 2 DEBUG nova.storage.rbd_utils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.459 2 DEBUG nova.storage.rbd_utils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.462 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.499 2 DEBUG nova.policy [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '62f4c4b5cc194bd59ca9cc9f1da78a79', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '954946ff6b204fba90f767ec67210620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.524 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.525 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.526 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.526 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.547 2 DEBUG nova.storage.rbd_utils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.551 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.817 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.876 2 DEBUG nova.storage.rbd_utils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] resizing rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:08:04 np0005466030 nova_compute[230518]: 2025-10-02 13:08:04.988 2 DEBUG nova.objects.instance [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'migration_context' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:08:05 np0005466030 nova_compute[230518]: 2025-10-02 13:08:05.011 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:08:05 np0005466030 nova_compute[230518]: 2025-10-02 13:08:05.012 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Ensure instance console log exists: /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:08:05 np0005466030 nova_compute[230518]: 2025-10-02 13:08:05.013 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:05 np0005466030 nova_compute[230518]: 2025-10-02 13:08:05.013 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:05 np0005466030 nova_compute[230518]: 2025-10-02 13:08:05.014 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:05 np0005466030 nova_compute[230518]: 2025-10-02 13:08:05.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:05.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:05.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:06 np0005466030 nova_compute[230518]: 2025-10-02 13:08:06.210 2 DEBUG nova.network.neutron [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Successfully created port: 3994280c-c2c8-4fa7-bc48-f7b048d43015 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:08:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:07.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:07.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:08 np0005466030 nova_compute[230518]: 2025-10-02 13:08:08.310 2 DEBUG nova.network.neutron [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Successfully updated port: 3994280c-c2c8-4fa7-bc48-f7b048d43015 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:08:08 np0005466030 nova_compute[230518]: 2025-10-02 13:08:08.331 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:08:08 np0005466030 nova_compute[230518]: 2025-10-02 13:08:08.331 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquired lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:08:08 np0005466030 nova_compute[230518]: 2025-10-02 13:08:08.332 2 DEBUG nova.network.neutron [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:08:08 np0005466030 nova_compute[230518]: 2025-10-02 13:08:08.597 2 DEBUG nova.network.neutron [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:08:08 np0005466030 nova_compute[230518]: 2025-10-02 13:08:08.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:08 np0005466030 nova_compute[230518]: 2025-10-02 13:08:08.928 2 DEBUG nova.compute.manager [req-b543f0b4-8aaf-4979-845a-5762cf07cc36 req-bfe41cde-0b81-4a31-98c0-9d92191b99f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:08:08 np0005466030 nova_compute[230518]: 2025-10-02 13:08:08.928 2 DEBUG nova.compute.manager [req-b543f0b4-8aaf-4979-845a-5762cf07cc36 req-bfe41cde-0b81-4a31-98c0-9d92191b99f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing instance network info cache due to event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:08:08 np0005466030 nova_compute[230518]: 2025-10-02 13:08:08.928 2 DEBUG oslo_concurrency.lockutils [req-b543f0b4-8aaf-4979-845a-5762cf07cc36 req-bfe41cde-0b81-4a31-98c0-9d92191b99f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:08:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:09.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:09.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.065 2 DEBUG nova.network.neutron [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.088 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Releasing lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.088 2 DEBUG nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance network_info: |[{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.089 2 DEBUG oslo_concurrency.lockutils [req-b543f0b4-8aaf-4979-845a-5762cf07cc36 req-bfe41cde-0b81-4a31-98c0-9d92191b99f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.089 2 DEBUG nova.network.neutron [req-b543f0b4-8aaf-4979-845a-5762cf07cc36 req-bfe41cde-0b81-4a31-98c0-9d92191b99f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.093 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Start _get_guest_xml network_info=[{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.098 2 WARNING nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.103 2 DEBUG nova.virt.libvirt.host [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.104 2 DEBUG nova.virt.libvirt.host [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.109 2 DEBUG nova.virt.libvirt.host [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.109 2 DEBUG nova.virt.libvirt.host [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.111 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.111 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.111 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.112 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.112 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.112 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.112 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.113 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.113 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.113 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.113 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.113 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.116 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:08:10 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/70365932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.564 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.588 2 DEBUG nova.storage.rbd_utils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:08:10 np0005466030 nova_compute[230518]: 2025-10-02 13:08:10.592 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:08:11 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1373192062' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.019 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.021 2 DEBUG nova.virt.libvirt.vif [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1341299623',display_name='tempest-TestShelveInstance-server-1341299623',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1341299623',id=193,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAv/W6nS9dywKRKPfI/I8VC3fYq9NYpWuLkQShDmIF9/8AaTGucbYIXWWTw6soOxBIduh/FVz2D47Pgv7ES8PO/armLbuwNtkOQG1B1V9kNoQMvYjaLsOHDz5UTKAiXYBA==',key_name='tempest-TestShelveInstance-573161256',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='954946ff6b204fba90f767ec67210620',ramdisk_id='',reservation_id='r-qh5aopcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-228669170',owner_user_name='tempest-TestShelveInstance-228669170-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:08:04Z,user_data=None,user_id='62f4c4b5cc194bd59ca9cc9f1da78a79',uuid=b4640b6e-b1e0-4168-9970-c5d05a0e1621,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.021 2 DEBUG nova.network.os_vif_util [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converting VIF {"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.022 2 DEBUG nova.network.os_vif_util [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.023 2 DEBUG nova.objects.instance [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.041 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:08:11 np0005466030 nova_compute[230518]:  <uuid>b4640b6e-b1e0-4168-9970-c5d05a0e1621</uuid>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:  <name>instance-000000c1</name>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <nova:name>tempest-TestShelveInstance-server-1341299623</nova:name>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:08:10</nova:creationTime>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:08:11 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:        <nova:user uuid="62f4c4b5cc194bd59ca9cc9f1da78a79">tempest-TestShelveInstance-228669170-project-member</nova:user>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:        <nova:project uuid="954946ff6b204fba90f767ec67210620">tempest-TestShelveInstance-228669170</nova:project>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:        <nova:port uuid="3994280c-c2c8-4fa7-bc48-f7b048d43015">
Oct  2 09:08:11 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <entry name="serial">b4640b6e-b1e0-4168-9970-c5d05a0e1621</entry>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <entry name="uuid">b4640b6e-b1e0-4168-9970-c5d05a0e1621</entry>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk">
Oct  2 09:08:11 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:08:11 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config">
Oct  2 09:08:11 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:08:11 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:20:d8:5a"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <target dev="tap3994280c-c2"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/console.log" append="off"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:08:11 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:08:11 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:08:11 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:08:11 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.043 2 DEBUG nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Preparing to wait for external event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.043 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.043 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.043 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.044 2 DEBUG nova.virt.libvirt.vif [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1341299623',display_name='tempest-TestShelveInstance-server-1341299623',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1341299623',id=193,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAv/W6nS9dywKRKPfI/I8VC3fYq9NYpWuLkQShDmIF9/8AaTGucbYIXWWTw6soOxBIduh/FVz2D47Pgv7ES8PO/armLbuwNtkOQG1B1V9kNoQMvYjaLsOHDz5UTKAiXYBA==',key_name='tempest-TestShelveInstance-573161256',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='954946ff6b204fba90f767ec67210620',ramdisk_id='',reservation_id='r-qh5aopcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-228669170',owner_user_name='tempest-TestShelveInstance-228669170-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:08:04Z,user_data=None,user_id='62f4c4b5cc194bd59ca9cc9f1da78a79',uuid=b4640b6e-b1e0-4168-9970-c5d05a0e1621,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.044 2 DEBUG nova.network.os_vif_util [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converting VIF {"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.045 2 DEBUG nova.network.os_vif_util [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.045 2 DEBUG os_vif [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.047 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.049 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3994280c-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3994280c-c2, col_values=(('external_ids', {'iface-id': '3994280c-c2c8-4fa7-bc48-f7b048d43015', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:d8:5a', 'vm-uuid': 'b4640b6e-b1e0-4168-9970-c5d05a0e1621'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:11 np0005466030 NetworkManager[44960]: <info>  [1759410491.0526] manager: (tap3994280c-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.057 2 INFO os_vif [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2')#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.111 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.112 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.112 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No VIF found with MAC fa:16:3e:20:d8:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.112 2 INFO nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Using config drive#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.140 2 DEBUG nova.storage.rbd_utils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.672 2 INFO nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Creating config drive at /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.680 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp84dibfsr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:11.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.829 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp84dibfsr" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.864 2 DEBUG nova.storage.rbd_utils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:08:11 np0005466030 nova_compute[230518]: 2025-10-02 13:08:11.868 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:11.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:12 np0005466030 nova_compute[230518]: 2025-10-02 13:08:12.062 2 DEBUG nova.network.neutron [req-b543f0b4-8aaf-4979-845a-5762cf07cc36 req-bfe41cde-0b81-4a31-98c0-9d92191b99f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updated VIF entry in instance network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:08:12 np0005466030 nova_compute[230518]: 2025-10-02 13:08:12.063 2 DEBUG nova.network.neutron [req-b543f0b4-8aaf-4979-845a-5762cf07cc36 req-bfe41cde-0b81-4a31-98c0-9d92191b99f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:08:12 np0005466030 nova_compute[230518]: 2025-10-02 13:08:12.066 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:12 np0005466030 nova_compute[230518]: 2025-10-02 13:08:12.066 2 INFO nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Deleting local config drive /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config because it was imported into RBD.#033[00m
Oct  2 09:08:12 np0005466030 nova_compute[230518]: 2025-10-02 13:08:12.090 2 DEBUG oslo_concurrency.lockutils [req-b543f0b4-8aaf-4979-845a-5762cf07cc36 req-bfe41cde-0b81-4a31-98c0-9d92191b99f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:08:12 np0005466030 kernel: tap3994280c-c2: entered promiscuous mode
Oct  2 09:08:12 np0005466030 NetworkManager[44960]: <info>  [1759410492.1134] manager: (tap3994280c-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/365)
Oct  2 09:08:12 np0005466030 nova_compute[230518]: 2025-10-02 13:08:12.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:12 np0005466030 ovn_controller[129257]: 2025-10-02T13:08:12Z|00791|binding|INFO|Claiming lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 for this chassis.
Oct  2 09:08:12 np0005466030 ovn_controller[129257]: 2025-10-02T13:08:12Z|00792|binding|INFO|3994280c-c2c8-4fa7-bc48-f7b048d43015: Claiming fa:16:3e:20:d8:5a 10.100.0.6
Oct  2 09:08:12 np0005466030 nova_compute[230518]: 2025-10-02 13:08:12.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.126 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:d8:5a 10.100.0.6'], port_security=['fa:16:3e:20:d8:5a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b4640b6e-b1e0-4168-9970-c5d05a0e1621', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4223a8cc-f72a-428d-accb-3f4210096878', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '954946ff6b204fba90f767ec67210620', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b142f2e0-15cd-46cd-bd2d-c2af8d42e97a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8308a587-4cdc-4eb3-9fc6-aab7267ec23f, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=3994280c-c2c8-4fa7-bc48-f7b048d43015) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.127 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 3994280c-c2c8-4fa7-bc48-f7b048d43015 in datapath 4223a8cc-f72a-428d-accb-3f4210096878 bound to our chassis#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.128 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4223a8cc-f72a-428d-accb-3f4210096878#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.140 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[464d609d-e5ec-4db6-b356-36c33111689a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.141 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4223a8cc-f1 in ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.142 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4223a8cc-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.143 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1a95e44d-6dbb-4fbf-9605-baf401fc7973]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:12 np0005466030 systemd-udevd[303489]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.146 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[95091073-55b1-4981-b2bf-da474413c89f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.158 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[be5af1e5-f7c5-43f5-b261-00e3d4272e4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:12 np0005466030 systemd-machined[188247]: New machine qemu-90-instance-000000c1.
Oct  2 09:08:12 np0005466030 NetworkManager[44960]: <info>  [1759410492.1662] device (tap3994280c-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:08:12 np0005466030 NetworkManager[44960]: <info>  [1759410492.1681] device (tap3994280c-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:08:12 np0005466030 nova_compute[230518]: 2025-10-02 13:08:12.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:12 np0005466030 systemd[1]: Started Virtual Machine qemu-90-instance-000000c1.
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.181 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4eba22-fd0a-4c42-8af1-2f533b06865e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:12 np0005466030 ovn_controller[129257]: 2025-10-02T13:08:12Z|00793|binding|INFO|Setting lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 ovn-installed in OVS
Oct  2 09:08:12 np0005466030 ovn_controller[129257]: 2025-10-02T13:08:12Z|00794|binding|INFO|Setting lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 up in Southbound
Oct  2 09:08:12 np0005466030 nova_compute[230518]: 2025-10-02 13:08:12.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.212 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5183f27f-39b4-41fb-acf8-aae54ff77c0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.216 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a46d5653-61cd-45ac-8074-04f5465e3d30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:12 np0005466030 NetworkManager[44960]: <info>  [1759410492.2192] manager: (tap4223a8cc-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/366)
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.249 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[638f1c24-eb97-4a36-bf0a-89848a0e9d43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.251 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[80bd4acc-cbcf-42a1-94bf-3e97b1f8c1b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:12 np0005466030 NetworkManager[44960]: <info>  [1759410492.2708] device (tap4223a8cc-f0): carrier: link connected
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.275 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a9beca03-bfce-403c-aba0-4b8912c2e363]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.289 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[60b9d446-1b35-4cc2-b038-b3eef08450b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4223a8cc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:f5:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 239], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 835582, 'reachable_time': 27641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303522, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.305 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e425e4-49a0-4591-ad0e-893092422640]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe74:f568'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 835582, 'tstamp': 835582}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303523, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.320 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[44254b1d-03e4-406b-b031-1d412cc35cda]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4223a8cc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:f5:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 239], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 835582, 'reachable_time': 27641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303524, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.348 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a852d09c-277f-4a3a-9fed-caec45b2468c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.401 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[05761391-16b8-4cc1-8d3a-1100b86dbaaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.402 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4223a8cc-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.403 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.403 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4223a8cc-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:12 np0005466030 nova_compute[230518]: 2025-10-02 13:08:12.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:12 np0005466030 NetworkManager[44960]: <info>  [1759410492.4059] manager: (tap4223a8cc-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Oct  2 09:08:12 np0005466030 kernel: tap4223a8cc-f0: entered promiscuous mode
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.411 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4223a8cc-f0, col_values=(('external_ids', {'iface-id': '97eaefd1-ed23-4787-9782-741cd2cf7e3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:12 np0005466030 nova_compute[230518]: 2025-10-02 13:08:12.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:12 np0005466030 ovn_controller[129257]: 2025-10-02T13:08:12Z|00795|binding|INFO|Releasing lport 97eaefd1-ed23-4787-9782-741cd2cf7e3b from this chassis (sb_readonly=0)
Oct  2 09:08:12 np0005466030 nova_compute[230518]: 2025-10-02 13:08:12.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.416 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4223a8cc-f72a-428d-accb-3f4210096878.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4223a8cc-f72a-428d-accb-3f4210096878.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.416 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cd894d39-663e-4c36-9daf-0c1156c6a5bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.417 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-4223a8cc-f72a-428d-accb-3f4210096878
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/4223a8cc-f72a-428d-accb-3f4210096878.pid.haproxy
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 4223a8cc-f72a-428d-accb-3f4210096878
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:08:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.419 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'env', 'PROCESS_TAG=haproxy-4223a8cc-f72a-428d-accb-3f4210096878', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4223a8cc-f72a-428d-accb-3f4210096878.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:08:12 np0005466030 nova_compute[230518]: 2025-10-02 13:08:12.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:12 np0005466030 podman[303598]: 2025-10-02 13:08:12.757599508 +0000 UTC m=+0.053362306 container create a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 09:08:12 np0005466030 systemd[1]: Started libpod-conmon-a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991.scope.
Oct  2 09:08:12 np0005466030 podman[303598]: 2025-10-02 13:08:12.729439045 +0000 UTC m=+0.025201833 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:08:12 np0005466030 systemd[1]: Started libcrun container.
Oct  2 09:08:12 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eedb82cd50d6ea7a9033c769ee0024227330ee31a67a92bbebc2e547b0c3439/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:08:12 np0005466030 podman[303598]: 2025-10-02 13:08:12.866511479 +0000 UTC m=+0.162274267 container init a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 09:08:12 np0005466030 podman[303598]: 2025-10-02 13:08:12.872580369 +0000 UTC m=+0.168343147 container start a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 09:08:12 np0005466030 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[303613]: [NOTICE]   (303617) : New worker (303619) forked
Oct  2 09:08:12 np0005466030 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[303613]: [NOTICE]   (303617) : Loading success.
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.002 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410493.0019891, b4640b6e-b1e0-4168-9970-c5d05a0e1621 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.003 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] VM Started (Lifecycle Event)#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.059 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.064 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410493.0024707, b4640b6e-b1e0-4168-9970-c5d05a0e1621 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.064 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.094 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.098 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.143 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.349 2 DEBUG nova.compute.manager [req-ca0a7954-027b-43a2-80cd-4356718226da req-0229a3a0-9c15-44a3-956a-6c2d3368d252 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.349 2 DEBUG oslo_concurrency.lockutils [req-ca0a7954-027b-43a2-80cd-4356718226da req-0229a3a0-9c15-44a3-956a-6c2d3368d252 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.350 2 DEBUG oslo_concurrency.lockutils [req-ca0a7954-027b-43a2-80cd-4356718226da req-0229a3a0-9c15-44a3-956a-6c2d3368d252 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.350 2 DEBUG oslo_concurrency.lockutils [req-ca0a7954-027b-43a2-80cd-4356718226da req-0229a3a0-9c15-44a3-956a-6c2d3368d252 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.351 2 DEBUG nova.compute.manager [req-ca0a7954-027b-43a2-80cd-4356718226da req-0229a3a0-9c15-44a3-956a-6c2d3368d252 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Processing event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.352 2 DEBUG nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.356 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410493.3560207, b4640b6e-b1e0-4168-9970-c5d05a0e1621 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.356 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.358 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.362 2 INFO nova.virt.libvirt.driver [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance spawned successfully.#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.362 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.379 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.386 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.391 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.391 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.391 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.392 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.392 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.393 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.433 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.496 2 INFO nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Took 9.12 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.497 2 DEBUG nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.579 2 INFO nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Took 10.34 seconds to build instance.#033[00m
Oct  2 09:08:13 np0005466030 nova_compute[230518]: 2025-10-02 13:08:13.604 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:13.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:13.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:08:14 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2421613884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:08:15 np0005466030 nova_compute[230518]: 2025-10-02 13:08:15.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:15 np0005466030 nova_compute[230518]: 2025-10-02 13:08:15.484 2 DEBUG nova.compute.manager [req-57496bff-35ef-4676-9def-0697e4f8decd req-f9997010-796c-4a86-957d-114aaabf0f8f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:08:15 np0005466030 nova_compute[230518]: 2025-10-02 13:08:15.484 2 DEBUG oslo_concurrency.lockutils [req-57496bff-35ef-4676-9def-0697e4f8decd req-f9997010-796c-4a86-957d-114aaabf0f8f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:15 np0005466030 nova_compute[230518]: 2025-10-02 13:08:15.485 2 DEBUG oslo_concurrency.lockutils [req-57496bff-35ef-4676-9def-0697e4f8decd req-f9997010-796c-4a86-957d-114aaabf0f8f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:15 np0005466030 nova_compute[230518]: 2025-10-02 13:08:15.485 2 DEBUG oslo_concurrency.lockutils [req-57496bff-35ef-4676-9def-0697e4f8decd req-f9997010-796c-4a86-957d-114aaabf0f8f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:15 np0005466030 nova_compute[230518]: 2025-10-02 13:08:15.485 2 DEBUG nova.compute.manager [req-57496bff-35ef-4676-9def-0697e4f8decd req-f9997010-796c-4a86-957d-114aaabf0f8f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] No waiting events found dispatching network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:08:15 np0005466030 nova_compute[230518]: 2025-10-02 13:08:15.485 2 WARNING nova.compute.manager [req-57496bff-35ef-4676-9def-0697e4f8decd req-f9997010-796c-4a86-957d-114aaabf0f8f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received unexpected event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:08:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:15.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:15.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:16 np0005466030 nova_compute[230518]: 2025-10-02 13:08:16.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:17.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:17.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:18 np0005466030 NetworkManager[44960]: <info>  [1759410498.0140] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Oct  2 09:08:18 np0005466030 NetworkManager[44960]: <info>  [1759410498.0149] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Oct  2 09:08:18 np0005466030 nova_compute[230518]: 2025-10-02 13:08:18.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:18 np0005466030 nova_compute[230518]: 2025-10-02 13:08:18.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:18 np0005466030 ovn_controller[129257]: 2025-10-02T13:08:18Z|00796|binding|INFO|Releasing lport 97eaefd1-ed23-4787-9782-741cd2cf7e3b from this chassis (sb_readonly=0)
Oct  2 09:08:18 np0005466030 nova_compute[230518]: 2025-10-02 13:08:18.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:18 np0005466030 nova_compute[230518]: 2025-10-02 13:08:18.513 2 DEBUG nova.compute.manager [req-cd9ff4aa-a31e-47af-ad73-3eab8aecaafe req-be6f0dc2-2b9e-4552-b72a-edf47d17cf4d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:08:18 np0005466030 nova_compute[230518]: 2025-10-02 13:08:18.513 2 DEBUG nova.compute.manager [req-cd9ff4aa-a31e-47af-ad73-3eab8aecaafe req-be6f0dc2-2b9e-4552-b72a-edf47d17cf4d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing instance network info cache due to event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:08:18 np0005466030 nova_compute[230518]: 2025-10-02 13:08:18.514 2 DEBUG oslo_concurrency.lockutils [req-cd9ff4aa-a31e-47af-ad73-3eab8aecaafe req-be6f0dc2-2b9e-4552-b72a-edf47d17cf4d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:08:18 np0005466030 nova_compute[230518]: 2025-10-02 13:08:18.514 2 DEBUG oslo_concurrency.lockutils [req-cd9ff4aa-a31e-47af-ad73-3eab8aecaafe req-be6f0dc2-2b9e-4552-b72a-edf47d17cf4d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:08:18 np0005466030 nova_compute[230518]: 2025-10-02 13:08:18.514 2 DEBUG nova.network.neutron [req-cd9ff4aa-a31e-47af-ad73-3eab8aecaafe req-be6f0dc2-2b9e-4552-b72a-edf47d17cf4d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:08:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:19.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:19.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:20 np0005466030 nova_compute[230518]: 2025-10-02 13:08:20.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:20 np0005466030 nova_compute[230518]: 2025-10-02 13:08:20.191 2 DEBUG nova.network.neutron [req-cd9ff4aa-a31e-47af-ad73-3eab8aecaafe req-be6f0dc2-2b9e-4552-b72a-edf47d17cf4d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updated VIF entry in instance network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:08:20 np0005466030 nova_compute[230518]: 2025-10-02 13:08:20.191 2 DEBUG nova.network.neutron [req-cd9ff4aa-a31e-47af-ad73-3eab8aecaafe req-be6f0dc2-2b9e-4552-b72a-edf47d17cf4d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:08:20 np0005466030 nova_compute[230518]: 2025-10-02 13:08:20.357 2 DEBUG oslo_concurrency.lockutils [req-cd9ff4aa-a31e-47af-ad73-3eab8aecaafe req-be6f0dc2-2b9e-4552-b72a-edf47d17cf4d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:08:21 np0005466030 nova_compute[230518]: 2025-10-02 13:08:21.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:21.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:21 np0005466030 podman[303631]: 2025-10-02 13:08:21.82513096 +0000 UTC m=+0.064631111 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:08:21 np0005466030 podman[303630]: 2025-10-02 13:08:21.861618245 +0000 UTC m=+0.093926530 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Oct  2 09:08:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:21.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:22 np0005466030 nova_compute[230518]: 2025-10-02 13:08:22.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:22 np0005466030 nova_compute[230518]: 2025-10-02 13:08:22.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:22 np0005466030 nova_compute[230518]: 2025-10-02 13:08:22.078 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:22 np0005466030 nova_compute[230518]: 2025-10-02 13:08:22.078 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:22 np0005466030 nova_compute[230518]: 2025-10-02 13:08:22.079 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:08:22 np0005466030 nova_compute[230518]: 2025-10-02 13:08:22.079 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:08:22 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1332190033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:08:22 np0005466030 nova_compute[230518]: 2025-10-02 13:08:22.585 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:22 np0005466030 nova_compute[230518]: 2025-10-02 13:08:22.660 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:08:22 np0005466030 nova_compute[230518]: 2025-10-02 13:08:22.661 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:08:22 np0005466030 nova_compute[230518]: 2025-10-02 13:08:22.880 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:08:22 np0005466030 nova_compute[230518]: 2025-10-02 13:08:22.881 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4093MB free_disk=20.961849212646484GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:08:22 np0005466030 nova_compute[230518]: 2025-10-02 13:08:22.882 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:22 np0005466030 nova_compute[230518]: 2025-10-02 13:08:22.883 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:22 np0005466030 nova_compute[230518]: 2025-10-02 13:08:22.961 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance b4640b6e-b1e0-4168-9970-c5d05a0e1621 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:08:22 np0005466030 nova_compute[230518]: 2025-10-02 13:08:22.961 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:08:22 np0005466030 nova_compute[230518]: 2025-10-02 13:08:22.962 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:08:22 np0005466030 nova_compute[230518]: 2025-10-02 13:08:22.997 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:08:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:08:23 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3766980705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:08:23 np0005466030 nova_compute[230518]: 2025-10-02 13:08:23.463 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:08:23 np0005466030 nova_compute[230518]: 2025-10-02 13:08:23.474 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:08:23 np0005466030 nova_compute[230518]: 2025-10-02 13:08:23.491 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:08:23 np0005466030 nova_compute[230518]: 2025-10-02 13:08:23.511 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:08:23 np0005466030 nova_compute[230518]: 2025-10-02 13:08:23.511 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:23 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:08:23 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:08:23 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:08:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:23.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:08:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:23.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:08:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:25 np0005466030 nova_compute[230518]: 2025-10-02 13:08:25.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:25.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:25.964 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:25.964 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:25.965 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:08:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:25.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:08:26 np0005466030 nova_compute[230518]: 2025-10-02 13:08:26.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:27 np0005466030 nova_compute[230518]: 2025-10-02 13:08:27.512 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:27 np0005466030 nova_compute[230518]: 2025-10-02 13:08:27.512 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:27 np0005466030 nova_compute[230518]: 2025-10-02 13:08:27.513 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:08:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:27.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:27.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:28 np0005466030 nova_compute[230518]: 2025-10-02 13:08:28.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:28 np0005466030 nova_compute[230518]: 2025-10-02 13:08:28.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:28 np0005466030 ovn_controller[129257]: 2025-10-02T13:08:28Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:d8:5a 10.100.0.6
Oct  2 09:08:28 np0005466030 ovn_controller[129257]: 2025-10-02T13:08:28Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:d8:5a 10.100.0.6
Oct  2 09:08:29 np0005466030 nova_compute[230518]: 2025-10-02 13:08:29.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:29.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:29 np0005466030 nova_compute[230518]: 2025-10-02 13:08:29.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:29.910 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:08:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:29.913 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:08:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:29.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:30 np0005466030 nova_compute[230518]: 2025-10-02 13:08:30.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:31 np0005466030 nova_compute[230518]: 2025-10-02 13:08:31.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:31 np0005466030 nova_compute[230518]: 2025-10-02 13:08:31.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:31 np0005466030 nova_compute[230518]: 2025-10-02 13:08:31.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:31.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:31.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:32 np0005466030 nova_compute[230518]: 2025-10-02 13:08:32.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:32 np0005466030 nova_compute[230518]: 2025-10-02 13:08:32.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:08:32 np0005466030 nova_compute[230518]: 2025-10-02 13:08:32.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:08:32 np0005466030 nova_compute[230518]: 2025-10-02 13:08:32.261 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:08:32 np0005466030 nova_compute[230518]: 2025-10-02 13:08:32.261 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:08:32 np0005466030 nova_compute[230518]: 2025-10-02 13:08:32.262 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:08:32 np0005466030 nova_compute[230518]: 2025-10-02 13:08:32.262 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:08:32 np0005466030 podman[303849]: 2025-10-02 13:08:32.8568751 +0000 UTC m=+0.097468061 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:08:32 np0005466030 podman[303850]: 2025-10-02 13:08:32.898626701 +0000 UTC m=+0.138169190 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 09:08:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:33.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:34.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:35 np0005466030 nova_compute[230518]: 2025-10-02 13:08:35.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:35 np0005466030 nova_compute[230518]: 2025-10-02 13:08:35.432 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:08:35 np0005466030 nova_compute[230518]: 2025-10-02 13:08:35.449 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:08:35 np0005466030 nova_compute[230518]: 2025-10-02 13:08:35.449 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:08:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:35.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:36.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:36 np0005466030 nova_compute[230518]: 2025-10-02 13:08:36.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:37 np0005466030 nova_compute[230518]: 2025-10-02 13:08:37.085 2 DEBUG oslo_concurrency.lockutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:37 np0005466030 nova_compute[230518]: 2025-10-02 13:08:37.086 2 DEBUG oslo_concurrency.lockutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:37 np0005466030 nova_compute[230518]: 2025-10-02 13:08:37.086 2 INFO nova.compute.manager [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Shelving#033[00m
Oct  2 09:08:37 np0005466030 nova_compute[230518]: 2025-10-02 13:08:37.105 2 DEBUG nova.virt.libvirt.driver [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 09:08:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:37.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:38.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:38 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:38.916 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:39.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:40.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:40 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:08:40 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:08:40 np0005466030 nova_compute[230518]: 2025-10-02 13:08:40.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:40 np0005466030 kernel: tap3994280c-c2 (unregistering): left promiscuous mode
Oct  2 09:08:40 np0005466030 NetworkManager[44960]: <info>  [1759410520.8636] device (tap3994280c-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:08:40 np0005466030 ovn_controller[129257]: 2025-10-02T13:08:40Z|00797|binding|INFO|Releasing lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 from this chassis (sb_readonly=0)
Oct  2 09:08:40 np0005466030 nova_compute[230518]: 2025-10-02 13:08:40.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:40 np0005466030 ovn_controller[129257]: 2025-10-02T13:08:40Z|00798|binding|INFO|Setting lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 down in Southbound
Oct  2 09:08:40 np0005466030 ovn_controller[129257]: 2025-10-02T13:08:40Z|00799|binding|INFO|Removing iface tap3994280c-c2 ovn-installed in OVS
Oct  2 09:08:40 np0005466030 nova_compute[230518]: 2025-10-02 13:08:40.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:40 np0005466030 nova_compute[230518]: 2025-10-02 13:08:40.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:40.934 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:d8:5a 10.100.0.6'], port_security=['fa:16:3e:20:d8:5a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b4640b6e-b1e0-4168-9970-c5d05a0e1621', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4223a8cc-f72a-428d-accb-3f4210096878', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '954946ff6b204fba90f767ec67210620', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b142f2e0-15cd-46cd-bd2d-c2af8d42e97a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.194'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8308a587-4cdc-4eb3-9fc6-aab7267ec23f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=3994280c-c2c8-4fa7-bc48-f7b048d43015) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:08:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:40.936 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 3994280c-c2c8-4fa7-bc48-f7b048d43015 in datapath 4223a8cc-f72a-428d-accb-3f4210096878 unbound from our chassis#033[00m
Oct  2 09:08:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:40.939 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4223a8cc-f72a-428d-accb-3f4210096878, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:08:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:40.940 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4148c109-86b5-48c2-88f6-ef6168f768a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:40.941 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 namespace which is not needed anymore#033[00m
Oct  2 09:08:40 np0005466030 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000c1.scope: Deactivated successfully.
Oct  2 09:08:40 np0005466030 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000c1.scope: Consumed 14.571s CPU time.
Oct  2 09:08:40 np0005466030 systemd-machined[188247]: Machine qemu-90-instance-000000c1 terminated.
Oct  2 09:08:41 np0005466030 nova_compute[230518]: 2025-10-02 13:08:41.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:41 np0005466030 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[303613]: [NOTICE]   (303617) : haproxy version is 2.8.14-c23fe91
Oct  2 09:08:41 np0005466030 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[303613]: [NOTICE]   (303617) : path to executable is /usr/sbin/haproxy
Oct  2 09:08:41 np0005466030 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[303613]: [WARNING]  (303617) : Exiting Master process...
Oct  2 09:08:41 np0005466030 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[303613]: [WARNING]  (303617) : Exiting Master process...
Oct  2 09:08:41 np0005466030 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[303613]: [ALERT]    (303617) : Current worker (303619) exited with code 143 (Terminated)
Oct  2 09:08:41 np0005466030 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[303613]: [WARNING]  (303617) : All workers exited. Exiting... (0)
Oct  2 09:08:41 np0005466030 systemd[1]: libpod-a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991.scope: Deactivated successfully.
Oct  2 09:08:41 np0005466030 podman[303965]: 2025-10-02 13:08:41.116545681 +0000 UTC m=+0.073114236 container died a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:08:41 np0005466030 nova_compute[230518]: 2025-10-02 13:08:41.161 2 INFO nova.virt.libvirt.driver [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance shutdown successfully after 4 seconds.#033[00m
Oct  2 09:08:41 np0005466030 nova_compute[230518]: 2025-10-02 13:08:41.169 2 INFO nova.virt.libvirt.driver [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance destroyed successfully.#033[00m
Oct  2 09:08:41 np0005466030 nova_compute[230518]: 2025-10-02 13:08:41.170 2 DEBUG nova.objects.instance [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'numa_topology' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:08:41 np0005466030 nova_compute[230518]: 2025-10-02 13:08:41.185 2 DEBUG nova.compute.manager [req-652a29b6-3672-40dc-8d90-88dce1dab71f req-ae40a45c-d962-4eeb-9ffc-429ca794b263 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-unplugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:08:41 np0005466030 nova_compute[230518]: 2025-10-02 13:08:41.186 2 DEBUG oslo_concurrency.lockutils [req-652a29b6-3672-40dc-8d90-88dce1dab71f req-ae40a45c-d962-4eeb-9ffc-429ca794b263 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:41 np0005466030 nova_compute[230518]: 2025-10-02 13:08:41.186 2 DEBUG oslo_concurrency.lockutils [req-652a29b6-3672-40dc-8d90-88dce1dab71f req-ae40a45c-d962-4eeb-9ffc-429ca794b263 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:41 np0005466030 nova_compute[230518]: 2025-10-02 13:08:41.187 2 DEBUG oslo_concurrency.lockutils [req-652a29b6-3672-40dc-8d90-88dce1dab71f req-ae40a45c-d962-4eeb-9ffc-429ca794b263 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:41 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991-userdata-shm.mount: Deactivated successfully.
Oct  2 09:08:41 np0005466030 nova_compute[230518]: 2025-10-02 13:08:41.187 2 DEBUG nova.compute.manager [req-652a29b6-3672-40dc-8d90-88dce1dab71f req-ae40a45c-d962-4eeb-9ffc-429ca794b263 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] No waiting events found dispatching network-vif-unplugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:08:41 np0005466030 nova_compute[230518]: 2025-10-02 13:08:41.188 2 WARNING nova.compute.manager [req-652a29b6-3672-40dc-8d90-88dce1dab71f req-ae40a45c-d962-4eeb-9ffc-429ca794b263 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received unexpected event network-vif-unplugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 for instance with vm_state active and task_state shelving.#033[00m
Oct  2 09:08:41 np0005466030 systemd[1]: var-lib-containers-storage-overlay-7eedb82cd50d6ea7a9033c769ee0024227330ee31a67a92bbebc2e547b0c3439-merged.mount: Deactivated successfully.
Oct  2 09:08:41 np0005466030 podman[303965]: 2025-10-02 13:08:41.247691249 +0000 UTC m=+0.204259794 container cleanup a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 09:08:41 np0005466030 systemd[1]: libpod-conmon-a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991.scope: Deactivated successfully.
Oct  2 09:08:41 np0005466030 podman[304007]: 2025-10-02 13:08:41.399944911 +0000 UTC m=+0.117193282 container remove a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct  2 09:08:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:41.407 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3d0a2d1e-7639-4120-8609-577972d5433a]: (4, ('Thu Oct  2 01:08:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 (a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991)\na942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991\nThu Oct  2 01:08:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 (a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991)\na942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:41.408 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b7165933-d276-44d4-9e71-958dbb923fbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:41.409 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4223a8cc-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:08:41 np0005466030 nova_compute[230518]: 2025-10-02 13:08:41.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:41 np0005466030 kernel: tap4223a8cc-f0: left promiscuous mode
Oct  2 09:08:41 np0005466030 nova_compute[230518]: 2025-10-02 13:08:41.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:41.435 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3936b692-9e63-45ba-97d8-ace66fe93d5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:41.465 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c22ca51d-6ed9-446b-946c-86285d3dff42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:41.466 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2eee297b-4c44-4ae6-87a8-d4a481cf6d3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:41.486 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[40c1b003-c581-4056-b308-ef3ea44326b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 835576, 'reachable_time': 42302, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304027, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:41 np0005466030 systemd[1]: run-netns-ovnmeta\x2d4223a8cc\x2df72a\x2d428d\x2daccb\x2d3f4210096878.mount: Deactivated successfully.
Oct  2 09:08:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:41.491 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:08:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:08:41.492 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[073c0d3e-80ff-4e3a-a6f7-9f0e6d077e45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:08:41 np0005466030 nova_compute[230518]: 2025-10-02 13:08:41.700 2 INFO nova.virt.libvirt.driver [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Beginning cold snapshot process#033[00m
Oct  2 09:08:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:41.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:41 np0005466030 nova_compute[230518]: 2025-10-02 13:08:41.911 2 DEBUG nova.virt.libvirt.imagebackend [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  2 09:08:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:42.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:42 np0005466030 nova_compute[230518]: 2025-10-02 13:08:42.153 2 DEBUG nova.storage.rbd_utils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] creating snapshot(ca0adbc11bba40809fd8b2643fe82da3) on rbd image(b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 09:08:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e360 e360: 3 total, 3 up, 3 in
Oct  2 09:08:43 np0005466030 nova_compute[230518]: 2025-10-02 13:08:43.333 2 DEBUG nova.compute.manager [req-90067b56-fb71-47bd-9580-9381d733ef1f req-4d048459-cc40-40f3-ab13-6ce3b8970385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:08:43 np0005466030 nova_compute[230518]: 2025-10-02 13:08:43.334 2 DEBUG oslo_concurrency.lockutils [req-90067b56-fb71-47bd-9580-9381d733ef1f req-4d048459-cc40-40f3-ab13-6ce3b8970385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:08:43 np0005466030 nova_compute[230518]: 2025-10-02 13:08:43.334 2 DEBUG oslo_concurrency.lockutils [req-90067b56-fb71-47bd-9580-9381d733ef1f req-4d048459-cc40-40f3-ab13-6ce3b8970385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:08:43 np0005466030 nova_compute[230518]: 2025-10-02 13:08:43.334 2 DEBUG oslo_concurrency.lockutils [req-90067b56-fb71-47bd-9580-9381d733ef1f req-4d048459-cc40-40f3-ab13-6ce3b8970385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:08:43 np0005466030 nova_compute[230518]: 2025-10-02 13:08:43.334 2 DEBUG nova.compute.manager [req-90067b56-fb71-47bd-9580-9381d733ef1f req-4d048459-cc40-40f3-ab13-6ce3b8970385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] No waiting events found dispatching network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:08:43 np0005466030 nova_compute[230518]: 2025-10-02 13:08:43.334 2 WARNING nova.compute.manager [req-90067b56-fb71-47bd-9580-9381d733ef1f req-4d048459-cc40-40f3-ab13-6ce3b8970385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received unexpected event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct  2 09:08:43 np0005466030 nova_compute[230518]: 2025-10-02 13:08:43.435 2 DEBUG nova.storage.rbd_utils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] cloning vms/b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk@ca0adbc11bba40809fd8b2643fe82da3 to images/c7b69b23-2de1-4a42-80f6-94ba898e82eb clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 09:08:43 np0005466030 nova_compute[230518]: 2025-10-02 13:08:43.465 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:08:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:43.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:43 np0005466030 nova_compute[230518]: 2025-10-02 13:08:43.867 2 DEBUG nova.storage.rbd_utils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] flattening images/c7b69b23-2de1-4a42-80f6-94ba898e82eb flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 09:08:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:44.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:45 np0005466030 nova_compute[230518]: 2025-10-02 13:08:45.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:45.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:46.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:46 np0005466030 nova_compute[230518]: 2025-10-02 13:08:46.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:47 np0005466030 nova_compute[230518]: 2025-10-02 13:08:47.640 2 DEBUG nova.storage.rbd_utils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] removing snapshot(ca0adbc11bba40809fd8b2643fe82da3) on rbd image(b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 09:08:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:47.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:48.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e361 e361: 3 total, 3 up, 3 in
Oct  2 09:08:48 np0005466030 nova_compute[230518]: 2025-10-02 13:08:48.690 2 DEBUG nova.storage.rbd_utils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] creating snapshot(snap) on rbd image(c7b69b23-2de1-4a42-80f6-94ba898e82eb) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  2 09:08:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:49.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:50.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:50 np0005466030 nova_compute[230518]: 2025-10-02 13:08:50.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e362 e362: 3 total, 3 up, 3 in
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #139. Immutable memtables: 0.
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:50.897484) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 139
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410530897542, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 1259, "num_deletes": 250, "total_data_size": 2639086, "memory_usage": 2682760, "flush_reason": "Manual Compaction"}
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #140: started
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410530927733, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 140, "file_size": 1088282, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68385, "largest_seqno": 69639, "table_properties": {"data_size": 1083901, "index_size": 1840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11847, "raw_average_key_size": 20, "raw_value_size": 1074322, "raw_average_value_size": 1901, "num_data_blocks": 82, "num_entries": 565, "num_filter_entries": 565, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410434, "oldest_key_time": 1759410434, "file_creation_time": 1759410530, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 30320 microseconds, and 6925 cpu microseconds.
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:50.927800) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #140: 1088282 bytes OK
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:50.927835) [db/memtable_list.cc:519] [default] Level-0 commit table #140 started
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:50.942835) [db/memtable_list.cc:722] [default] Level-0 commit table #140: memtable #1 done
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:50.942888) EVENT_LOG_v1 {"time_micros": 1759410530942877, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:50.942912) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 2633023, prev total WAL file size 2633023, number of live WAL files 2.
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000136.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:50.943981) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323537' seq:72057594037927935, type:22 .. '6D6772737461740032353038' seq:0, type:0; will stop at (end)
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [140(1062KB)], [138(11MB)]
Oct  2 09:08:50 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410530944093, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [140], "files_L6": [138], "score": -1, "input_data_size": 13627880, "oldest_snapshot_seqno": -1}
Oct  2 09:08:51 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #141: 9173 keys, 10418281 bytes, temperature: kUnknown
Oct  2 09:08:51 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410531015684, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 141, "file_size": 10418281, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10361359, "index_size": 32873, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 240831, "raw_average_key_size": 26, "raw_value_size": 10202859, "raw_average_value_size": 1112, "num_data_blocks": 1250, "num_entries": 9173, "num_filter_entries": 9173, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410530, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 141, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:08:51 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:08:51 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:51.015948) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10418281 bytes
Oct  2 09:08:51 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:51.020064) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.2 rd, 145.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 12.0 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(22.1) write-amplify(9.6) OK, records in: 9648, records dropped: 475 output_compression: NoCompression
Oct  2 09:08:51 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:51.020125) EVENT_LOG_v1 {"time_micros": 1759410531020101, "job": 88, "event": "compaction_finished", "compaction_time_micros": 71656, "compaction_time_cpu_micros": 42192, "output_level": 6, "num_output_files": 1, "total_output_size": 10418281, "num_input_records": 9648, "num_output_records": 9173, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:08:51 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:08:51 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410531020788, "job": 88, "event": "table_file_deletion", "file_number": 140}
Oct  2 09:08:51 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000138.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:08:51 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410531025675, "job": 88, "event": "table_file_deletion", "file_number": 138}
Oct  2 09:08:51 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:50.943794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:08:51 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:51.025980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:08:51 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:51.025989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:08:51 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:51.025991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:08:51 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:51.025993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:08:51 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:51.025996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:08:51 np0005466030 nova_compute[230518]: 2025-10-02 13:08:51.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:08:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:51.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:52.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:52 np0005466030 podman[304170]: 2025-10-02 13:08:52.834177906 +0000 UTC m=+0.068148721 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:08:52 np0005466030 podman[304169]: 2025-10-02 13:08:52.847302558 +0000 UTC m=+0.090517373 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:08:53 np0005466030 nova_compute[230518]: 2025-10-02 13:08:53.335 2 INFO nova.virt.libvirt.driver [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Snapshot image upload complete#033[00m
Oct  2 09:08:53 np0005466030 nova_compute[230518]: 2025-10-02 13:08:53.336 2 DEBUG nova.compute.manager [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:08:53 np0005466030 nova_compute[230518]: 2025-10-02 13:08:53.396 2 INFO nova.compute.manager [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Shelve offloading#033[00m
Oct  2 09:08:53 np0005466030 nova_compute[230518]: 2025-10-02 13:08:53.404 2 INFO nova.virt.libvirt.driver [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance destroyed successfully.#033[00m
Oct  2 09:08:53 np0005466030 nova_compute[230518]: 2025-10-02 13:08:53.405 2 DEBUG nova.compute.manager [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:08:53 np0005466030 nova_compute[230518]: 2025-10-02 13:08:53.408 2 DEBUG oslo_concurrency.lockutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:08:53 np0005466030 nova_compute[230518]: 2025-10-02 13:08:53.408 2 DEBUG oslo_concurrency.lockutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquired lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:08:53 np0005466030 nova_compute[230518]: 2025-10-02 13:08:53.408 2 DEBUG nova.network.neutron [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:08:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:53.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:54.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:55 np0005466030 nova_compute[230518]: 2025-10-02 13:08:55.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:55 np0005466030 nova_compute[230518]: 2025-10-02 13:08:55.495 2 DEBUG nova.network.neutron [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:08:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e363 e363: 3 total, 3 up, 3 in
Oct  2 09:08:55 np0005466030 nova_compute[230518]: 2025-10-02 13:08:55.524 2 DEBUG oslo_concurrency.lockutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Releasing lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:08:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:55.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:56.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:56 np0005466030 nova_compute[230518]: 2025-10-02 13:08:56.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:56 np0005466030 nova_compute[230518]: 2025-10-02 13:08:56.163 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410521.1617134, b4640b6e-b1e0-4168-9970-c5d05a0e1621 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:08:56 np0005466030 nova_compute[230518]: 2025-10-02 13:08:56.164 2 INFO nova.compute.manager [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] VM Stopped (Lifecycle Event)
Oct  2 09:08:56 np0005466030 nova_compute[230518]: 2025-10-02 13:08:56.199 2 DEBUG nova.compute.manager [None req-fd3c069e-8535-4973-ac19-ee02d980d737 - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:08:56 np0005466030 nova_compute[230518]: 2025-10-02 13:08:56.202 2 DEBUG nova.compute.manager [None req-fd3c069e-8535-4973-ac19-ee02d980d737 - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 09:08:56 np0005466030 nova_compute[230518]: 2025-10-02 13:08:56.222 2 INFO nova.compute.manager [None req-fd3c069e-8535-4973-ac19-ee02d980d737 - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Oct  2 09:08:56 np0005466030 nova_compute[230518]: 2025-10-02 13:08:56.493 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:08:56 np0005466030 nova_compute[230518]: 2025-10-02 13:08:56.493 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:08:56 np0005466030 nova_compute[230518]: 2025-10-02 13:08:56.513 2 DEBUG nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 09:08:56 np0005466030 nova_compute[230518]: 2025-10-02 13:08:56.649 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:08:56 np0005466030 nova_compute[230518]: 2025-10-02 13:08:56.650 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:08:56 np0005466030 nova_compute[230518]: 2025-10-02 13:08:56.657 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 09:08:56 np0005466030 nova_compute[230518]: 2025-10-02 13:08:56.657 2 INFO nova.compute.claims [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Claim successful on node compute-1.ctlplane.example.com
Oct  2 09:08:56 np0005466030 nova_compute[230518]: 2025-10-02 13:08:56.934 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.221 2 INFO nova.virt.libvirt.driver [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance destroyed successfully.
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.222 2 DEBUG nova.objects.instance [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'resources' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.239 2 DEBUG nova.virt.libvirt.vif [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1341299623',display_name='tempest-TestShelveInstance-server-1341299623',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1341299623',id=193,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAv/W6nS9dywKRKPfI/I8VC3fYq9NYpWuLkQShDmIF9/8AaTGucbYIXWWTw6soOxBIduh/FVz2D47Pgv7ES8PO/armLbuwNtkOQG1B1V9kNoQMvYjaLsOHDz5UTKAiXYBA==',key_name='tempest-TestShelveInstance-573161256',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:08:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='954946ff6b204fba90f767ec67210620',ramdisk_id='',reservation_id='r-qh5aopcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-228669170',owner_user_name='tempest-TestShelveInstance-228669170-project-member',shelved_at='2025-10-02T13:08:53.336076',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='c7b69b23-2de1-4a42-80f6-94ba898e82eb'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:08:41Z,user_data=None,user_id='62f4c4b5cc194bd59ca9cc9f1da78a79',uuid=b4640b6e-b1e0-4168-9970-c5d05a0e1621,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.239 2 DEBUG nova.network.os_vif_util [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converting VIF {"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.240 2 DEBUG nova.network.os_vif_util [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.241 2 DEBUG os_vif [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.243 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3994280c-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.247 2 INFO os_vif [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2')
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.302 2 DEBUG nova.compute.manager [req-4e8d1f53-dd51-48b2-accd-3ed2a63dae58 req-f7ce60a8-d77e-464e-98a4-02ac3a6f9055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.302 2 DEBUG nova.compute.manager [req-4e8d1f53-dd51-48b2-accd-3ed2a63dae58 req-f7ce60a8-d77e-464e-98a4-02ac3a6f9055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing instance network info cache due to event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.303 2 DEBUG oslo_concurrency.lockutils [req-4e8d1f53-dd51-48b2-accd-3ed2a63dae58 req-f7ce60a8-d77e-464e-98a4-02ac3a6f9055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.303 2 DEBUG oslo_concurrency.lockutils [req-4e8d1f53-dd51-48b2-accd-3ed2a63dae58 req-f7ce60a8-d77e-464e-98a4-02ac3a6f9055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.303 2 DEBUG nova.network.neutron [req-4e8d1f53-dd51-48b2-accd-3ed2a63dae58 req-f7ce60a8-d77e-464e-98a4-02ac3a6f9055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 09:08:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:08:57 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2061486174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.522 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.528 2 DEBUG nova.compute.provider_tree [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.548 2 DEBUG nova.scheduler.client.report [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.577 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.578 2 DEBUG nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.625 2 DEBUG nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.625 2 DEBUG nova.network.neutron [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.641 2 INFO nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.656 2 DEBUG nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.729 2 DEBUG nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.730 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.731 2 INFO nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Creating image(s)
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.770 2 DEBUG nova.storage.rbd_utils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.800 2 DEBUG nova.storage.rbd_utils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.826 2 DEBUG nova.storage.rbd_utils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.830 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:08:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:57.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.900 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.902 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.902 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.903 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.930 2 DEBUG nova.storage.rbd_utils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:08:57 np0005466030 nova_compute[230518]: 2025-10-02 13:08:57.933 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:08:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:08:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:58.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:08:58 np0005466030 nova_compute[230518]: 2025-10-02 13:08:58.261 2 DEBUG nova.policy [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '156cc6022c70402ab6d194a340b076d5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 09:08:58 np0005466030 nova_compute[230518]: 2025-10-02 13:08:58.872 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.939s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:08:58 np0005466030 nova_compute[230518]: 2025-10-02 13:08:58.937 2 DEBUG nova.storage.rbd_utils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] resizing rbd image 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 09:08:59 np0005466030 nova_compute[230518]: 2025-10-02 13:08:59.032 2 DEBUG nova.network.neutron [req-4e8d1f53-dd51-48b2-accd-3ed2a63dae58 req-f7ce60a8-d77e-464e-98a4-02ac3a6f9055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updated VIF entry in instance network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 09:08:59 np0005466030 nova_compute[230518]: 2025-10-02 13:08:59.033 2 DEBUG nova.network.neutron [req-4e8d1f53-dd51-48b2-accd-3ed2a63dae58 req-f7ce60a8-d77e-464e-98a4-02ac3a6f9055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": null, "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap3994280c-c2", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:08:59 np0005466030 nova_compute[230518]: 2025-10-02 13:08:59.040 2 DEBUG nova.objects.instance [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'migration_context' on Instance uuid 658821a7-5b97-43ad-8fe2-46e5303cf56c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:08:59 np0005466030 nova_compute[230518]: 2025-10-02 13:08:59.053 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 09:08:59 np0005466030 nova_compute[230518]: 2025-10-02 13:08:59.053 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Ensure instance console log exists: /var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 09:08:59 np0005466030 nova_compute[230518]: 2025-10-02 13:08:59.054 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:08:59 np0005466030 nova_compute[230518]: 2025-10-02 13:08:59.054 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:08:59 np0005466030 nova_compute[230518]: 2025-10-02 13:08:59.054 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:08:59 np0005466030 nova_compute[230518]: 2025-10-02 13:08:59.055 2 DEBUG oslo_concurrency.lockutils [req-4e8d1f53-dd51-48b2-accd-3ed2a63dae58 req-f7ce60a8-d77e-464e-98a4-02ac3a6f9055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:08:59 np0005466030 nova_compute[230518]: 2025-10-02 13:08:59.102 2 DEBUG nova.network.neutron [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Successfully created port: 15cb070c-0f52-464f-a2b4-8597c15212e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 09:08:59 np0005466030 nova_compute[230518]: 2025-10-02 13:08:59.495 2 INFO nova.virt.libvirt.driver [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Deleting instance files /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621_del
Oct  2 09:08:59 np0005466030 nova_compute[230518]: 2025-10-02 13:08:59.496 2 INFO nova.virt.libvirt.driver [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Deletion of /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621_del complete
Oct  2 09:08:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:08:59 np0005466030 nova_compute[230518]: 2025-10-02 13:08:59.583 2 INFO nova.scheduler.client.report [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Deleted allocations for instance b4640b6e-b1e0-4168-9970-c5d05a0e1621
Oct  2 09:08:59 np0005466030 nova_compute[230518]: 2025-10-02 13:08:59.627 2 DEBUG oslo_concurrency.lockutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:08:59 np0005466030 nova_compute[230518]: 2025-10-02 13:08:59.627 2 DEBUG oslo_concurrency.lockutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:08:59 np0005466030 nova_compute[230518]: 2025-10-02 13:08:59.670 2 DEBUG oslo_concurrency.processutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:08:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:08:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:08:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:59.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:00.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:09:00 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3712778003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:09:00 np0005466030 nova_compute[230518]: 2025-10-02 13:09:00.093 2 DEBUG oslo_concurrency.processutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:09:00 np0005466030 nova_compute[230518]: 2025-10-02 13:09:00.099 2 DEBUG nova.compute.provider_tree [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:09:00 np0005466030 nova_compute[230518]: 2025-10-02 13:09:00.116 2 DEBUG nova.scheduler.client.report [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:09:00 np0005466030 nova_compute[230518]: 2025-10-02 13:09:00.136 2 DEBUG oslo_concurrency.lockutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:09:00 np0005466030 nova_compute[230518]: 2025-10-02 13:09:00.176 2 DEBUG oslo_concurrency.lockutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 23.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:09:00 np0005466030 nova_compute[230518]: 2025-10-02 13:09:00.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:09:00 np0005466030 nova_compute[230518]: 2025-10-02 13:09:00.290 2 DEBUG nova.network.neutron [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Successfully updated port: 15cb070c-0f52-464f-a2b4-8597c15212e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 09:09:00 np0005466030 nova_compute[230518]: 2025-10-02 13:09:00.303 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:09:00 np0005466030 nova_compute[230518]: 2025-10-02 13:09:00.304 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquired lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:09:00 np0005466030 nova_compute[230518]: 2025-10-02 13:09:00.304 2 DEBUG nova.network.neutron [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 09:09:00 np0005466030 nova_compute[230518]: 2025-10-02 13:09:00.456 2 DEBUG nova.compute.manager [req-c335f166-70bd-459f-a0cd-4a4f9939d4f1 req-c49673a9-67cf-4d53-9643-6201ea7bcebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received event network-changed-15cb070c-0f52-464f-a2b4-8597c15212e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:09:00 np0005466030 nova_compute[230518]: 2025-10-02 13:09:00.457 2 DEBUG nova.compute.manager [req-c335f166-70bd-459f-a0cd-4a4f9939d4f1 req-c49673a9-67cf-4d53-9643-6201ea7bcebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Refreshing instance network info cache due to event network-changed-15cb070c-0f52-464f-a2b4-8597c15212e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 09:09:00 np0005466030 nova_compute[230518]: 2025-10-02 13:09:00.457 2 DEBUG oslo_concurrency.lockutils [req-c335f166-70bd-459f-a0cd-4a4f9939d4f1 req-c49673a9-67cf-4d53-9643-6201ea7bcebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:09:00 np0005466030 nova_compute[230518]: 2025-10-02 13:09:00.478 2 DEBUG nova.network.neutron [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.510 2 DEBUG nova.network.neutron [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updating instance_info_cache with network_info: [{"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.532 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Releasing lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.533 2 DEBUG nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Instance network_info: |[{"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.533 2 DEBUG oslo_concurrency.lockutils [req-c335f166-70bd-459f-a0cd-4a4f9939d4f1 req-c49673a9-67cf-4d53-9643-6201ea7bcebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.533 2 DEBUG nova.network.neutron [req-c335f166-70bd-459f-a0cd-4a4f9939d4f1 req-c49673a9-67cf-4d53-9643-6201ea7bcebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Refreshing network info cache for port 15cb070c-0f52-464f-a2b4-8597c15212e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.536 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Start _get_guest_xml network_info=[{"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.541 2 WARNING nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.545 2 DEBUG nova.virt.libvirt.host [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.545 2 DEBUG nova.virt.libvirt.host [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.548 2 DEBUG nova.virt.libvirt.host [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.548 2 DEBUG nova.virt.libvirt.host [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.549 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.550 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.550 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.550 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.550 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.551 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.551 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.551 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.551 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.552 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.552 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.552 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.555 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:09:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:01.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:09:01 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/800651783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:09:01 np0005466030 nova_compute[230518]: 2025-10-02 13:09:01.988 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.009 2 DEBUG nova.storage.rbd_utils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.013 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:09:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:02.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:09:02 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/448831348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.601 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.603 2 DEBUG nova.virt.libvirt.vif [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:08:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=197,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-51wwuied',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_a
llocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:08:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=658821a7-5b97-43ad-8fe2-46e5303cf56c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.604 2 DEBUG nova.network.os_vif_util [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.605 2 DEBUG nova.network.os_vif_util [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:47:21,bridge_name='br-int',has_traffic_filtering=True,id=15cb070c-0f52-464f-a2b4-8597c15212e9,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15cb070c-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.607 2 DEBUG nova.objects.instance [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 658821a7-5b97-43ad-8fe2-46e5303cf56c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.626 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:09:02 np0005466030 nova_compute[230518]:  <uuid>658821a7-5b97-43ad-8fe2-46e5303cf56c</uuid>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:  <name>instance-000000c5</name>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <nova:name>multiattach-server-1</nova:name>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:09:01</nova:creationTime>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:09:02 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:        <nova:user uuid="156cc6022c70402ab6d194a340b076d5">tempest-AttachVolumeMultiAttachTest-2011266702-project-member</nova:user>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:        <nova:project uuid="9f85b8f387b146d29eabe946c4fbdee8">tempest-AttachVolumeMultiAttachTest-2011266702</nova:project>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:        <nova:port uuid="15cb070c-0f52-464f-a2b4-8597c15212e9">
Oct  2 09:09:02 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <entry name="serial">658821a7-5b97-43ad-8fe2-46e5303cf56c</entry>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <entry name="uuid">658821a7-5b97-43ad-8fe2-46e5303cf56c</entry>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/658821a7-5b97-43ad-8fe2-46e5303cf56c_disk">
Oct  2 09:09:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:09:02 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/658821a7-5b97-43ad-8fe2-46e5303cf56c_disk.config">
Oct  2 09:09:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:09:02 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:e2:47:21"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <target dev="tap15cb070c-0f"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c/console.log" append="off"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:09:02 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:09:02 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:09:02 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:09:02 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.627 2 DEBUG nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Preparing to wait for external event network-vif-plugged-15cb070c-0f52-464f-a2b4-8597c15212e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.628 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.628 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.629 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.630 2 DEBUG nova.virt.libvirt.vif [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:08:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=197,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-51wwuied',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0'
,network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:08:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=658821a7-5b97-43ad-8fe2-46e5303cf56c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.630 2 DEBUG nova.network.os_vif_util [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.631 2 DEBUG nova.network.os_vif_util [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:47:21,bridge_name='br-int',has_traffic_filtering=True,id=15cb070c-0f52-464f-a2b4-8597c15212e9,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15cb070c-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.632 2 DEBUG os_vif [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:47:21,bridge_name='br-int',has_traffic_filtering=True,id=15cb070c-0f52-464f-a2b4-8597c15212e9,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15cb070c-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.633 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.634 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.637 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15cb070c-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.638 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15cb070c-0f, col_values=(('external_ids', {'iface-id': '15cb070c-0f52-464f-a2b4-8597c15212e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:47:21', 'vm-uuid': '658821a7-5b97-43ad-8fe2-46e5303cf56c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:02 np0005466030 NetworkManager[44960]: <info>  [1759410542.6777] manager: (tap15cb070c-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.682 2 INFO os_vif [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:47:21,bridge_name='br-int',has_traffic_filtering=True,id=15cb070c-0f52-464f-a2b4-8597c15212e9,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15cb070c-0f')#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.727 2 DEBUG nova.network.neutron [req-c335f166-70bd-459f-a0cd-4a4f9939d4f1 req-c49673a9-67cf-4d53-9643-6201ea7bcebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updated VIF entry in instance network info cache for port 15cb070c-0f52-464f-a2b4-8597c15212e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.728 2 DEBUG nova.network.neutron [req-c335f166-70bd-459f-a0cd-4a4f9939d4f1 req-c49673a9-67cf-4d53-9643-6201ea7bcebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updating instance_info_cache with network_info: [{"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.760 2 DEBUG oslo_concurrency.lockutils [req-c335f166-70bd-459f-a0cd-4a4f9939d4f1 req-c49673a9-67cf-4d53-9643-6201ea7bcebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.875 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.876 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.876 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No VIF found with MAC fa:16:3e:e2:47:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.877 2 INFO nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Using config drive#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.905 2 DEBUG nova.storage.rbd_utils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.950 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.951 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:02 np0005466030 nova_compute[230518]: 2025-10-02 13:09:02.951 2 INFO nova.compute.manager [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Unshelving#033[00m
Oct  2 09:09:03 np0005466030 nova_compute[230518]: 2025-10-02 13:09:03.044 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:03 np0005466030 nova_compute[230518]: 2025-10-02 13:09:03.045 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:03 np0005466030 nova_compute[230518]: 2025-10-02 13:09:03.051 2 DEBUG nova.objects.instance [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'pci_requests' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:03 np0005466030 nova_compute[230518]: 2025-10-02 13:09:03.064 2 DEBUG nova.objects.instance [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'numa_topology' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:03 np0005466030 nova_compute[230518]: 2025-10-02 13:09:03.075 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:09:03 np0005466030 nova_compute[230518]: 2025-10-02 13:09:03.075 2 INFO nova.compute.claims [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:09:03 np0005466030 nova_compute[230518]: 2025-10-02 13:09:03.180 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:03 np0005466030 nova_compute[230518]: 2025-10-02 13:09:03.339 2 INFO nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Creating config drive at /var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c/disk.config#033[00m
Oct  2 09:09:03 np0005466030 nova_compute[230518]: 2025-10-02 13:09:03.346 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz8wc6ilf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:03 np0005466030 nova_compute[230518]: 2025-10-02 13:09:03.494 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz8wc6ilf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:09:03 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2060950700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:09:03 np0005466030 nova_compute[230518]: 2025-10-02 13:09:03.676 2 DEBUG nova.storage.rbd_utils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:09:03 np0005466030 nova_compute[230518]: 2025-10-02 13:09:03.680 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c/disk.config 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:03 np0005466030 nova_compute[230518]: 2025-10-02 13:09:03.709 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:03 np0005466030 nova_compute[230518]: 2025-10-02 13:09:03.716 2 DEBUG nova.compute.provider_tree [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:09:03 np0005466030 nova_compute[230518]: 2025-10-02 13:09:03.731 2 DEBUG nova.scheduler.client.report [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:09:03 np0005466030 nova_compute[230518]: 2025-10-02 13:09:03.757 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:03 np0005466030 podman[304571]: 2025-10-02 13:09:03.817227106 +0000 UTC m=+0.059708726 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid)
Oct  2 09:09:03 np0005466030 podman[304579]: 2025-10-02 13:09:03.835151399 +0000 UTC m=+0.067242782 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 09:09:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:03.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:04 np0005466030 nova_compute[230518]: 2025-10-02 13:09:04.009 2 INFO nova.network.neutron [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating port 3994280c-c2c8-4fa7-bc48-f7b048d43015 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Oct  2 09:09:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:04.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:04 np0005466030 nova_compute[230518]: 2025-10-02 13:09:04.782 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c/disk.config 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:04 np0005466030 nova_compute[230518]: 2025-10-02 13:09:04.783 2 INFO nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Deleting local config drive /var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c/disk.config because it was imported into RBD.#033[00m
Oct  2 09:09:04 np0005466030 kernel: tap15cb070c-0f: entered promiscuous mode
Oct  2 09:09:04 np0005466030 NetworkManager[44960]: <info>  [1759410544.8480] manager: (tap15cb070c-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/371)
Oct  2 09:09:04 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:04Z|00800|binding|INFO|Claiming lport 15cb070c-0f52-464f-a2b4-8597c15212e9 for this chassis.
Oct  2 09:09:04 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:04Z|00801|binding|INFO|15cb070c-0f52-464f-a2b4-8597c15212e9: Claiming fa:16:3e:e2:47:21 10.100.0.3
Oct  2 09:09:04 np0005466030 nova_compute[230518]: 2025-10-02 13:09:04.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.875 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:47:21 10.100.0.3'], port_security=['fa:16:3e:e2:47:21 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '658821a7-5b97-43ad-8fe2-46e5303cf56c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c95f312a-09a8-4e2c-af55-3ef0a0e41bfc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=15cb070c-0f52-464f-a2b4-8597c15212e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:09:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.877 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 15cb070c-0f52-464f-a2b4-8597c15212e9 in datapath d9001b9c-bca6-4085-a954-1414269e31bc bound to our chassis#033[00m
Oct  2 09:09:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.878 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9001b9c-bca6-4085-a954-1414269e31bc#033[00m
Oct  2 09:09:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.889 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[511cf011-9d9a-4689-aed4-efb88bc59a78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.890 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd9001b9c-b1 in ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:09:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.893 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd9001b9c-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:09:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.893 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2af4a845-7bb7-4ba6-a46b-729baa1f0636]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.894 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f43e6da6-dd29-4be4-be67-819ffe46b60a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:04 np0005466030 systemd-udevd[304644]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:09:04 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:04Z|00802|binding|INFO|Setting lport 15cb070c-0f52-464f-a2b4-8597c15212e9 ovn-installed in OVS
Oct  2 09:09:04 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:04Z|00803|binding|INFO|Setting lport 15cb070c-0f52-464f-a2b4-8597c15212e9 up in Southbound
Oct  2 09:09:04 np0005466030 nova_compute[230518]: 2025-10-02 13:09:04.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:04 np0005466030 nova_compute[230518]: 2025-10-02 13:09:04.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:04 np0005466030 NetworkManager[44960]: <info>  [1759410544.9124] device (tap15cb070c-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:09:04 np0005466030 NetworkManager[44960]: <info>  [1759410544.9133] device (tap15cb070c-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:09:04 np0005466030 systemd-machined[188247]: New machine qemu-91-instance-000000c5.
Oct  2 09:09:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.915 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[e60a8cef-870c-4087-bcca-202fd75fabeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.929 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3b30203b-c53d-4bb7-a268-5e6379d74b9f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:04 np0005466030 systemd[1]: Started Virtual Machine qemu-91-instance-000000c5.
Oct  2 09:09:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.960 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a3898d70-adac-4504-9530-d1e227860c8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.964 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6561bce1-e893-40d1-8559-6e1ad210f48f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:04 np0005466030 NetworkManager[44960]: <info>  [1759410544.9665] manager: (tapd9001b9c-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/372)
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.999 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[1b3ed019-f1ae-4561-b526-07989629c77c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.002 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ef0b0cf7-6c95-4316-b540-5ad3169bcc5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:05 np0005466030 NetworkManager[44960]: <info>  [1759410545.0216] device (tapd9001b9c-b0): carrier: link connected
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.026 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c2370c64-e893-4efc-9664-4048230ec5a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.043 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7fad2d11-cdf0-48ee-88a4-0ee848c2d0d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840857, 'reachable_time': 31891, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304677, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.059 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[33996cda-39d1-4388-9cb7-da4b9382e44e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0d:c08b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 840857, 'tstamp': 840857}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304678, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.077 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d740edd5-ef9b-4c31-9309-04981fd6a44a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840857, 'reachable_time': 31891, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304679, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.103 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ef9665e2-976d-417a-8263-ea88ed190527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.149 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd4f881-36bf-40a5-adbc-f03905cdc177]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.150 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.151 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.151 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9001b9c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:05 np0005466030 NetworkManager[44960]: <info>  [1759410545.1539] manager: (tapd9001b9c-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Oct  2 09:09:05 np0005466030 kernel: tapd9001b9c-b0: entered promiscuous mode
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.156 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9001b9c-b0, col_values=(('external_ids', {'iface-id': 'aa788301-8c47-4421-b693-3b37cb064ae2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:05 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:05Z|00804|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.173 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d9001b9c-bca6-4085-a954-1414269e31bc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d9001b9c-bca6-4085-a954-1414269e31bc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.174 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[63743e40-1dd1-4e93-8ff3-a70fddffac09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.175 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-d9001b9c-bca6-4085-a954-1414269e31bc
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/d9001b9c-bca6-4085-a954-1414269e31bc.pid.haproxy
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID d9001b9c-bca6-4085-a954-1414269e31bc
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:09:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.175 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'env', 'PROCESS_TAG=haproxy-d9001b9c-bca6-4085-a954-1414269e31bc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d9001b9c-bca6-4085-a954-1414269e31bc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.186 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.187 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquired lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.187 2 DEBUG nova.network.neutron [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.279 2 DEBUG nova.compute.manager [req-8d524778-9e6f-4a1b-ae17-3db1c6fdd0f9 req-181666de-d980-45c5-b2f7-f7daba72028c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received event network-vif-plugged-15cb070c-0f52-464f-a2b4-8597c15212e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.279 2 DEBUG oslo_concurrency.lockutils [req-8d524778-9e6f-4a1b-ae17-3db1c6fdd0f9 req-181666de-d980-45c5-b2f7-f7daba72028c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.280 2 DEBUG oslo_concurrency.lockutils [req-8d524778-9e6f-4a1b-ae17-3db1c6fdd0f9 req-181666de-d980-45c5-b2f7-f7daba72028c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.280 2 DEBUG oslo_concurrency.lockutils [req-8d524778-9e6f-4a1b-ae17-3db1c6fdd0f9 req-181666de-d980-45c5-b2f7-f7daba72028c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.280 2 DEBUG nova.compute.manager [req-8d524778-9e6f-4a1b-ae17-3db1c6fdd0f9 req-181666de-d980-45c5-b2f7-f7daba72028c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Processing event network-vif-plugged-15cb070c-0f52-464f-a2b4-8597c15212e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:09:05 np0005466030 podman[304748]: 2025-10-02 13:09:05.593762112 +0000 UTC m=+0.097283006 container create c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 09:09:05 np0005466030 podman[304748]: 2025-10-02 13:09:05.518692194 +0000 UTC m=+0.022213108 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.677 2 DEBUG nova.compute.manager [req-021527a2-04a4-4afd-b626-56f58bfa2ca0 req-510901cc-4f13-4ced-86c7-439bd9d4a3b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.678 2 DEBUG nova.compute.manager [req-021527a2-04a4-4afd-b626-56f58bfa2ca0 req-510901cc-4f13-4ced-86c7-439bd9d4a3b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing instance network info cache due to event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.678 2 DEBUG oslo_concurrency.lockutils [req-021527a2-04a4-4afd-b626-56f58bfa2ca0 req-510901cc-4f13-4ced-86c7-439bd9d4a3b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:09:05 np0005466030 systemd[1]: Started libpod-conmon-c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003.scope.
Oct  2 09:09:05 np0005466030 systemd[1]: Started libcrun container.
Oct  2 09:09:05 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410f773b118732b26b6feca850b0977a2c84aeb1020cb4d6bcef409aa2a24707/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:09:05 np0005466030 podman[304748]: 2025-10-02 13:09:05.742937926 +0000 UTC m=+0.246458830 container init c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:09:05 np0005466030 podman[304748]: 2025-10-02 13:09:05.748638275 +0000 UTC m=+0.252159169 container start c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:09:05 np0005466030 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[304769]: [NOTICE]   (304773) : New worker (304775) forked
Oct  2 09:09:05 np0005466030 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[304769]: [NOTICE]   (304773) : Loading success.
Oct  2 09:09:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:05.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.980 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410545.9805005, 658821a7-5b97-43ad-8fe2-46e5303cf56c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.982 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] VM Started (Lifecycle Event)#033[00m
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.984 2 DEBUG nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.989 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.992 2 INFO nova.virt.libvirt.driver [-] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Instance spawned successfully.#033[00m
Oct  2 09:09:05 np0005466030 nova_compute[230518]: 2025-10-02 13:09:05.992 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.027 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.031 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.035 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.035 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.036 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.036 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.036 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.037 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:09:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:06.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.070 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.071 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410545.98065, 658821a7-5b97-43ad-8fe2-46e5303cf56c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.072 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.104 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.110 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410545.9878404, 658821a7-5b97-43ad-8fe2-46e5303cf56c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.110 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.115 2 INFO nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Took 8.39 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.115 2 DEBUG nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.127 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.131 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.159 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.178 2 INFO nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Took 9.63 seconds to build instance.#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.202 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.293 2 DEBUG nova.network.neutron [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.320 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Releasing lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.321 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.322 2 INFO nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Creating image(s)#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.356 2 DEBUG nova.storage.rbd_utils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.360 2 DEBUG nova.objects.instance [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.362 2 DEBUG oslo_concurrency.lockutils [req-021527a2-04a4-4afd-b626-56f58bfa2ca0 req-510901cc-4f13-4ced-86c7-439bd9d4a3b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.362 2 DEBUG nova.network.neutron [req-021527a2-04a4-4afd-b626-56f58bfa2ca0 req-510901cc-4f13-4ced-86c7-439bd9d4a3b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.404 2 DEBUG nova.storage.rbd_utils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.431 2 DEBUG nova.storage.rbd_utils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.435 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "866094531e2d8a23f188ebf1ca88baa9a196add2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.436 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "866094531e2d8a23f188ebf1ca88baa9a196add2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.722 2 DEBUG nova.virt.libvirt.imagebackend [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/c7b69b23-2de1-4a42-80f6-94ba898e82eb/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/c7b69b23-2de1-4a42-80f6-94ba898e82eb/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.771 2 DEBUG nova.virt.libvirt.imagebackend [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Selected location: {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/c7b69b23-2de1-4a42-80f6-94ba898e82eb/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.771 2 DEBUG nova.storage.rbd_utils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] cloning images/c7b69b23-2de1-4a42-80f6-94ba898e82eb@snap to None/b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  2 09:09:06 np0005466030 nova_compute[230518]: 2025-10-02 13:09:06.905 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "866094531e2d8a23f188ebf1ca88baa9a196add2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.032 2 DEBUG nova.objects.instance [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'migration_context' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.092 2 DEBUG nova.storage.rbd_utils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] flattening vms/b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.393 2 DEBUG nova.compute.manager [req-f92b604f-fd0f-4def-a495-3e9b0625c67d req-da64316c-a957-4f3e-8b3a-844a1890e4d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received event network-vif-plugged-15cb070c-0f52-464f-a2b4-8597c15212e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.394 2 DEBUG oslo_concurrency.lockutils [req-f92b604f-fd0f-4def-a495-3e9b0625c67d req-da64316c-a957-4f3e-8b3a-844a1890e4d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.394 2 DEBUG oslo_concurrency.lockutils [req-f92b604f-fd0f-4def-a495-3e9b0625c67d req-da64316c-a957-4f3e-8b3a-844a1890e4d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.394 2 DEBUG oslo_concurrency.lockutils [req-f92b604f-fd0f-4def-a495-3e9b0625c67d req-da64316c-a957-4f3e-8b3a-844a1890e4d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.394 2 DEBUG nova.compute.manager [req-f92b604f-fd0f-4def-a495-3e9b0625c67d req-da64316c-a957-4f3e-8b3a-844a1890e4d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] No waiting events found dispatching network-vif-plugged-15cb070c-0f52-464f-a2b4-8597c15212e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.394 2 WARNING nova.compute.manager [req-f92b604f-fd0f-4def-a495-3e9b0625c67d req-da64316c-a957-4f3e-8b3a-844a1890e4d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received unexpected event network-vif-plugged-15cb070c-0f52-464f-a2b4-8597c15212e9 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.629 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Image rbd:vms/b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.629 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.629 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Ensure instance console log exists: /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.630 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.630 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.630 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.632 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Start _get_guest_xml network_info=[{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T13:08:37Z,direct_url=<?>,disk_format='raw',id=c7b69b23-2de1-4a42-80f6-94ba898e82eb,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1341299623-shelved',owner='954946ff6b204fba90f767ec67210620',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T13:08:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.637 2 WARNING nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.642 2 DEBUG nova.virt.libvirt.host [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.642 2 DEBUG nova.virt.libvirt.host [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.645 2 DEBUG nova.virt.libvirt.host [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.645 2 DEBUG nova.virt.libvirt.host [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.646 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.647 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T13:08:37Z,direct_url=<?>,disk_format='raw',id=c7b69b23-2de1-4a42-80f6-94ba898e82eb,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1341299623-shelved',owner='954946ff6b204fba90f767ec67210620',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T13:08:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.647 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.647 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.648 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.648 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.648 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.648 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.648 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.649 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.649 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.649 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.649 2 DEBUG nova.objects.instance [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.664 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:07.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.957 2 DEBUG nova.network.neutron [req-021527a2-04a4-4afd-b626-56f58bfa2ca0 req-510901cc-4f13-4ced-86c7-439bd9d4a3b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updated VIF entry in instance network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.957 2 DEBUG nova.network.neutron [req-021527a2-04a4-4afd-b626-56f58bfa2ca0 req-510901cc-4f13-4ced-86c7-439bd9d4a3b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:09:07 np0005466030 nova_compute[230518]: 2025-10-02 13:09:07.979 2 DEBUG oslo_concurrency.lockutils [req-021527a2-04a4-4afd-b626-56f58bfa2ca0 req-510901cc-4f13-4ced-86c7-439bd9d4a3b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:09:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:08.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:09:08 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2569559011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.111 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.140 2 DEBUG nova.storage.rbd_utils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.145 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:09:08 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3346000006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.570 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.571 2 DEBUG nova.virt.libvirt.vif [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1341299623',display_name='tempest-TestShelveInstance-server-1341299623',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1341299623',id=193,image_ref='c7b69b23-2de1-4a42-80f6-94ba898e82eb',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-573161256',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:08:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='954946ff6b204fba90f767ec67210620',ramdisk_id='',reservation_id='r-qh5aopcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-228669170',owner_user_name='tempest-TestShelveInstance-228669170-project-member',shelved_at='2025-10-02T13:08:53.336076',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='c7b69b23-2de1-4a42-80f6-94ba898e82eb'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:09:03Z,user_data=None,user_id='62f4c4b5cc194bd59ca9cc9f1da78a79',uuid=b4640b6e-b1e0-4168-9970-c5d05a0e1621,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.572 2 DEBUG nova.network.os_vif_util [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converting VIF {"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.573 2 DEBUG nova.network.os_vif_util [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.574 2 DEBUG nova.objects.instance [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.587 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:09:08 np0005466030 nova_compute[230518]:  <uuid>b4640b6e-b1e0-4168-9970-c5d05a0e1621</uuid>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:  <name>instance-000000c1</name>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <nova:name>tempest-TestShelveInstance-server-1341299623</nova:name>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:09:07</nova:creationTime>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:09:08 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:        <nova:user uuid="62f4c4b5cc194bd59ca9cc9f1da78a79">tempest-TestShelveInstance-228669170-project-member</nova:user>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:        <nova:project uuid="954946ff6b204fba90f767ec67210620">tempest-TestShelveInstance-228669170</nova:project>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="c7b69b23-2de1-4a42-80f6-94ba898e82eb"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:        <nova:port uuid="3994280c-c2c8-4fa7-bc48-f7b048d43015">
Oct  2 09:09:08 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <entry name="serial">b4640b6e-b1e0-4168-9970-c5d05a0e1621</entry>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <entry name="uuid">b4640b6e-b1e0-4168-9970-c5d05a0e1621</entry>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk">
Oct  2 09:09:08 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:09:08 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config">
Oct  2 09:09:08 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:09:08 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:20:d8:5a"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <target dev="tap3994280c-c2"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/console.log" append="off"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <input type="keyboard" bus="usb"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:09:08 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:09:08 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:09:08 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:09:08 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.588 2 DEBUG nova.compute.manager [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Preparing to wait for external event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.588 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.589 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.589 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.589 2 DEBUG nova.virt.libvirt.vif [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1341299623',display_name='tempest-TestShelveInstance-server-1341299623',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1341299623',id=193,image_ref='c7b69b23-2de1-4a42-80f6-94ba898e82eb',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-573161256',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:08:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='954946ff6b204fba90f767ec67210620',ramdisk_id='',reservation_id='r-qh5aopcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-228669170',owner_user_name='tempest-TestShelveInstance-228669170-project-member',shelved_at='2025-10-02T13:08:53.336076',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='c7b69b23-2de1-4a42-80f6-94ba898e82eb'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:09:03Z,user_data=None,user_id='62f4c4b5cc194bd59ca9cc9f1da78a79',uuid=b4640b6e-b1e0-4168-9970-c5d05a0e1621,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.590 2 DEBUG nova.network.os_vif_util [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converting VIF {"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.590 2 DEBUG nova.network.os_vif_util [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.591 2 DEBUG os_vif [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.592 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.592 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.595 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3994280c-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.595 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3994280c-c2, col_values=(('external_ids', {'iface-id': '3994280c-c2c8-4fa7-bc48-f7b048d43015', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:d8:5a', 'vm-uuid': 'b4640b6e-b1e0-4168-9970-c5d05a0e1621'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:08 np0005466030 NetworkManager[44960]: <info>  [1759410548.5973] manager: (tap3994280c-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.603 2 INFO os_vif [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2')#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.659 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.660 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.660 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No VIF found with MAC fa:16:3e:20:d8:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.660 2 INFO nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Using config drive#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.686 2 DEBUG nova.storage.rbd_utils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.709 2 DEBUG nova.objects.instance [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'ec2_ids' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:08 np0005466030 nova_compute[230518]: 2025-10-02 13:09:08.789 2 DEBUG nova.objects.instance [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'keypairs' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:09.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:09 np0005466030 nova_compute[230518]: 2025-10-02 13:09:09.884 2 INFO nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Creating config drive at /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config#033[00m
Oct  2 09:09:09 np0005466030 nova_compute[230518]: 2025-10-02 13:09:09.889 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1_o151y3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:10 np0005466030 nova_compute[230518]: 2025-10-02 13:09:10.025 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1_o151y3" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:10 np0005466030 nova_compute[230518]: 2025-10-02 13:09:10.058 2 DEBUG nova.storage.rbd_utils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:09:10 np0005466030 nova_compute[230518]: 2025-10-02 13:09:10.061 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:10.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:10 np0005466030 nova_compute[230518]: 2025-10-02 13:09:10.255 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:10 np0005466030 nova_compute[230518]: 2025-10-02 13:09:10.256 2 INFO nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Deleting local config drive /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config because it was imported into RBD.#033[00m
Oct  2 09:09:10 np0005466030 nova_compute[230518]: 2025-10-02 13:09:10.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:10 np0005466030 kernel: tap3994280c-c2: entered promiscuous mode
Oct  2 09:09:10 np0005466030 NetworkManager[44960]: <info>  [1759410550.3030] manager: (tap3994280c-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/375)
Oct  2 09:09:10 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:10Z|00805|binding|INFO|Claiming lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 for this chassis.
Oct  2 09:09:10 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:10Z|00806|binding|INFO|3994280c-c2c8-4fa7-bc48-f7b048d43015: Claiming fa:16:3e:20:d8:5a 10.100.0.6
Oct  2 09:09:10 np0005466030 nova_compute[230518]: 2025-10-02 13:09:10.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.314 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:d8:5a 10.100.0.6'], port_security=['fa:16:3e:20:d8:5a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b4640b6e-b1e0-4168-9970-c5d05a0e1621', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4223a8cc-f72a-428d-accb-3f4210096878', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '954946ff6b204fba90f767ec67210620', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'b142f2e0-15cd-46cd-bd2d-c2af8d42e97a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.194'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8308a587-4cdc-4eb3-9fc6-aab7267ec23f, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=3994280c-c2c8-4fa7-bc48-f7b048d43015) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.317 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 3994280c-c2c8-4fa7-bc48-f7b048d43015 in datapath 4223a8cc-f72a-428d-accb-3f4210096878 bound to our chassis#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.319 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4223a8cc-f72a-428d-accb-3f4210096878#033[00m
Oct  2 09:09:10 np0005466030 nova_compute[230518]: 2025-10-02 13:09:10.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:10 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:10Z|00807|binding|INFO|Setting lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 ovn-installed in OVS
Oct  2 09:09:10 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:10Z|00808|binding|INFO|Setting lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 up in Southbound
Oct  2 09:09:10 np0005466030 nova_compute[230518]: 2025-10-02 13:09:10.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:10 np0005466030 systemd-udevd[305131]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.339 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[87761508-3dff-42e8-b17f-f5d32bf55bdf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.339 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4223a8cc-f1 in ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:09:10 np0005466030 NetworkManager[44960]: <info>  [1759410550.3457] device (tap3994280c-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:09:10 np0005466030 NetworkManager[44960]: <info>  [1759410550.3469] device (tap3994280c-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.348 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4223a8cc-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.348 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5d5559a6-6c4e-4647-ac5c-06f8b833ab57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.350 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4709ff67-1d8b-48aa-b258-d0978b438028]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:10 np0005466030 systemd-machined[188247]: New machine qemu-92-instance-000000c1.
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.365 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[3759fbef-2999-4f6d-bcb6-3a764429ed44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:10 np0005466030 systemd[1]: Started Virtual Machine qemu-92-instance-000000c1.
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.391 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[83023efb-6929-4bac-9894-372a2d4b0a8e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.431 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5d7192e9-8474-48f3-85fe-5e9211b7bf0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:10 np0005466030 systemd-udevd[305136]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:09:10 np0005466030 NetworkManager[44960]: <info>  [1759410550.4444] manager: (tap4223a8cc-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/376)
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.443 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c8458220-76c5-4170-9062-2b00b04dd98f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.486 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[efda7c95-9cfd-45c4-b5ac-164cab797f6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.490 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[34f49a6e-4ac7-4765-a472-ed9103eeaaf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:10 np0005466030 NetworkManager[44960]: <info>  [1759410550.5136] device (tap4223a8cc-f0): carrier: link connected
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.519 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f8081b45-d084-4cee-942c-2724608257d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.538 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e58b64e3-443f-48ca-859f-d17651cb1479]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4223a8cc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:f5:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841407, 'reachable_time': 26594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305166, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.557 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[11110ff4-8e59-430f-b437-ae699eb2d9a9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe74:f568'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 841407, 'tstamp': 841407}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305167, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.575 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5bdea8f6-7ae1-427b-9b5d-e64691ee6121]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4223a8cc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:f5:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841407, 'reachable_time': 26594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305168, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.611 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[435dba41-5cde-4706-ba5f-88ebe12c6087]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.673 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[81e7a7fd-0ddd-464a-9a43-862b8c2809c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.674 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4223a8cc-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.675 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.675 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4223a8cc-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:10 np0005466030 NetworkManager[44960]: <info>  [1759410550.6781] manager: (tap4223a8cc-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Oct  2 09:09:10 np0005466030 kernel: tap4223a8cc-f0: entered promiscuous mode
Oct  2 09:09:10 np0005466030 nova_compute[230518]: 2025-10-02 13:09:10.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.681 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4223a8cc-f0, col_values=(('external_ids', {'iface-id': '97eaefd1-ed23-4787-9782-741cd2cf7e3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:10 np0005466030 nova_compute[230518]: 2025-10-02 13:09:10.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:10 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:10Z|00809|binding|INFO|Releasing lport 97eaefd1-ed23-4787-9782-741cd2cf7e3b from this chassis (sb_readonly=0)
Oct  2 09:09:10 np0005466030 nova_compute[230518]: 2025-10-02 13:09:10.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.711 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4223a8cc-f72a-428d-accb-3f4210096878.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4223a8cc-f72a-428d-accb-3f4210096878.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.712 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5cfa9969-ea0d-4af1-b5f6-0a7bcb86e740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.713 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-4223a8cc-f72a-428d-accb-3f4210096878
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/4223a8cc-f72a-428d-accb-3f4210096878.pid.haproxy
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 4223a8cc-f72a-428d-accb-3f4210096878
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:09:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.715 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'env', 'PROCESS_TAG=haproxy-4223a8cc-f72a-428d-accb-3f4210096878', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4223a8cc-f72a-428d-accb-3f4210096878.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:09:11 np0005466030 podman[305200]: 2025-10-02 13:09:11.078834229 +0000 UTC m=+0.059804288 container create 85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:09:11 np0005466030 systemd[1]: Started libpod-conmon-85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd.scope.
Oct  2 09:09:11 np0005466030 podman[305200]: 2025-10-02 13:09:11.039873666 +0000 UTC m=+0.020843725 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:09:11 np0005466030 systemd[1]: Started libcrun container.
Oct  2 09:09:11 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757739b03a8fcb122583c99c39ed2c8eb5489f21e1732e8148a0278838977fc9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:09:11 np0005466030 podman[305200]: 2025-10-02 13:09:11.279239962 +0000 UTC m=+0.260210051 container init 85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:09:11 np0005466030 podman[305200]: 2025-10-02 13:09:11.286394717 +0000 UTC m=+0.267364776 container start 85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 09:09:11 np0005466030 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[305215]: [NOTICE]   (305244) : New worker (305255) forked
Oct  2 09:09:11 np0005466030 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[305215]: [NOTICE]   (305244) : Loading success.
Oct  2 09:09:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:11.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:11 np0005466030 nova_compute[230518]: 2025-10-02 13:09:11.869 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410551.8691044, b4640b6e-b1e0-4168-9970-c5d05a0e1621 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:09:11 np0005466030 nova_compute[230518]: 2025-10-02 13:09:11.870 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] VM Started (Lifecycle Event)#033[00m
Oct  2 09:09:11 np0005466030 nova_compute[230518]: 2025-10-02 13:09:11.902 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:09:11 np0005466030 nova_compute[230518]: 2025-10-02 13:09:11.907 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410551.870403, b4640b6e-b1e0-4168-9970-c5d05a0e1621 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:09:11 np0005466030 nova_compute[230518]: 2025-10-02 13:09:11.907 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:09:11 np0005466030 nova_compute[230518]: 2025-10-02 13:09:11.930 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:09:11 np0005466030 nova_compute[230518]: 2025-10-02 13:09:11.934 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:09:11 np0005466030 nova_compute[230518]: 2025-10-02 13:09:11.955 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:09:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:12.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:13 np0005466030 nova_compute[230518]: 2025-10-02 13:09:13.371 2 DEBUG nova.compute.manager [req-28e8928b-bd37-4d0c-95f7-2accf9982c21 req-80eea822-b3d4-4ecc-b407-973c55e6194c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:09:13 np0005466030 nova_compute[230518]: 2025-10-02 13:09:13.372 2 DEBUG oslo_concurrency.lockutils [req-28e8928b-bd37-4d0c-95f7-2accf9982c21 req-80eea822-b3d4-4ecc-b407-973c55e6194c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:13 np0005466030 nova_compute[230518]: 2025-10-02 13:09:13.372 2 DEBUG oslo_concurrency.lockutils [req-28e8928b-bd37-4d0c-95f7-2accf9982c21 req-80eea822-b3d4-4ecc-b407-973c55e6194c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:13 np0005466030 nova_compute[230518]: 2025-10-02 13:09:13.372 2 DEBUG oslo_concurrency.lockutils [req-28e8928b-bd37-4d0c-95f7-2accf9982c21 req-80eea822-b3d4-4ecc-b407-973c55e6194c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:13 np0005466030 nova_compute[230518]: 2025-10-02 13:09:13.373 2 DEBUG nova.compute.manager [req-28e8928b-bd37-4d0c-95f7-2accf9982c21 req-80eea822-b3d4-4ecc-b407-973c55e6194c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Processing event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:09:13 np0005466030 nova_compute[230518]: 2025-10-02 13:09:13.373 2 DEBUG nova.compute.manager [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:09:13 np0005466030 nova_compute[230518]: 2025-10-02 13:09:13.376 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410553.3766344, b4640b6e-b1e0-4168-9970-c5d05a0e1621 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:09:13 np0005466030 nova_compute[230518]: 2025-10-02 13:09:13.377 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:09:13 np0005466030 nova_compute[230518]: 2025-10-02 13:09:13.378 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:09:13 np0005466030 nova_compute[230518]: 2025-10-02 13:09:13.382 2 INFO nova.virt.libvirt.driver [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance spawned successfully.#033[00m
Oct  2 09:09:13 np0005466030 nova_compute[230518]: 2025-10-02 13:09:13.406 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:09:13 np0005466030 nova_compute[230518]: 2025-10-02 13:09:13.410 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:09:13 np0005466030 nova_compute[230518]: 2025-10-02 13:09:13.437 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:09:13 np0005466030 nova_compute[230518]: 2025-10-02 13:09:13.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:13.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:14.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e364 e364: 3 total, 3 up, 3 in
Oct  2 09:09:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:15 np0005466030 nova_compute[230518]: 2025-10-02 13:09:15.243 2 DEBUG nova.compute.manager [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:09:15 np0005466030 nova_compute[230518]: 2025-10-02 13:09:15.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:15 np0005466030 nova_compute[230518]: 2025-10-02 13:09:15.307 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 12.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:15 np0005466030 nova_compute[230518]: 2025-10-02 13:09:15.536 2 DEBUG nova.compute.manager [req-70d3e3ce-2d65-4db3-b828-f8abb4821c6e req-9243aba3-fd3a-4ca5-9182-96070134511c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:09:15 np0005466030 nova_compute[230518]: 2025-10-02 13:09:15.537 2 DEBUG oslo_concurrency.lockutils [req-70d3e3ce-2d65-4db3-b828-f8abb4821c6e req-9243aba3-fd3a-4ca5-9182-96070134511c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:15 np0005466030 nova_compute[230518]: 2025-10-02 13:09:15.538 2 DEBUG oslo_concurrency.lockutils [req-70d3e3ce-2d65-4db3-b828-f8abb4821c6e req-9243aba3-fd3a-4ca5-9182-96070134511c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:15 np0005466030 nova_compute[230518]: 2025-10-02 13:09:15.538 2 DEBUG oslo_concurrency.lockutils [req-70d3e3ce-2d65-4db3-b828-f8abb4821c6e req-9243aba3-fd3a-4ca5-9182-96070134511c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:15 np0005466030 nova_compute[230518]: 2025-10-02 13:09:15.539 2 DEBUG nova.compute.manager [req-70d3e3ce-2d65-4db3-b828-f8abb4821c6e req-9243aba3-fd3a-4ca5-9182-96070134511c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] No waiting events found dispatching network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:09:15 np0005466030 nova_compute[230518]: 2025-10-02 13:09:15.539 2 WARNING nova.compute.manager [req-70d3e3ce-2d65-4db3-b828-f8abb4821c6e req-9243aba3-fd3a-4ca5-9182-96070134511c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received unexpected event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:09:15 np0005466030 nova_compute[230518]: 2025-10-02 13:09:15.735 2 DEBUG nova.compute.manager [req-0acc5bf2-c1fb-4782-94e4-39bd514e93ba req-b9e08f89-c1ce-49f0-ae03-0951f0afbeaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received event network-changed-15cb070c-0f52-464f-a2b4-8597c15212e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:09:15 np0005466030 nova_compute[230518]: 2025-10-02 13:09:15.735 2 DEBUG nova.compute.manager [req-0acc5bf2-c1fb-4782-94e4-39bd514e93ba req-b9e08f89-c1ce-49f0-ae03-0951f0afbeaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Refreshing instance network info cache due to event network-changed-15cb070c-0f52-464f-a2b4-8597c15212e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:09:15 np0005466030 nova_compute[230518]: 2025-10-02 13:09:15.736 2 DEBUG oslo_concurrency.lockutils [req-0acc5bf2-c1fb-4782-94e4-39bd514e93ba req-b9e08f89-c1ce-49f0-ae03-0951f0afbeaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:09:15 np0005466030 nova_compute[230518]: 2025-10-02 13:09:15.736 2 DEBUG oslo_concurrency.lockutils [req-0acc5bf2-c1fb-4782-94e4-39bd514e93ba req-b9e08f89-c1ce-49f0-ae03-0951f0afbeaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:09:15 np0005466030 nova_compute[230518]: 2025-10-02 13:09:15.736 2 DEBUG nova.network.neutron [req-0acc5bf2-c1fb-4782-94e4-39bd514e93ba req-b9e08f89-c1ce-49f0-ae03-0951f0afbeaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Refreshing network info cache for port 15cb070c-0f52-464f-a2b4-8597c15212e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:09:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:15.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:16.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:16 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:16Z|00810|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct  2 09:09:16 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:16Z|00811|binding|INFO|Releasing lport 97eaefd1-ed23-4787-9782-741cd2cf7e3b from this chassis (sb_readonly=0)
Oct  2 09:09:16 np0005466030 nova_compute[230518]: 2025-10-02 13:09:16.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:17 np0005466030 nova_compute[230518]: 2025-10-02 13:09:17.304 2 DEBUG nova.network.neutron [req-0acc5bf2-c1fb-4782-94e4-39bd514e93ba req-b9e08f89-c1ce-49f0-ae03-0951f0afbeaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updated VIF entry in instance network info cache for port 15cb070c-0f52-464f-a2b4-8597c15212e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:09:17 np0005466030 nova_compute[230518]: 2025-10-02 13:09:17.305 2 DEBUG nova.network.neutron [req-0acc5bf2-c1fb-4782-94e4-39bd514e93ba req-b9e08f89-c1ce-49f0-ae03-0951f0afbeaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updating instance_info_cache with network_info: [{"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:09:17 np0005466030 nova_compute[230518]: 2025-10-02 13:09:17.344 2 DEBUG oslo_concurrency.lockutils [req-0acc5bf2-c1fb-4782-94e4-39bd514e93ba req-b9e08f89-c1ce-49f0-ae03-0951f0afbeaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:09:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:17.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:18.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:09:18 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3267789736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:09:18 np0005466030 nova_compute[230518]: 2025-10-02 13:09:18.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:19.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:19 np0005466030 nova_compute[230518]: 2025-10-02 13:09:19.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:20.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:20 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:20Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:47:21 10.100.0.3
Oct  2 09:09:20 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:20Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:47:21 10.100.0.3
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.191 2 DEBUG oslo_concurrency.lockutils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.191 2 DEBUG oslo_concurrency.lockutils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.209 2 DEBUG nova.objects.instance [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'flavor' on Instance uuid 658821a7-5b97-43ad-8fe2-46e5303cf56c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.270 2 DEBUG oslo_concurrency.lockutils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 e365: 3 total, 3 up, 3 in
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.453 2 DEBUG oslo_concurrency.lockutils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.453 2 DEBUG oslo_concurrency.lockutils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.454 2 INFO nova.compute.manager [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Attaching volume 2341c515-f8fa-4cdf-87e9-1faa534d8307 to /dev/vdb#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.621 2 DEBUG os_brick.utils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.622 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.631 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.632 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[532d4b15-43a4-4993-85b4-3d8640363a57]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.633 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.639 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.639 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[91c6fb38-0679-4170-ab13-d0fbd21382f1]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.640 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.647 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.648 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[f14c8433-3ab7-45be-8c0e-7c3e2dbf8cf7]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.649 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[0be3e92d-c162-4107-a11c-bed66dd9a5ac]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.650 2 DEBUG oslo_concurrency.processutils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.681 2 DEBUG oslo_concurrency.processutils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.684 2 DEBUG os_brick.initiator.connectors.lightos [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.684 2 DEBUG os_brick.initiator.connectors.lightos [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.684 2 DEBUG os_brick.initiator.connectors.lightos [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.685 2 DEBUG os_brick.utils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] <== get_connector_properties: return (63ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 09:09:20 np0005466030 nova_compute[230518]: 2025-10-02 13:09:20.685 2 DEBUG nova.virt.block_device [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updating existing volume attachment record: f6d5886f-7bf8-455c-85a2-e2c058fd585c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 09:09:21 np0005466030 nova_compute[230518]: 2025-10-02 13:09:21.539 2 DEBUG nova.objects.instance [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'flavor' on Instance uuid 658821a7-5b97-43ad-8fe2-46e5303cf56c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:21 np0005466030 nova_compute[230518]: 2025-10-02 13:09:21.562 2 DEBUG nova.virt.libvirt.driver [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Attempting to attach volume 2341c515-f8fa-4cdf-87e9-1faa534d8307 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 09:09:21 np0005466030 nova_compute[230518]: 2025-10-02 13:09:21.564 2 DEBUG nova.virt.libvirt.guest [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 09:09:21 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:09:21 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-2341c515-f8fa-4cdf-87e9-1faa534d8307">
Oct  2 09:09:21 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:09:21 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:09:21 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:09:21 np0005466030 nova_compute[230518]:  </source>
Oct  2 09:09:21 np0005466030 nova_compute[230518]:  <auth username="openstack">
Oct  2 09:09:21 np0005466030 nova_compute[230518]:    <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:09:21 np0005466030 nova_compute[230518]:  </auth>
Oct  2 09:09:21 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:09:21 np0005466030 nova_compute[230518]:  <serial>2341c515-f8fa-4cdf-87e9-1faa534d8307</serial>
Oct  2 09:09:21 np0005466030 nova_compute[230518]:  <shareable/>
Oct  2 09:09:21 np0005466030 nova_compute[230518]: </disk>
Oct  2 09:09:21 np0005466030 nova_compute[230518]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 09:09:21 np0005466030 nova_compute[230518]: 2025-10-02 13:09:21.681 2 DEBUG nova.virt.libvirt.driver [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:09:21 np0005466030 nova_compute[230518]: 2025-10-02 13:09:21.682 2 DEBUG nova.virt.libvirt.driver [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:09:21 np0005466030 nova_compute[230518]: 2025-10-02 13:09:21.682 2 DEBUG nova.virt.libvirt.driver [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:09:21 np0005466030 nova_compute[230518]: 2025-10-02 13:09:21.682 2 DEBUG nova.virt.libvirt.driver [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No VIF found with MAC fa:16:3e:e2:47:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:09:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:21.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:21 np0005466030 nova_compute[230518]: 2025-10-02 13:09:21.880 2 DEBUG oslo_concurrency.lockutils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:22.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:23 np0005466030 nova_compute[230518]: 2025-10-02 13:09:23.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:23 np0005466030 podman[305300]: 2025-10-02 13:09:23.794330921 +0000 UTC m=+0.049682171 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:09:23 np0005466030 podman[305299]: 2025-10-02 13:09:23.827002237 +0000 UTC m=+0.082353557 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:09:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:23.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:24.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.081 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.082 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.082 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:09:24 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1465649545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.624 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.693 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.694 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.698 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.698 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.699 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.852 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.853 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3811MB free_disk=20.851551055908203GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.853 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.853 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.947 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 658821a7-5b97-43ad-8fe2-46e5303cf56c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.948 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance b4640b6e-b1e0-4168-9970-c5d05a0e1621 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.948 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:09:24 np0005466030 nova_compute[230518]: 2025-10-02 13:09:24.948 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.024 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.347 2 DEBUG oslo_concurrency.lockutils [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.348 2 DEBUG oslo_concurrency.lockutils [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.359 2 INFO nova.compute.manager [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Detaching volume 2341c515-f8fa-4cdf-87e9-1faa534d8307#033[00m
Oct  2 09:09:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:09:25 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3051948380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.449 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.454 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.466 2 INFO nova.virt.block_device [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Attempting to driver detach volume 2341c515-f8fa-4cdf-87e9-1faa534d8307 from mountpoint /dev/vdb#033[00m
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.474 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.479 2 DEBUG nova.virt.libvirt.driver [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Attempting to detach device vdb from instance 658821a7-5b97-43ad-8fe2-46e5303cf56c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.480 2 DEBUG nova.virt.libvirt.guest [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:09:25 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:09:25 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-2341c515-f8fa-4cdf-87e9-1faa534d8307">
Oct  2 09:09:25 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:09:25 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:09:25 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:09:25 np0005466030 nova_compute[230518]:  </source>
Oct  2 09:09:25 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:09:25 np0005466030 nova_compute[230518]:  <serial>2341c515-f8fa-4cdf-87e9-1faa534d8307</serial>
Oct  2 09:09:25 np0005466030 nova_compute[230518]:  <shareable/>
Oct  2 09:09:25 np0005466030 nova_compute[230518]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 09:09:25 np0005466030 nova_compute[230518]: </disk>
Oct  2 09:09:25 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.486 2 INFO nova.virt.libvirt.driver [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully detached device vdb from instance 658821a7-5b97-43ad-8fe2-46e5303cf56c from the persistent domain config.#033[00m
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.487 2 DEBUG nova.virt.libvirt.driver [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 658821a7-5b97-43ad-8fe2-46e5303cf56c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.487 2 DEBUG nova.virt.libvirt.guest [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:09:25 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:09:25 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-2341c515-f8fa-4cdf-87e9-1faa534d8307">
Oct  2 09:09:25 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:09:25 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:09:25 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:09:25 np0005466030 nova_compute[230518]:  </source>
Oct  2 09:09:25 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:09:25 np0005466030 nova_compute[230518]:  <serial>2341c515-f8fa-4cdf-87e9-1faa534d8307</serial>
Oct  2 09:09:25 np0005466030 nova_compute[230518]:  <shareable/>
Oct  2 09:09:25 np0005466030 nova_compute[230518]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 09:09:25 np0005466030 nova_compute[230518]: </disk>
Oct  2 09:09:25 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.502 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.503 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.608 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759410565.6075559, 658821a7-5b97-43ad-8fe2-46e5303cf56c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.609 2 DEBUG nova.virt.libvirt.driver [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 658821a7-5b97-43ad-8fe2-46e5303cf56c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.611 2 INFO nova.virt.libvirt.driver [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully detached device vdb from instance 658821a7-5b97-43ad-8fe2-46e5303cf56c from the live domain config.#033[00m
Oct  2 09:09:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:25.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.902 2 DEBUG nova.objects.instance [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'flavor' on Instance uuid 658821a7-5b97-43ad-8fe2-46e5303cf56c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:25 np0005466030 nova_compute[230518]: 2025-10-02 13:09:25.938 2 DEBUG oslo_concurrency.lockutils [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:25.964 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:25.965 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:25.965 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:25 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:25Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:d8:5a 10.100.0.6
Oct  2 09:09:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:26.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:27.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:28.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:28 np0005466030 nova_compute[230518]: 2025-10-02 13:09:28.502 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:28 np0005466030 nova_compute[230518]: 2025-10-02 13:09:28.503 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:28 np0005466030 nova_compute[230518]: 2025-10-02 13:09:28.503 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:09:28 np0005466030 nova_compute[230518]: 2025-10-02 13:09:28.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:29 np0005466030 nova_compute[230518]: 2025-10-02 13:09:29.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:29 np0005466030 nova_compute[230518]: 2025-10-02 13:09:29.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:29.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:30 np0005466030 nova_compute[230518]: 2025-10-02 13:09:30.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:30.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:30 np0005466030 nova_compute[230518]: 2025-10-02 13:09:30.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:31.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:32 np0005466030 nova_compute[230518]: 2025-10-02 13:09:32.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:32.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:33 np0005466030 nova_compute[230518]: 2025-10-02 13:09:33.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:33 np0005466030 nova_compute[230518]: 2025-10-02 13:09:33.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:09:33 np0005466030 nova_compute[230518]: 2025-10-02 13:09:33.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:09:33 np0005466030 nova_compute[230518]: 2025-10-02 13:09:33.226 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:09:33 np0005466030 nova_compute[230518]: 2025-10-02 13:09:33.227 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:09:33 np0005466030 nova_compute[230518]: 2025-10-02 13:09:33.227 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:09:33 np0005466030 nova_compute[230518]: 2025-10-02 13:09:33.227 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 658821a7-5b97-43ad-8fe2-46e5303cf56c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:33 np0005466030 nova_compute[230518]: 2025-10-02 13:09:33.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:33 np0005466030 nova_compute[230518]: 2025-10-02 13:09:33.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:33.701 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:09:33 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:33.704 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:09:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:33.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.079 2 DEBUG nova.compute.manager [req-f243112b-1e88-4eb8-813a-a1e99429699f req-3813594a-58ed-465f-b2fc-63c78d590c2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.079 2 DEBUG nova.compute.manager [req-f243112b-1e88-4eb8-813a-a1e99429699f req-3813594a-58ed-465f-b2fc-63c78d590c2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing instance network info cache due to event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.080 2 DEBUG oslo_concurrency.lockutils [req-f243112b-1e88-4eb8-813a-a1e99429699f req-3813594a-58ed-465f-b2fc-63c78d590c2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.080 2 DEBUG oslo_concurrency.lockutils [req-f243112b-1e88-4eb8-813a-a1e99429699f req-3813594a-58ed-465f-b2fc-63c78d590c2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.080 2 DEBUG nova.network.neutron [req-f243112b-1e88-4eb8-813a-a1e99429699f req-3813594a-58ed-465f-b2fc-63c78d590c2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:09:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:34.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.144 2 DEBUG oslo_concurrency.lockutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.145 2 DEBUG oslo_concurrency.lockutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.145 2 DEBUG oslo_concurrency.lockutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.146 2 DEBUG oslo_concurrency.lockutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.146 2 DEBUG oslo_concurrency.lockutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.147 2 INFO nova.compute.manager [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Terminating instance#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.148 2 DEBUG nova.compute.manager [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:09:34 np0005466030 kernel: tap3994280c-c2 (unregistering): left promiscuous mode
Oct  2 09:09:34 np0005466030 NetworkManager[44960]: <info>  [1759410574.2166] device (tap3994280c-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:09:34 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:34Z|00812|binding|INFO|Releasing lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 from this chassis (sb_readonly=0)
Oct  2 09:09:34 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:34Z|00813|binding|INFO|Setting lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 down in Southbound
Oct  2 09:09:34 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:34Z|00814|binding|INFO|Removing iface tap3994280c-c2 ovn-installed in OVS
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.245 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:d8:5a 10.100.0.6'], port_security=['fa:16:3e:20:d8:5a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b4640b6e-b1e0-4168-9970-c5d05a0e1621', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4223a8cc-f72a-428d-accb-3f4210096878', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '954946ff6b204fba90f767ec67210620', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'b142f2e0-15cd-46cd-bd2d-c2af8d42e97a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8308a587-4cdc-4eb3-9fc6-aab7267ec23f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=3994280c-c2c8-4fa7-bc48-f7b048d43015) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:09:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.246 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 3994280c-c2c8-4fa7-bc48-f7b048d43015 in datapath 4223a8cc-f72a-428d-accb-3f4210096878 unbound from our chassis#033[00m
Oct  2 09:09:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.248 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4223a8cc-f72a-428d-accb-3f4210096878, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:09:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.251 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f8fd35-08cc-49eb-bf01-75b12f44b0cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.255 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 namespace which is not needed anymore#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:34 np0005466030 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000c1.scope: Deactivated successfully.
Oct  2 09:09:34 np0005466030 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000c1.scope: Consumed 14.504s CPU time.
Oct  2 09:09:34 np0005466030 systemd-machined[188247]: Machine qemu-92-instance-000000c1 terminated.
Oct  2 09:09:34 np0005466030 podman[305392]: 2025-10-02 13:09:34.35437066 +0000 UTC m=+0.101239680 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:09:34 np0005466030 podman[305395]: 2025-10-02 13:09:34.356512097 +0000 UTC m=+0.070697960 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.392 2 INFO nova.virt.libvirt.driver [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance destroyed successfully.#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.393 2 DEBUG nova.objects.instance [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'resources' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.408 2 DEBUG nova.virt.libvirt.vif [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1341299623',display_name='tempest-TestShelveInstance-server-1341299623',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1341299623',id=193,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAv/W6nS9dywKRKPfI/I8VC3fYq9NYpWuLkQShDmIF9/8AaTGucbYIXWWTw6soOxBIduh/FVz2D47Pgv7ES8PO/armLbuwNtkOQG1B1V9kNoQMvYjaLsOHDz5UTKAiXYBA==',key_name='tempest-TestShelveInstance-573161256',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:09:15Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='954946ff6b204fba90f767ec67210620',ramdisk_id='',reservation_id='r-qh5aopcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-228669170',owner_user_name='tempest-TestShelveInstance-228669170-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:09:15Z,user_data=None,user_id='62f4c4b5cc194bd59ca9cc9f1da78a79',uuid=b4640b6e-b1e0-4168-9970-c5d05a0e1621,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.408 2 DEBUG nova.network.os_vif_util [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converting VIF {"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.409 2 DEBUG nova.network.os_vif_util [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.409 2 DEBUG os_vif [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:34 np0005466030 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[305215]: [NOTICE]   (305244) : haproxy version is 2.8.14-c23fe91
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.411 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3994280c-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:34 np0005466030 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[305215]: [NOTICE]   (305244) : path to executable is /usr/sbin/haproxy
Oct  2 09:09:34 np0005466030 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[305215]: [WARNING]  (305244) : Exiting Master process...
Oct  2 09:09:34 np0005466030 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[305215]: [WARNING]  (305244) : Exiting Master process...
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:34 np0005466030 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[305215]: [ALERT]    (305244) : Current worker (305255) exited with code 143 (Terminated)
Oct  2 09:09:34 np0005466030 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[305215]: [WARNING]  (305244) : All workers exited. Exiting... (0)
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.418 2 INFO os_vif [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2')#033[00m
Oct  2 09:09:34 np0005466030 systemd[1]: libpod-85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd.scope: Deactivated successfully.
Oct  2 09:09:34 np0005466030 podman[305454]: 2025-10-02 13:09:34.424943866 +0000 UTC m=+0.053254074 container died 85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:09:34 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd-userdata-shm.mount: Deactivated successfully.
Oct  2 09:09:34 np0005466030 systemd[1]: var-lib-containers-storage-overlay-757739b03a8fcb122583c99c39ed2c8eb5489f21e1732e8148a0278838977fc9-merged.mount: Deactivated successfully.
Oct  2 09:09:34 np0005466030 podman[305454]: 2025-10-02 13:09:34.4760149 +0000 UTC m=+0.104325118 container cleanup 85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:09:34 np0005466030 systemd[1]: libpod-conmon-85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd.scope: Deactivated successfully.
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.538 2 DEBUG nova.compute.manager [req-6755fa74-c44b-4fde-bd17-c41958e9cdc0 req-d1923353-4b33-4de6-b608-834a263c403b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-unplugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.538 2 DEBUG oslo_concurrency.lockutils [req-6755fa74-c44b-4fde-bd17-c41958e9cdc0 req-d1923353-4b33-4de6-b608-834a263c403b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.538 2 DEBUG oslo_concurrency.lockutils [req-6755fa74-c44b-4fde-bd17-c41958e9cdc0 req-d1923353-4b33-4de6-b608-834a263c403b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.539 2 DEBUG oslo_concurrency.lockutils [req-6755fa74-c44b-4fde-bd17-c41958e9cdc0 req-d1923353-4b33-4de6-b608-834a263c403b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.539 2 DEBUG nova.compute.manager [req-6755fa74-c44b-4fde-bd17-c41958e9cdc0 req-d1923353-4b33-4de6-b608-834a263c403b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] No waiting events found dispatching network-vif-unplugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.539 2 DEBUG nova.compute.manager [req-6755fa74-c44b-4fde-bd17-c41958e9cdc0 req-d1923353-4b33-4de6-b608-834a263c403b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-unplugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:09:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:34 np0005466030 podman[305516]: 2025-10-02 13:09:34.562374872 +0000 UTC m=+0.055743322 container remove 85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 09:09:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.569 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8a74e7ca-f835-448e-a24b-411d7f07aaff]: (4, ('Thu Oct  2 01:09:34 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 (85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd)\n85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd\nThu Oct  2 01:09:34 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 (85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd)\n85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.572 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a843af5f-3c69-4390-824f-fc3e77cd3028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.573 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4223a8cc-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:34 np0005466030 kernel: tap4223a8cc-f0: left promiscuous mode
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.600 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e96f9594-a76d-43c7-bf08-d3bcbf3e996b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.636 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ac072570-66ff-4fb0-b6d2-6f84d0521c41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.639 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[019f4f40-d33d-4113-8caf-5fdcff7aa4b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.668 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[421985fe-2255-4522-b72c-05482f743332]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841398, 'reachable_time': 20838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305531, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.673 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:09:34 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.673 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[4d2d4dd0-4d6d-4501-9600-03b478127433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:34 np0005466030 systemd[1]: run-netns-ovnmeta\x2d4223a8cc\x2df72a\x2d428d\x2daccb\x2d3f4210096878.mount: Deactivated successfully.
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.929 2 INFO nova.virt.libvirt.driver [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Deleting instance files /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621_del#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.931 2 INFO nova.virt.libvirt.driver [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Deletion of /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621_del complete#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.989 2 INFO nova.compute.manager [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.990 2 DEBUG oslo.service.loopingcall [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.990 2 DEBUG nova.compute.manager [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:09:34 np0005466030 nova_compute[230518]: 2025-10-02 13:09:34.990 2 DEBUG nova.network.neutron [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:09:35 np0005466030 nova_compute[230518]: 2025-10-02 13:09:35.282 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updating instance_info_cache with network_info: [{"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:09:35 np0005466030 nova_compute[230518]: 2025-10-02 13:09:35.308 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:09:35 np0005466030 nova_compute[230518]: 2025-10-02 13:09:35.308 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:09:35 np0005466030 nova_compute[230518]: 2025-10-02 13:09:35.309 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:09:35 np0005466030 nova_compute[230518]: 2025-10-02 13:09:35.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #142. Immutable memtables: 0.
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.347412) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 142
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575347471, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 749, "num_deletes": 258, "total_data_size": 1245937, "memory_usage": 1264608, "flush_reason": "Manual Compaction"}
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #143: started
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575354789, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 143, "file_size": 821686, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69644, "largest_seqno": 70388, "table_properties": {"data_size": 818091, "index_size": 1374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8425, "raw_average_key_size": 19, "raw_value_size": 810724, "raw_average_value_size": 1850, "num_data_blocks": 61, "num_entries": 438, "num_filter_entries": 438, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410531, "oldest_key_time": 1759410531, "file_creation_time": 1759410575, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 7422 microseconds, and 4060 cpu microseconds.
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.354841) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #143: 821686 bytes OK
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.354864) [db/memtable_list.cc:519] [default] Level-0 commit table #143 started
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.356476) [db/memtable_list.cc:722] [default] Level-0 commit table #143: memtable #1 done
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.356489) EVENT_LOG_v1 {"time_micros": 1759410575356484, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.356511) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 1241900, prev total WAL file size 1241900, number of live WAL files 2.
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000139.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.357111) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353136' seq:72057594037927935, type:22 .. '6C6F676D0032373638' seq:0, type:0; will stop at (end)
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [143(802KB)], [141(10174KB)]
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575357150, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [143], "files_L6": [141], "score": -1, "input_data_size": 11239967, "oldest_snapshot_seqno": -1}
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #144: 9081 keys, 11112653 bytes, temperature: kUnknown
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575438190, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 144, "file_size": 11112653, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11055223, "index_size": 33637, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 239918, "raw_average_key_size": 26, "raw_value_size": 10897216, "raw_average_value_size": 1200, "num_data_blocks": 1279, "num_entries": 9081, "num_filter_entries": 9081, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410575, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 144, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.438764) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11112653 bytes
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.440584) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.3 rd, 136.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.9 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(27.2) write-amplify(13.5) OK, records in: 9611, records dropped: 530 output_compression: NoCompression
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.440619) EVENT_LOG_v1 {"time_micros": 1759410575440601, "job": 90, "event": "compaction_finished", "compaction_time_micros": 81247, "compaction_time_cpu_micros": 27910, "output_level": 6, "num_output_files": 1, "total_output_size": 11112653, "num_input_records": 9611, "num_output_records": 9081, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575441146, "job": 90, "event": "table_file_deletion", "file_number": 143}
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000141.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575445101, "job": 90, "event": "table_file_deletion", "file_number": 141}
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.357045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.445179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.445186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.445189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.445191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:09:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.445193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:09:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:35.706 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:35 np0005466030 nova_compute[230518]: 2025-10-02 13:09:35.730 2 DEBUG nova.network.neutron [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:09:35 np0005466030 nova_compute[230518]: 2025-10-02 13:09:35.748 2 INFO nova.compute.manager [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Took 0.76 seconds to deallocate network for instance.#033[00m
Oct  2 09:09:35 np0005466030 nova_compute[230518]: 2025-10-02 13:09:35.821 2 DEBUG nova.compute.manager [req-958775d6-0470-4a49-ac1c-e1984520b1be req-767d7e55-b515-49e0-8fad-47e623662540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-deleted-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:09:35 np0005466030 nova_compute[230518]: 2025-10-02 13:09:35.883 2 DEBUG oslo_concurrency.lockutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:35 np0005466030 nova_compute[230518]: 2025-10-02 13:09:35.883 2 DEBUG oslo_concurrency.lockutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:35.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:35 np0005466030 nova_compute[230518]: 2025-10-02 13:09:35.959 2 DEBUG oslo_concurrency.processutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:36 np0005466030 nova_compute[230518]: 2025-10-02 13:09:36.033 2 DEBUG nova.network.neutron [req-f243112b-1e88-4eb8-813a-a1e99429699f req-3813594a-58ed-465f-b2fc-63c78d590c2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updated VIF entry in instance network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:09:36 np0005466030 nova_compute[230518]: 2025-10-02 13:09:36.034 2 DEBUG nova.network.neutron [req-f243112b-1e88-4eb8-813a-a1e99429699f req-3813594a-58ed-465f-b2fc-63c78d590c2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:09:36 np0005466030 nova_compute[230518]: 2025-10-02 13:09:36.059 2 DEBUG oslo_concurrency.lockutils [req-f243112b-1e88-4eb8-813a-a1e99429699f req-3813594a-58ed-465f-b2fc-63c78d590c2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:09:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:36.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:09:36 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1801784853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:09:36 np0005466030 nova_compute[230518]: 2025-10-02 13:09:36.453 2 DEBUG oslo_concurrency.processutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:36 np0005466030 nova_compute[230518]: 2025-10-02 13:09:36.461 2 DEBUG nova.compute.provider_tree [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:09:36 np0005466030 nova_compute[230518]: 2025-10-02 13:09:36.525 2 DEBUG nova.scheduler.client.report [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:09:36 np0005466030 nova_compute[230518]: 2025-10-02 13:09:36.666 2 DEBUG nova.compute.manager [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:09:36 np0005466030 nova_compute[230518]: 2025-10-02 13:09:36.667 2 DEBUG oslo_concurrency.lockutils [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:36 np0005466030 nova_compute[230518]: 2025-10-02 13:09:36.667 2 DEBUG oslo_concurrency.lockutils [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:36 np0005466030 nova_compute[230518]: 2025-10-02 13:09:36.668 2 DEBUG oslo_concurrency.lockutils [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:36 np0005466030 nova_compute[230518]: 2025-10-02 13:09:36.668 2 DEBUG nova.compute.manager [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] No waiting events found dispatching network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:09:36 np0005466030 nova_compute[230518]: 2025-10-02 13:09:36.669 2 WARNING nova.compute.manager [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received unexpected event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 09:09:36 np0005466030 nova_compute[230518]: 2025-10-02 13:09:36.710 2 DEBUG oslo_concurrency.lockutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:36 np0005466030 nova_compute[230518]: 2025-10-02 13:09:36.873 2 INFO nova.scheduler.client.report [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Deleted allocations for instance b4640b6e-b1e0-4168-9970-c5d05a0e1621#033[00m
Oct  2 09:09:37 np0005466030 nova_compute[230518]: 2025-10-02 13:09:37.289 2 DEBUG oslo_concurrency.lockutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:37.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:38.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:39 np0005466030 nova_compute[230518]: 2025-10-02 13:09:39.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:39.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:40.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:40 np0005466030 nova_compute[230518]: 2025-10-02 13:09:40.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:09:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:09:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:09:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:09:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:41.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:42.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:09:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:09:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:09:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:43.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:44.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:44 np0005466030 nova_compute[230518]: 2025-10-02 13:09:44.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:45 np0005466030 nova_compute[230518]: 2025-10-02 13:09:45.059 2 DEBUG nova.compute.manager [req-8c4ae982-b588-47a1-9f52-11202d77bbd8 req-f889f9fd-955a-4230-83fe-11b47c9bdf78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received event network-changed-15cb070c-0f52-464f-a2b4-8597c15212e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:09:45 np0005466030 nova_compute[230518]: 2025-10-02 13:09:45.059 2 DEBUG nova.compute.manager [req-8c4ae982-b588-47a1-9f52-11202d77bbd8 req-f889f9fd-955a-4230-83fe-11b47c9bdf78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Refreshing instance network info cache due to event network-changed-15cb070c-0f52-464f-a2b4-8597c15212e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:09:45 np0005466030 nova_compute[230518]: 2025-10-02 13:09:45.059 2 DEBUG oslo_concurrency.lockutils [req-8c4ae982-b588-47a1-9f52-11202d77bbd8 req-f889f9fd-955a-4230-83fe-11b47c9bdf78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:09:45 np0005466030 nova_compute[230518]: 2025-10-02 13:09:45.059 2 DEBUG oslo_concurrency.lockutils [req-8c4ae982-b588-47a1-9f52-11202d77bbd8 req-f889f9fd-955a-4230-83fe-11b47c9bdf78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:09:45 np0005466030 nova_compute[230518]: 2025-10-02 13:09:45.060 2 DEBUG nova.network.neutron [req-8c4ae982-b588-47a1-9f52-11202d77bbd8 req-f889f9fd-955a-4230-83fe-11b47c9bdf78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Refreshing network info cache for port 15cb070c-0f52-464f-a2b4-8597c15212e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:09:45 np0005466030 nova_compute[230518]: 2025-10-02 13:09:45.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:45.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:09:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:46.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:09:47 np0005466030 nova_compute[230518]: 2025-10-02 13:09:47.533 2 DEBUG nova.network.neutron [req-8c4ae982-b588-47a1-9f52-11202d77bbd8 req-f889f9fd-955a-4230-83fe-11b47c9bdf78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updated VIF entry in instance network info cache for port 15cb070c-0f52-464f-a2b4-8597c15212e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:09:47 np0005466030 nova_compute[230518]: 2025-10-02 13:09:47.534 2 DEBUG nova.network.neutron [req-8c4ae982-b588-47a1-9f52-11202d77bbd8 req-f889f9fd-955a-4230-83fe-11b47c9bdf78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updating instance_info_cache with network_info: [{"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:09:47 np0005466030 nova_compute[230518]: 2025-10-02 13:09:47.583 2 DEBUG oslo_concurrency.lockutils [req-8c4ae982-b588-47a1-9f52-11202d77bbd8 req-f889f9fd-955a-4230-83fe-11b47c9bdf78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:09:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:47.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:48.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:48 np0005466030 nova_compute[230518]: 2025-10-02 13:09:48.549 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:48 np0005466030 nova_compute[230518]: 2025-10-02 13:09:48.550 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:48 np0005466030 nova_compute[230518]: 2025-10-02 13:09:48.565 2 DEBUG nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:09:48 np0005466030 nova_compute[230518]: 2025-10-02 13:09:48.647 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:48 np0005466030 nova_compute[230518]: 2025-10-02 13:09:48.648 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:48 np0005466030 nova_compute[230518]: 2025-10-02 13:09:48.655 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:09:48 np0005466030 nova_compute[230518]: 2025-10-02 13:09:48.656 2 INFO nova.compute.claims [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:09:48 np0005466030 nova_compute[230518]: 2025-10-02 13:09:48.779 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:09:49 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/977708829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.214 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.220 2 DEBUG nova.compute.provider_tree [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.245 2 DEBUG nova.scheduler.client.report [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.274 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.276 2 DEBUG nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.336 2 DEBUG nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.337 2 DEBUG nova.network.neutron [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.390 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410574.388958, b4640b6e-b1e0-4168-9970-c5d05a0e1621 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.391 2 INFO nova.compute.manager [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.393 2 INFO nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.425 2 DEBUG nova.compute.manager [None req-50d4713b-70b7-4692-9594-0693ab21d92e - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.426 2 DEBUG nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.503 2 DEBUG nova.policy [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '156cc6022c70402ab6d194a340b076d5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.529 2 DEBUG nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.530 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.530 2 INFO nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Creating image(s)#033[00m
Oct  2 09:09:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.559 2 DEBUG nova.storage.rbd_utils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image de995ad8-07bb-4097-899b-5c79d62a1f4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.590 2 DEBUG nova.storage.rbd_utils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image de995ad8-07bb-4097-899b-5c79d62a1f4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.616 2 DEBUG nova.storage.rbd_utils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image de995ad8-07bb-4097-899b-5c79d62a1f4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.620 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.692 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.693 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.694 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.694 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.717 2 DEBUG nova.storage.rbd_utils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image de995ad8-07bb-4097-899b-5c79d62a1f4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:09:49 np0005466030 nova_compute[230518]: 2025-10-02 13:09:49.720 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 de995ad8-07bb-4097-899b-5c79d62a1f4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:09:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:09:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:49.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:50.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:50 np0005466030 nova_compute[230518]: 2025-10-02 13:09:50.167 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 de995ad8-07bb-4097-899b-5c79d62a1f4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:50 np0005466030 nova_compute[230518]: 2025-10-02 13:09:50.251 2 DEBUG nova.storage.rbd_utils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] resizing rbd image de995ad8-07bb-4097-899b-5c79d62a1f4c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:09:50 np0005466030 nova_compute[230518]: 2025-10-02 13:09:50.295 2 DEBUG nova.network.neutron [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Successfully created port: 513c3d66-613d-4626-8ab0-58520113de32 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:09:50 np0005466030 nova_compute[230518]: 2025-10-02 13:09:50.375 2 DEBUG nova.objects.instance [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'migration_context' on Instance uuid de995ad8-07bb-4097-899b-5c79d62a1f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:50 np0005466030 nova_compute[230518]: 2025-10-02 13:09:50.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:50 np0005466030 nova_compute[230518]: 2025-10-02 13:09:50.394 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:09:50 np0005466030 nova_compute[230518]: 2025-10-02 13:09:50.394 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Ensure instance console log exists: /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:09:50 np0005466030 nova_compute[230518]: 2025-10-02 13:09:50.395 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:50 np0005466030 nova_compute[230518]: 2025-10-02 13:09:50.395 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:50 np0005466030 nova_compute[230518]: 2025-10-02 13:09:50.396 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:51 np0005466030 nova_compute[230518]: 2025-10-02 13:09:51.188 2 DEBUG nova.network.neutron [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Successfully updated port: 513c3d66-613d-4626-8ab0-58520113de32 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:09:51 np0005466030 nova_compute[230518]: 2025-10-02 13:09:51.202 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:09:51 np0005466030 nova_compute[230518]: 2025-10-02 13:09:51.203 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquired lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:09:51 np0005466030 nova_compute[230518]: 2025-10-02 13:09:51.203 2 DEBUG nova.network.neutron [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:09:51 np0005466030 nova_compute[230518]: 2025-10-02 13:09:51.374 2 DEBUG nova.network.neutron [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:09:51 np0005466030 nova_compute[230518]: 2025-10-02 13:09:51.523 2 DEBUG nova.compute.manager [req-4b12e57a-b837-4399-8362-7710d2042208 req-3ceb7911-3ce1-4d1c-8e79-bfec6b45ea17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-changed-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:09:51 np0005466030 nova_compute[230518]: 2025-10-02 13:09:51.523 2 DEBUG nova.compute.manager [req-4b12e57a-b837-4399-8362-7710d2042208 req-3ceb7911-3ce1-4d1c-8e79-bfec6b45ea17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing instance network info cache due to event network-changed-513c3d66-613d-4626-8ab0-58520113de32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:09:51 np0005466030 nova_compute[230518]: 2025-10-02 13:09:51.524 2 DEBUG oslo_concurrency.lockutils [req-4b12e57a-b837-4399-8362-7710d2042208 req-3ceb7911-3ce1-4d1c-8e79-bfec6b45ea17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:09:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:51.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:52.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.305 2 DEBUG nova.network.neutron [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating instance_info_cache with network_info: [{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.326 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Releasing lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.326 2 DEBUG nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Instance network_info: |[{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.327 2 DEBUG oslo_concurrency.lockutils [req-4b12e57a-b837-4399-8362-7710d2042208 req-3ceb7911-3ce1-4d1c-8e79-bfec6b45ea17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.327 2 DEBUG nova.network.neutron [req-4b12e57a-b837-4399-8362-7710d2042208 req-3ceb7911-3ce1-4d1c-8e79-bfec6b45ea17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing network info cache for port 513c3d66-613d-4626-8ab0-58520113de32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.331 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Start _get_guest_xml network_info=[{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.336 2 WARNING nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.342 2 DEBUG nova.virt.libvirt.host [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.342 2 DEBUG nova.virt.libvirt.host [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.347 2 DEBUG nova.virt.libvirt.host [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.347 2 DEBUG nova.virt.libvirt.host [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.349 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.349 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.350 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.350 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.351 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.351 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.351 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.351 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.352 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.352 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.352 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.353 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.356 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:09:52 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3761765269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.829 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.858 2 DEBUG nova.storage.rbd_utils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image de995ad8-07bb-4097-899b-5c79d62a1f4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:09:52 np0005466030 nova_compute[230518]: 2025-10-02 13:09:52.862 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:09:53 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2259096278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.326 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.327 2 DEBUG nova.virt.libvirt.vif [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:09:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=200,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-5qy16z2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_a
llocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:09:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=de995ad8-07bb-4097-899b-5c79d62a1f4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.328 2 DEBUG nova.network.os_vif_util [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.329 2 DEBUG nova.network.os_vif_util [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.330 2 DEBUG nova.objects.instance [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'pci_devices' on Instance uuid de995ad8-07bb-4097-899b-5c79d62a1f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.360 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:09:53 np0005466030 nova_compute[230518]:  <uuid>de995ad8-07bb-4097-899b-5c79d62a1f4c</uuid>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:  <name>instance-000000c8</name>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <nova:name>multiattach-server-1</nova:name>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:09:52</nova:creationTime>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:09:53 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:        <nova:user uuid="156cc6022c70402ab6d194a340b076d5">tempest-AttachVolumeMultiAttachTest-2011266702-project-member</nova:user>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:        <nova:project uuid="9f85b8f387b146d29eabe946c4fbdee8">tempest-AttachVolumeMultiAttachTest-2011266702</nova:project>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:        <nova:port uuid="513c3d66-613d-4626-8ab0-58520113de32">
Oct  2 09:09:53 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <entry name="serial">de995ad8-07bb-4097-899b-5c79d62a1f4c</entry>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <entry name="uuid">de995ad8-07bb-4097-899b-5c79d62a1f4c</entry>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/de995ad8-07bb-4097-899b-5c79d62a1f4c_disk">
Oct  2 09:09:53 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:09:53 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/de995ad8-07bb-4097-899b-5c79d62a1f4c_disk.config">
Oct  2 09:09:53 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:09:53 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:9a:bc:4e"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <target dev="tap513c3d66-61"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/console.log" append="off"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:09:53 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:09:53 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:09:53 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:09:53 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.362 2 DEBUG nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Preparing to wait for external event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.363 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.363 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.363 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.364 2 DEBUG nova.virt.libvirt.vif [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:09:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=200,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-5qy16z2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0'
,network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:09:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=de995ad8-07bb-4097-899b-5c79d62a1f4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.364 2 DEBUG nova.network.os_vif_util [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.365 2 DEBUG nova.network.os_vif_util [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.366 2 DEBUG os_vif [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.367 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.368 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.371 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap513c3d66-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.372 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap513c3d66-61, col_values=(('external_ids', {'iface-id': '513c3d66-613d-4626-8ab0-58520113de32', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:bc:4e', 'vm-uuid': 'de995ad8-07bb-4097-899b-5c79d62a1f4c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:53 np0005466030 NetworkManager[44960]: <info>  [1759410593.3747] manager: (tap513c3d66-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/378)
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.381 2 INFO os_vif [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61')#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.452 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.452 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.453 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No VIF found with MAC fa:16:3e:9a:bc:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.453 2 INFO nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Using config drive#033[00m
Oct  2 09:09:53 np0005466030 nova_compute[230518]: 2025-10-02 13:09:53.491 2 DEBUG nova.storage.rbd_utils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image de995ad8-07bb-4097-899b-5c79d62a1f4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:09:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:53.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:54.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:54 np0005466030 nova_compute[230518]: 2025-10-02 13:09:54.294 2 INFO nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Creating config drive at /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/disk.config#033[00m
Oct  2 09:09:54 np0005466030 nova_compute[230518]: 2025-10-02 13:09:54.301 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplurrrzzg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:54 np0005466030 nova_compute[230518]: 2025-10-02 13:09:54.339 2 DEBUG nova.network.neutron [req-4b12e57a-b837-4399-8362-7710d2042208 req-3ceb7911-3ce1-4d1c-8e79-bfec6b45ea17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updated VIF entry in instance network info cache for port 513c3d66-613d-4626-8ab0-58520113de32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:09:54 np0005466030 nova_compute[230518]: 2025-10-02 13:09:54.340 2 DEBUG nova.network.neutron [req-4b12e57a-b837-4399-8362-7710d2042208 req-3ceb7911-3ce1-4d1c-8e79-bfec6b45ea17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating instance_info_cache with network_info: [{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:09:54 np0005466030 nova_compute[230518]: 2025-10-02 13:09:54.359 2 DEBUG oslo_concurrency.lockutils [req-4b12e57a-b837-4399-8362-7710d2042208 req-3ceb7911-3ce1-4d1c-8e79-bfec6b45ea17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:09:54 np0005466030 nova_compute[230518]: 2025-10-02 13:09:54.450 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplurrrzzg" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:54 np0005466030 nova_compute[230518]: 2025-10-02 13:09:54.476 2 DEBUG nova.storage.rbd_utils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image de995ad8-07bb-4097-899b-5c79d62a1f4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:09:54 np0005466030 nova_compute[230518]: 2025-10-02 13:09:54.480 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/disk.config de995ad8-07bb-4097-899b-5c79d62a1f4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:09:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:54 np0005466030 nova_compute[230518]: 2025-10-02 13:09:54.786 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/disk.config de995ad8-07bb-4097-899b-5c79d62a1f4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:09:54 np0005466030 nova_compute[230518]: 2025-10-02 13:09:54.787 2 INFO nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Deleting local config drive /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/disk.config because it was imported into RBD.#033[00m
Oct  2 09:09:54 np0005466030 podman[306170]: 2025-10-02 13:09:54.840177489 +0000 UTC m=+0.075518572 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:09:54 np0005466030 kernel: tap513c3d66-61: entered promiscuous mode
Oct  2 09:09:54 np0005466030 NetworkManager[44960]: <info>  [1759410594.8503] manager: (tap513c3d66-61): new Tun device (/org/freedesktop/NetworkManager/Devices/379)
Oct  2 09:09:54 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:54Z|00815|binding|INFO|Claiming lport 513c3d66-613d-4626-8ab0-58520113de32 for this chassis.
Oct  2 09:09:54 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:54Z|00816|binding|INFO|513c3d66-613d-4626-8ab0-58520113de32: Claiming fa:16:3e:9a:bc:4e 10.100.0.4
Oct  2 09:09:54 np0005466030 nova_compute[230518]: 2025-10-02 13:09:54.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.862 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:bc:4e 10.100.0.4'], port_security=['fa:16:3e:9a:bc:4e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'de995ad8-07bb-4097-899b-5c79d62a1f4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c95f312a-09a8-4e2c-af55-3ef0a0e41bfc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=513c3d66-613d-4626-8ab0-58520113de32) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:09:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.863 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 513c3d66-613d-4626-8ab0-58520113de32 in datapath d9001b9c-bca6-4085-a954-1414269e31bc bound to our chassis#033[00m
Oct  2 09:09:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.865 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9001b9c-bca6-4085-a954-1414269e31bc#033[00m
Oct  2 09:09:54 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:54Z|00817|binding|INFO|Setting lport 513c3d66-613d-4626-8ab0-58520113de32 ovn-installed in OVS
Oct  2 09:09:54 np0005466030 ovn_controller[129257]: 2025-10-02T13:09:54Z|00818|binding|INFO|Setting lport 513c3d66-613d-4626-8ab0-58520113de32 up in Southbound
Oct  2 09:09:54 np0005466030 nova_compute[230518]: 2025-10-02 13:09:54.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:54 np0005466030 nova_compute[230518]: 2025-10-02 13:09:54.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:54 np0005466030 podman[306169]: 2025-10-02 13:09:54.884236883 +0000 UTC m=+0.116986786 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 09:09:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.885 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2d26ff21-0945-4f3f-9b8f-a8e87e0d7f9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:54 np0005466030 systemd-udevd[306223]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:09:54 np0005466030 systemd-machined[188247]: New machine qemu-93-instance-000000c8.
Oct  2 09:09:54 np0005466030 NetworkManager[44960]: <info>  [1759410594.9053] device (tap513c3d66-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:09:54 np0005466030 NetworkManager[44960]: <info>  [1759410594.9059] device (tap513c3d66-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:09:54 np0005466030 systemd[1]: Started Virtual Machine qemu-93-instance-000000c8.
Oct  2 09:09:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.922 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7f72bbd4-b001-43a8-bd86-8e39d3e0ec14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.925 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[20a05a50-73c0-401c-af9e-659ea9b30229]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.953 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[fe364483-98fd-4c67-b98e-0f14c0f46d88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.969 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5933e86e-3117-438b-a757-4512527d2073]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840857, 'reachable_time': 19345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306236, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.986 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[de5f0608-01f7-4250-bfa0-a9c5e9524d16]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 840868, 'tstamp': 840868}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306237, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 840870, 'tstamp': 840870}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306237, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:09:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.988 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:55 np0005466030 nova_compute[230518]: 2025-10-02 13:09:55.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:55.023 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9001b9c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:55.023 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:09:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:55.023 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9001b9c-b0, col_values=(('external_ids', {'iface-id': 'aa788301-8c47-4421-b693-3b37cb064ae2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:09:55 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:09:55.024 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:09:55 np0005466030 nova_compute[230518]: 2025-10-02 13:09:55.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:09:55 np0005466030 nova_compute[230518]: 2025-10-02 13:09:55.425 2 DEBUG nova.compute.manager [req-ac51c553-2bf2-4617-87f2-accb534ce7dc req-53251163-ac70-4d28-931e-559a6de52164 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:09:55 np0005466030 nova_compute[230518]: 2025-10-02 13:09:55.426 2 DEBUG oslo_concurrency.lockutils [req-ac51c553-2bf2-4617-87f2-accb534ce7dc req-53251163-ac70-4d28-931e-559a6de52164 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:09:55 np0005466030 nova_compute[230518]: 2025-10-02 13:09:55.426 2 DEBUG oslo_concurrency.lockutils [req-ac51c553-2bf2-4617-87f2-accb534ce7dc req-53251163-ac70-4d28-931e-559a6de52164 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:09:55 np0005466030 nova_compute[230518]: 2025-10-02 13:09:55.426 2 DEBUG oslo_concurrency.lockutils [req-ac51c553-2bf2-4617-87f2-accb534ce7dc req-53251163-ac70-4d28-931e-559a6de52164 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:09:55 np0005466030 nova_compute[230518]: 2025-10-02 13:09:55.427 2 DEBUG nova.compute.manager [req-ac51c553-2bf2-4617-87f2-accb534ce7dc req-53251163-ac70-4d28-931e-559a6de52164 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Processing event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:09:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:55.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.030 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410596.0298114, de995ad8-07bb-4097-899b-5c79d62a1f4c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.030 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] VM Started (Lifecycle Event)
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.032 2 DEBUG nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.035 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.037 2 INFO nova.virt.libvirt.driver [-] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Instance spawned successfully.
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.037 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.058 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.062 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.062 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.063 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.063 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.063 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.064 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.067 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.097 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.098 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410596.0308108, de995ad8-07bb-4097-899b-5c79d62a1f4c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.098 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] VM Paused (Lifecycle Event)
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.119 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.122 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410596.0343573, de995ad8-07bb-4097-899b-5c79d62a1f4c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.122 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] VM Resumed (Lifecycle Event)
Oct  2 09:09:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:56.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.130 2 INFO nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Took 6.60 seconds to spawn the instance on the hypervisor.
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.131 2 DEBUG nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.139 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.142 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.162 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.189 2 INFO nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Took 7.57 seconds to build instance.
Oct  2 09:09:56 np0005466030 nova_compute[230518]: 2025-10-02 13:09:56.203 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:09:57 np0005466030 nova_compute[230518]: 2025-10-02 13:09:57.549 2 DEBUG nova.compute.manager [req-ab7b91c4-ef75-45c2-bbb9-20dffb5790c3 req-2579eb34-e4fc-4fa3-b9ff-499a574d6897 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:09:57 np0005466030 nova_compute[230518]: 2025-10-02 13:09:57.549 2 DEBUG oslo_concurrency.lockutils [req-ab7b91c4-ef75-45c2-bbb9-20dffb5790c3 req-2579eb34-e4fc-4fa3-b9ff-499a574d6897 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:09:57 np0005466030 nova_compute[230518]: 2025-10-02 13:09:57.550 2 DEBUG oslo_concurrency.lockutils [req-ab7b91c4-ef75-45c2-bbb9-20dffb5790c3 req-2579eb34-e4fc-4fa3-b9ff-499a574d6897 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:09:57 np0005466030 nova_compute[230518]: 2025-10-02 13:09:57.550 2 DEBUG oslo_concurrency.lockutils [req-ab7b91c4-ef75-45c2-bbb9-20dffb5790c3 req-2579eb34-e4fc-4fa3-b9ff-499a574d6897 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:09:57 np0005466030 nova_compute[230518]: 2025-10-02 13:09:57.550 2 DEBUG nova.compute.manager [req-ab7b91c4-ef75-45c2-bbb9-20dffb5790c3 req-2579eb34-e4fc-4fa3-b9ff-499a574d6897 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] No waiting events found dispatching network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  2 09:09:57 np0005466030 nova_compute[230518]: 2025-10-02 13:09:57.550 2 WARNING nova.compute.manager [req-ab7b91c4-ef75-45c2-bbb9-20dffb5790c3 req-2579eb34-e4fc-4fa3-b9ff-499a574d6897 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received unexpected event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 for instance with vm_state active and task_state None.
Oct  2 09:09:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:57.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:09:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:58.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:09:58 np0005466030 nova_compute[230518]: 2025-10-02 13:09:58.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:09:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:09:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:09:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:09:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:59.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:00.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:00 np0005466030 nova_compute[230518]: 2025-10-02 13:10:00.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:10:00 np0005466030 ceph-mon[80926]: overall HEALTH_OK
Oct  2 09:10:01 np0005466030 nova_compute[230518]: 2025-10-02 13:10:01.466 2 DEBUG nova.compute.manager [req-c271188b-2b58-4c73-a3d8-47abfcea0920 req-3eb32e5a-df7a-4afc-aa11-d1380ac707af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-changed-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:10:01 np0005466030 nova_compute[230518]: 2025-10-02 13:10:01.467 2 DEBUG nova.compute.manager [req-c271188b-2b58-4c73-a3d8-47abfcea0920 req-3eb32e5a-df7a-4afc-aa11-d1380ac707af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing instance network info cache due to event network-changed-513c3d66-613d-4626-8ab0-58520113de32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 09:10:01 np0005466030 nova_compute[230518]: 2025-10-02 13:10:01.467 2 DEBUG oslo_concurrency.lockutils [req-c271188b-2b58-4c73-a3d8-47abfcea0920 req-3eb32e5a-df7a-4afc-aa11-d1380ac707af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:10:01 np0005466030 nova_compute[230518]: 2025-10-02 13:10:01.467 2 DEBUG oslo_concurrency.lockutils [req-c271188b-2b58-4c73-a3d8-47abfcea0920 req-3eb32e5a-df7a-4afc-aa11-d1380ac707af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:10:01 np0005466030 nova_compute[230518]: 2025-10-02 13:10:01.467 2 DEBUG nova.network.neutron [req-c271188b-2b58-4c73-a3d8-47abfcea0920 req-3eb32e5a-df7a-4afc-aa11-d1380ac707af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing network info cache for port 513c3d66-613d-4626-8ab0-58520113de32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 09:10:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:01.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:02.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:10:03 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3483817046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:10:03 np0005466030 nova_compute[230518]: 2025-10-02 13:10:03.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:10:03 np0005466030 nova_compute[230518]: 2025-10-02 13:10:03.937 2 DEBUG nova.network.neutron [req-c271188b-2b58-4c73-a3d8-47abfcea0920 req-3eb32e5a-df7a-4afc-aa11-d1380ac707af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updated VIF entry in instance network info cache for port 513c3d66-613d-4626-8ab0-58520113de32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  2 09:10:03 np0005466030 nova_compute[230518]: 2025-10-02 13:10:03.938 2 DEBUG nova.network.neutron [req-c271188b-2b58-4c73-a3d8-47abfcea0920 req-3eb32e5a-df7a-4afc-aa11-d1380ac707af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating instance_info_cache with network_info: [{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:10:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:03.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:03 np0005466030 nova_compute[230518]: 2025-10-02 13:10:03.969 2 DEBUG oslo_concurrency.lockutils [req-c271188b-2b58-4c73-a3d8-47abfcea0920 req-3eb32e5a-df7a-4afc-aa11-d1380ac707af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:10:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:04.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:04 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Oct  2 09:10:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:04 np0005466030 nova_compute[230518]: 2025-10-02 13:10:04.656 2 DEBUG oslo_concurrency.lockutils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:10:04 np0005466030 nova_compute[230518]: 2025-10-02 13:10:04.656 2 DEBUG oslo_concurrency.lockutils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:10:04 np0005466030 nova_compute[230518]: 2025-10-02 13:10:04.676 2 DEBUG nova.objects.instance [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'flavor' on Instance uuid de995ad8-07bb-4097-899b-5c79d62a1f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:10:04 np0005466030 nova_compute[230518]: 2025-10-02 13:10:04.764 2 DEBUG oslo_concurrency.lockutils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:10:04 np0005466030 podman[306281]: 2025-10-02 13:10:04.820708489 +0000 UTC m=+0.068121199 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:10:04 np0005466030 podman[306282]: 2025-10-02 13:10:04.821747002 +0000 UTC m=+0.068897374 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.000 2 DEBUG oslo_concurrency.lockutils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.001 2 DEBUG oslo_concurrency.lockutils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.001 2 INFO nova.compute.manager [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Attaching volume 8347daf9-f32f-4c50-b89e-df9e913044db to /dev/vdb
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.290 2 DEBUG os_brick.utils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.293 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.309 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.309 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[501b2869-3eb8-4229-b4e0-a129f79ded04]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.313 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.323 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.325 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f4dae2-f967-4671-971e-cb003e5ae290]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.329 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.340 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.340 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b44999-9a3e-4051-a3ed-81e8c697e302]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.343 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[68c67cca-bb6d-4855-9326-357b12507bbb]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.344 2 DEBUG oslo_concurrency.processutils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.401 2 DEBUG oslo_concurrency.processutils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "nvme version" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.404 2 DEBUG os_brick.initiator.connectors.lightos [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.404 2 DEBUG os_brick.initiator.connectors.lightos [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.404 2 DEBUG os_brick.initiator.connectors.lightos [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.405 2 DEBUG os_brick.utils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] <== get_connector_properties: return (113ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct  2 09:10:05 np0005466030 nova_compute[230518]: 2025-10-02 13:10:05.405 2 DEBUG nova.virt.block_device [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating existing volume attachment record: 090235a9-9281-4043-bb90-ab5bad31a26e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 09:10:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:05.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:06.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:06 np0005466030 nova_compute[230518]: 2025-10-02 13:10:06.280 2 DEBUG nova.objects.instance [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'flavor' on Instance uuid de995ad8-07bb-4097-899b-5c79d62a1f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:10:06 np0005466030 nova_compute[230518]: 2025-10-02 13:10:06.304 2 DEBUG nova.virt.libvirt.driver [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Attempting to attach volume 8347daf9-f32f-4c50-b89e-df9e913044db with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 09:10:06 np0005466030 nova_compute[230518]: 2025-10-02 13:10:06.307 2 DEBUG nova.virt.libvirt.guest [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 09:10:06 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:10:06 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-8347daf9-f32f-4c50-b89e-df9e913044db">
Oct  2 09:10:06 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:10:06 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:10:06 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:10:06 np0005466030 nova_compute[230518]:  </source>
Oct  2 09:10:06 np0005466030 nova_compute[230518]:  <auth username="openstack">
Oct  2 09:10:06 np0005466030 nova_compute[230518]:    <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:10:06 np0005466030 nova_compute[230518]:  </auth>
Oct  2 09:10:06 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:10:06 np0005466030 nova_compute[230518]:  <serial>8347daf9-f32f-4c50-b89e-df9e913044db</serial>
Oct  2 09:10:06 np0005466030 nova_compute[230518]:  <shareable/>
Oct  2 09:10:06 np0005466030 nova_compute[230518]: </disk>
Oct  2 09:10:06 np0005466030 nova_compute[230518]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 09:10:06 np0005466030 nova_compute[230518]: 2025-10-02 13:10:06.476 2 DEBUG nova.virt.libvirt.driver [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:10:06 np0005466030 nova_compute[230518]: 2025-10-02 13:10:06.477 2 DEBUG nova.virt.libvirt.driver [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:10:06 np0005466030 nova_compute[230518]: 2025-10-02 13:10:06.477 2 DEBUG nova.virt.libvirt.driver [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:10:06 np0005466030 nova_compute[230518]: 2025-10-02 13:10:06.477 2 DEBUG nova.virt.libvirt.driver [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No VIF found with MAC fa:16:3e:9a:bc:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:10:06 np0005466030 nova_compute[230518]: 2025-10-02 13:10:06.673 2 DEBUG oslo_concurrency.lockutils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:07.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:08.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:08 np0005466030 nova_compute[230518]: 2025-10-02 13:10:08.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:08 np0005466030 nova_compute[230518]: 2025-10-02 13:10:08.591 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "28cee0c6-0008-45f8-af11-48abbbbcb22c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:08 np0005466030 nova_compute[230518]: 2025-10-02 13:10:08.591 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:08 np0005466030 nova_compute[230518]: 2025-10-02 13:10:08.609 2 DEBUG nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:10:08 np0005466030 nova_compute[230518]: 2025-10-02 13:10:08.690 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:08 np0005466030 nova_compute[230518]: 2025-10-02 13:10:08.691 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:08 np0005466030 nova_compute[230518]: 2025-10-02 13:10:08.702 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:10:08 np0005466030 nova_compute[230518]: 2025-10-02 13:10:08.703 2 INFO nova.compute.claims [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:10:08 np0005466030 nova_compute[230518]: 2025-10-02 13:10:08.987 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:10:09 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/918586300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.401 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.409 2 DEBUG nova.compute.provider_tree [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.429 2 DEBUG nova.scheduler.client.report [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.459 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.460 2 DEBUG nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.501 2 DEBUG nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.501 2 DEBUG nova.network.neutron [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.523 2 INFO nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.552 2 DEBUG nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:10:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.650 2 DEBUG nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.651 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.651 2 INFO nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Creating image(s)#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.672 2 DEBUG nova.storage.rbd_utils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.701 2 DEBUG nova.storage.rbd_utils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.725 2 DEBUG nova.storage.rbd_utils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.728 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.796 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.796 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.797 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.797 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.824 2 DEBUG nova.storage.rbd_utils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.828 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:09 np0005466030 nova_compute[230518]: 2025-10-02 13:10:09.919 2 DEBUG nova.policy [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '362b536431b64b15b67740060af57e9c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e911de934ec043d1bd942c8aed562d04', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:10:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:09.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:10 np0005466030 ovn_controller[129257]: 2025-10-02T13:10:10Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9a:bc:4e 10.100.0.4
Oct  2 09:10:10 np0005466030 ovn_controller[129257]: 2025-10-02T13:10:10Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9a:bc:4e 10.100.0.4
Oct  2 09:10:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:10.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:10 np0005466030 nova_compute[230518]: 2025-10-02 13:10:10.160 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:10 np0005466030 nova_compute[230518]: 2025-10-02 13:10:10.246 2 DEBUG nova.storage.rbd_utils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] resizing rbd image 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:10:10 np0005466030 nova_compute[230518]: 2025-10-02 13:10:10.372 2 DEBUG nova.objects.instance [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'migration_context' on Instance uuid 28cee0c6-0008-45f8-af11-48abbbbcb22c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:10:10 np0005466030 nova_compute[230518]: 2025-10-02 13:10:10.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:10 np0005466030 nova_compute[230518]: 2025-10-02 13:10:10.387 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:10:10 np0005466030 nova_compute[230518]: 2025-10-02 13:10:10.387 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Ensure instance console log exists: /var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:10:10 np0005466030 nova_compute[230518]: 2025-10-02 13:10:10.388 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:10 np0005466030 nova_compute[230518]: 2025-10-02 13:10:10.388 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:10 np0005466030 nova_compute[230518]: 2025-10-02 13:10:10.389 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:11 np0005466030 nova_compute[230518]: 2025-10-02 13:10:11.306 2 DEBUG nova.network.neutron [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Successfully created port: faebc160-66b6-4ba2-ab02-2b0098eef804 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:10:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:11.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:12.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:12 np0005466030 nova_compute[230518]: 2025-10-02 13:10:12.253 2 DEBUG nova.network.neutron [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Successfully updated port: faebc160-66b6-4ba2-ab02-2b0098eef804 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:10:12 np0005466030 nova_compute[230518]: 2025-10-02 13:10:12.266 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:10:12 np0005466030 nova_compute[230518]: 2025-10-02 13:10:12.266 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquired lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:10:12 np0005466030 nova_compute[230518]: 2025-10-02 13:10:12.267 2 DEBUG nova.network.neutron [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:10:12 np0005466030 nova_compute[230518]: 2025-10-02 13:10:12.366 2 DEBUG nova.compute.manager [req-0e47e726-765b-4dcf-b992-ff434624593f req-682ea374-4489-4c9c-a341-61f1a182413f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received event network-changed-faebc160-66b6-4ba2-ab02-2b0098eef804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:12 np0005466030 nova_compute[230518]: 2025-10-02 13:10:12.367 2 DEBUG nova.compute.manager [req-0e47e726-765b-4dcf-b992-ff434624593f req-682ea374-4489-4c9c-a341-61f1a182413f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Refreshing instance network info cache due to event network-changed-faebc160-66b6-4ba2-ab02-2b0098eef804. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:10:12 np0005466030 nova_compute[230518]: 2025-10-02 13:10:12.367 2 DEBUG oslo_concurrency.lockutils [req-0e47e726-765b-4dcf-b992-ff434624593f req-682ea374-4489-4c9c-a341-61f1a182413f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:10:12 np0005466030 nova_compute[230518]: 2025-10-02 13:10:12.432 2 DEBUG nova.network.neutron [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.589 2 DEBUG nova.network.neutron [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Updating instance_info_cache with network_info: [{"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.609 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Releasing lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.609 2 DEBUG nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Instance network_info: |[{"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.609 2 DEBUG oslo_concurrency.lockutils [req-0e47e726-765b-4dcf-b992-ff434624593f req-682ea374-4489-4c9c-a341-61f1a182413f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.610 2 DEBUG nova.network.neutron [req-0e47e726-765b-4dcf-b992-ff434624593f req-682ea374-4489-4c9c-a341-61f1a182413f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Refreshing network info cache for port faebc160-66b6-4ba2-ab02-2b0098eef804 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.615 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Start _get_guest_xml network_info=[{"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.621 2 WARNING nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.630 2 DEBUG nova.virt.libvirt.host [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.631 2 DEBUG nova.virt.libvirt.host [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.635 2 DEBUG nova.virt.libvirt.host [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.635 2 DEBUG nova.virt.libvirt.host [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.637 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.638 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.639 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.639 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.639 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.640 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.640 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.640 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.641 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.641 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.641 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.642 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:10:13 np0005466030 nova_compute[230518]: 2025-10-02 13:10:13.646 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:13.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:10:14 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1576777115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.080 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.104 2 DEBUG nova.storage.rbd_utils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.107 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:14.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:10:14 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2097443980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.554 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.555 2 DEBUG nova.virt.libvirt.vif [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-16464808',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-16464808',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ge',id=202,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHH1HinzjUD7Q3tXKDfndQMl/GmvwtyywSMW01vuBjE0ArFGpxG7DyhVBxJNWD31t8BOgD0+NBlvzrAymSVFz2iPnx4lrKVlC4HjLQHFgeB7PDLQzvsLeeffGrKOfE8BLQ==',key_name='tempest-TestSecurityGroupsBasicOps-874748035',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-x8b2athb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:10:09Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=28cee0c6-0008-45f8-af11-48abbbbcb22c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.556 2 DEBUG nova.network.os_vif_util [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.556 2 DEBUG nova.network.os_vif_util [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:46:ff,bridge_name='br-int',has_traffic_filtering=True,id=faebc160-66b6-4ba2-ab02-2b0098eef804,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaebc160-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.557 2 DEBUG nova.objects.instance [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'pci_devices' on Instance uuid 28cee0c6-0008-45f8-af11-48abbbbcb22c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:10:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.575 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:10:14 np0005466030 nova_compute[230518]:  <uuid>28cee0c6-0008-45f8-af11-48abbbbcb22c</uuid>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:  <name>instance-000000ca</name>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-16464808</nova:name>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:10:13</nova:creationTime>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:10:14 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:        <nova:user uuid="362b536431b64b15b67740060af57e9c">tempest-TestSecurityGroupsBasicOps-2067500093-project-member</nova:user>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:        <nova:project uuid="e911de934ec043d1bd942c8aed562d04">tempest-TestSecurityGroupsBasicOps-2067500093</nova:project>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:        <nova:port uuid="faebc160-66b6-4ba2-ab02-2b0098eef804">
Oct  2 09:10:14 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <entry name="serial">28cee0c6-0008-45f8-af11-48abbbbcb22c</entry>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <entry name="uuid">28cee0c6-0008-45f8-af11-48abbbbcb22c</entry>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/28cee0c6-0008-45f8-af11-48abbbbcb22c_disk">
Oct  2 09:10:14 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:10:14 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/28cee0c6-0008-45f8-af11-48abbbbcb22c_disk.config">
Oct  2 09:10:14 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:10:14 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:4f:46:ff"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <target dev="tapfaebc160-66"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c/console.log" append="off"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:10:14 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:10:14 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:10:14 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:10:14 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.577 2 DEBUG nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Preparing to wait for external event network-vif-plugged-faebc160-66b6-4ba2-ab02-2b0098eef804 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.577 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.578 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.578 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.579 2 DEBUG nova.virt.libvirt.vif [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-16464808',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-16464808',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ge',id=202,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHH1HinzjUD7Q3tXKDfndQMl/GmvwtyywSMW01vuBjE0ArFGpxG7DyhVBxJNWD31t8BOgD0+NBlvzrAymSVFz2iPnx4lrKVlC4HjLQHFgeB7PDLQzvsLeeffGrKOfE8BLQ==',key_name='tempest-TestSecurityGroupsBasicOps-874748035',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-x8b2athb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:10:09Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=28cee0c6-0008-45f8-af11-48abbbbcb22c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.579 2 DEBUG nova.network.os_vif_util [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.580 2 DEBUG nova.network.os_vif_util [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:46:ff,bridge_name='br-int',has_traffic_filtering=True,id=faebc160-66b6-4ba2-ab02-2b0098eef804,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaebc160-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.581 2 DEBUG os_vif [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:46:ff,bridge_name='br-int',has_traffic_filtering=True,id=faebc160-66b6-4ba2-ab02-2b0098eef804,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaebc160-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.582 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.582 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.585 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfaebc160-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfaebc160-66, col_values=(('external_ids', {'iface-id': 'faebc160-66b6-4ba2-ab02-2b0098eef804', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4f:46:ff', 'vm-uuid': '28cee0c6-0008-45f8-af11-48abbbbcb22c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:14 np0005466030 NetworkManager[44960]: <info>  [1759410614.5887] manager: (tapfaebc160-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.599 2 INFO os_vif [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:46:ff,bridge_name='br-int',has_traffic_filtering=True,id=faebc160-66b6-4ba2-ab02-2b0098eef804,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaebc160-66')#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.675 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.677 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.677 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No VIF found with MAC fa:16:3e:4f:46:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.678 2 INFO nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Using config drive#033[00m
Oct  2 09:10:14 np0005466030 nova_compute[230518]: 2025-10-02 13:10:14.704 2 DEBUG nova.storage.rbd_utils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:10:15 np0005466030 nova_compute[230518]: 2025-10-02 13:10:15.271 2 DEBUG nova.network.neutron [req-0e47e726-765b-4dcf-b992-ff434624593f req-682ea374-4489-4c9c-a341-61f1a182413f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Updated VIF entry in instance network info cache for port faebc160-66b6-4ba2-ab02-2b0098eef804. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:10:15 np0005466030 nova_compute[230518]: 2025-10-02 13:10:15.272 2 DEBUG nova.network.neutron [req-0e47e726-765b-4dcf-b992-ff434624593f req-682ea374-4489-4c9c-a341-61f1a182413f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Updating instance_info_cache with network_info: [{"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:10:15 np0005466030 nova_compute[230518]: 2025-10-02 13:10:15.291 2 INFO nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Creating config drive at /var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c/disk.config#033[00m
Oct  2 09:10:15 np0005466030 nova_compute[230518]: 2025-10-02 13:10:15.297 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppzdq4t_x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:15 np0005466030 nova_compute[230518]: 2025-10-02 13:10:15.341 2 DEBUG oslo_concurrency.lockutils [req-0e47e726-765b-4dcf-b992-ff434624593f req-682ea374-4489-4c9c-a341-61f1a182413f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:10:15 np0005466030 nova_compute[230518]: 2025-10-02 13:10:15.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:15 np0005466030 nova_compute[230518]: 2025-10-02 13:10:15.447 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppzdq4t_x" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:15 np0005466030 nova_compute[230518]: 2025-10-02 13:10:15.479 2 DEBUG nova.storage.rbd_utils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:10:15 np0005466030 nova_compute[230518]: 2025-10-02 13:10:15.485 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c/disk.config 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:15 np0005466030 nova_compute[230518]: 2025-10-02 13:10:15.711 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c/disk.config 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:15 np0005466030 nova_compute[230518]: 2025-10-02 13:10:15.714 2 INFO nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Deleting local config drive /var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c/disk.config because it was imported into RBD.#033[00m
Oct  2 09:10:15 np0005466030 kernel: tapfaebc160-66: entered promiscuous mode
Oct  2 09:10:15 np0005466030 NetworkManager[44960]: <info>  [1759410615.7750] manager: (tapfaebc160-66): new Tun device (/org/freedesktop/NetworkManager/Devices/381)
Oct  2 09:10:15 np0005466030 systemd-udevd[306666]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:10:15 np0005466030 ovn_controller[129257]: 2025-10-02T13:10:15Z|00819|binding|INFO|Claiming lport faebc160-66b6-4ba2-ab02-2b0098eef804 for this chassis.
Oct  2 09:10:15 np0005466030 ovn_controller[129257]: 2025-10-02T13:10:15Z|00820|binding|INFO|faebc160-66b6-4ba2-ab02-2b0098eef804: Claiming fa:16:3e:4f:46:ff 10.100.0.14
Oct  2 09:10:15 np0005466030 nova_compute[230518]: 2025-10-02 13:10:15.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:15 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.845 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:46:ff 10.100.0.14'], port_security=['fa:16:3e:4f:46:ff 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '28cee0c6-0008-45f8-af11-48abbbbcb22c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54a08602-f5b6-41e1-816c-2c122542a2b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2e682aa5-16aa-4884-9ae4-6ca813b9baae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1398b7fe-9cb8-4053-9d9c-0523007b5e96, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=faebc160-66b6-4ba2-ab02-2b0098eef804) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:10:15 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.846 138374 INFO neutron.agent.ovn.metadata.agent [-] Port faebc160-66b6-4ba2-ab02-2b0098eef804 in datapath 54a08602-f5b6-41e1-816c-2c122542a2b7 bound to our chassis#033[00m
Oct  2 09:10:15 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.847 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 54a08602-f5b6-41e1-816c-2c122542a2b7#033[00m
Oct  2 09:10:15 np0005466030 ovn_controller[129257]: 2025-10-02T13:10:15Z|00821|binding|INFO|Setting lport faebc160-66b6-4ba2-ab02-2b0098eef804 up in Southbound
Oct  2 09:10:15 np0005466030 ovn_controller[129257]: 2025-10-02T13:10:15Z|00822|binding|INFO|Setting lport faebc160-66b6-4ba2-ab02-2b0098eef804 ovn-installed in OVS
Oct  2 09:10:15 np0005466030 NetworkManager[44960]: <info>  [1759410615.8523] device (tapfaebc160-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:10:15 np0005466030 nova_compute[230518]: 2025-10-02 13:10:15.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:15 np0005466030 NetworkManager[44960]: <info>  [1759410615.8542] device (tapfaebc160-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:10:15 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.863 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[de6e7974-b7a2-4437-af20-3200e1a36792]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:15 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.864 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap54a08602-f1 in ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:10:15 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.866 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap54a08602-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:10:15 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.867 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c0da7f6b-5b00-49be-993a-71cdf2475869]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:15 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.867 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a13918e7-b8e3-4edf-b04c-0ea36e32f92c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:15 np0005466030 systemd-machined[188247]: New machine qemu-94-instance-000000ca.
Oct  2 09:10:15 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.885 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[ef137b37-d5ff-4b13-bf4d-515770914e85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:15 np0005466030 systemd[1]: Started Virtual Machine qemu-94-instance-000000ca.
Oct  2 09:10:15 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.912 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9f07b4e2-143c-4dcc-9dbd-2e79c286ca09]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:15 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.951 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[faca5c18-5261-49db-9b38-1a0850a90b97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:15.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:15 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.962 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1ce307-0894-4714-b39a-7ad5606298d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:15 np0005466030 NetworkManager[44960]: <info>  [1759410615.9639] manager: (tap54a08602-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/382)
Oct  2 09:10:15 np0005466030 systemd-udevd[306671]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.007 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[217aa86f-1e9b-499d-bf5b-53b66903b658]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.011 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7019cfc1-7f9d-4a45-9171-a1ea1a174f3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:16 np0005466030 NetworkManager[44960]: <info>  [1759410616.0416] device (tap54a08602-f0): carrier: link connected
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.053 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7956671f-da9b-422c-9f75-d00a1aa19ef3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.081 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0b3d71-9f5d-4d65-871f-46adcfa39c06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54a08602-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:cf:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 847959, 'reachable_time': 39519, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306704, 'error': None, 'target': 'ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.103 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b22375a8-a14a-49aa-ac58-54426b9477bd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:cf34'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 847959, 'tstamp': 847959}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306705, 'error': None, 'target': 'ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.126 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[999a5f7a-556b-4da0-a188-75bc1f37b8c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54a08602-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:cf:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 847959, 'reachable_time': 39519, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306706, 'error': None, 'target': 'ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:16.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.168 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[93700384-757f-4616-8692-816bb2841b3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.252 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c0c2fd-c4ef-4efe-8e1d-7dfbe4d73020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.254 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54a08602-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.254 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.255 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54a08602-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:16 np0005466030 nova_compute[230518]: 2025-10-02 13:10:16.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:16 np0005466030 NetworkManager[44960]: <info>  [1759410616.2587] manager: (tap54a08602-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/383)
Oct  2 09:10:16 np0005466030 kernel: tap54a08602-f0: entered promiscuous mode
Oct  2 09:10:16 np0005466030 nova_compute[230518]: 2025-10-02 13:10:16.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.268 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap54a08602-f0, col_values=(('external_ids', {'iface-id': '0e7b3164-07ea-4170-8a8f-05633e14550f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:16 np0005466030 nova_compute[230518]: 2025-10-02 13:10:16.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:16 np0005466030 ovn_controller[129257]: 2025-10-02T13:10:16Z|00823|binding|INFO|Releasing lport 0e7b3164-07ea-4170-8a8f-05633e14550f from this chassis (sb_readonly=0)
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.273 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/54a08602-f5b6-41e1-816c-2c122542a2b7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/54a08602-f5b6-41e1-816c-2c122542a2b7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.275 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[beaf4a4b-6db3-4f3f-8019-aaf022bac041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.276 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-54a08602-f5b6-41e1-816c-2c122542a2b7
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/54a08602-f5b6-41e1-816c-2c122542a2b7.pid.haproxy
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 54a08602-f5b6-41e1-816c-2c122542a2b7
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:10:16 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.277 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7', 'env', 'PROCESS_TAG=haproxy-54a08602-f5b6-41e1-816c-2c122542a2b7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/54a08602-f5b6-41e1-816c-2c122542a2b7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:10:16 np0005466030 nova_compute[230518]: 2025-10-02 13:10:16.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:16 np0005466030 podman[306780]: 2025-10-02 13:10:16.716304575 +0000 UTC m=+0.066323183 container create 13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 09:10:16 np0005466030 podman[306780]: 2025-10-02 13:10:16.67665222 +0000 UTC m=+0.026670848 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:10:16 np0005466030 systemd[1]: Started libpod-conmon-13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410.scope.
Oct  2 09:10:16 np0005466030 systemd[1]: Started libcrun container.
Oct  2 09:10:16 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e5b4910d155fef2dcdcf0c757ef1bff4d71c46ecff04442d7d49e1a585de238/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:10:16 np0005466030 podman[306780]: 2025-10-02 13:10:16.851522661 +0000 UTC m=+0.201541279 container init 13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:10:16 np0005466030 podman[306780]: 2025-10-02 13:10:16.86134716 +0000 UTC m=+0.211365788 container start 13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  2 09:10:16 np0005466030 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[306795]: [NOTICE]   (306799) : New worker (306801) forked
Oct  2 09:10:16 np0005466030 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[306795]: [NOTICE]   (306799) : Loading success.
Oct  2 09:10:17 np0005466030 nova_compute[230518]: 2025-10-02 13:10:17.055 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410617.0543778, 28cee0c6-0008-45f8-af11-48abbbbcb22c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:10:17 np0005466030 nova_compute[230518]: 2025-10-02 13:10:17.055 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] VM Started (Lifecycle Event)#033[00m
Oct  2 09:10:17 np0005466030 nova_compute[230518]: 2025-10-02 13:10:17.079 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:10:17 np0005466030 nova_compute[230518]: 2025-10-02 13:10:17.083 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410617.05478, 28cee0c6-0008-45f8-af11-48abbbbcb22c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:10:17 np0005466030 nova_compute[230518]: 2025-10-02 13:10:17.084 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:10:17 np0005466030 nova_compute[230518]: 2025-10-02 13:10:17.103 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:10:17 np0005466030 nova_compute[230518]: 2025-10-02 13:10:17.109 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:10:17 np0005466030 nova_compute[230518]: 2025-10-02 13:10:17.128 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:10:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:17.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:18.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.475 2 DEBUG nova.compute.manager [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received event network-vif-plugged-faebc160-66b6-4ba2-ab02-2b0098eef804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.475 2 DEBUG oslo_concurrency.lockutils [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.476 2 DEBUG oslo_concurrency.lockutils [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.476 2 DEBUG oslo_concurrency.lockutils [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.477 2 DEBUG nova.compute.manager [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Processing event network-vif-plugged-faebc160-66b6-4ba2-ab02-2b0098eef804 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.477 2 DEBUG nova.compute.manager [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received event network-vif-plugged-faebc160-66b6-4ba2-ab02-2b0098eef804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.477 2 DEBUG oslo_concurrency.lockutils [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.478 2 DEBUG oslo_concurrency.lockutils [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.478 2 DEBUG oslo_concurrency.lockutils [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.478 2 DEBUG nova.compute.manager [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] No waiting events found dispatching network-vif-plugged-faebc160-66b6-4ba2-ab02-2b0098eef804 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.479 2 WARNING nova.compute.manager [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received unexpected event network-vif-plugged-faebc160-66b6-4ba2-ab02-2b0098eef804 for instance with vm_state building and task_state spawning.#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.480 2 DEBUG nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.485 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.487 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410618.4853578, 28cee0c6-0008-45f8-af11-48abbbbcb22c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.487 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.492 2 INFO nova.virt.libvirt.driver [-] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Instance spawned successfully.#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.493 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.521 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.527 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.532 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.533 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.533 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.534 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.534 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.535 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.558 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.604 2 INFO nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Took 8.95 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.604 2 DEBUG nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.678 2 INFO nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Took 10.02 seconds to build instance.#033[00m
Oct  2 09:10:18 np0005466030 nova_compute[230518]: 2025-10-02 13:10:18.700 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:19 np0005466030 nova_compute[230518]: 2025-10-02 13:10:19.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct  2 09:10:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:19.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct  2 09:10:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:20.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:20 np0005466030 nova_compute[230518]: 2025-10-02 13:10:20.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e366 e366: 3 total, 3 up, 3 in
Oct  2 09:10:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:21.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:22.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:23.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:24 np0005466030 nova_compute[230518]: 2025-10-02 13:10:24.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:24 np0005466030 nova_compute[230518]: 2025-10-02 13:10:24.079 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:24 np0005466030 nova_compute[230518]: 2025-10-02 13:10:24.079 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:24 np0005466030 nova_compute[230518]: 2025-10-02 13:10:24.080 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:24 np0005466030 nova_compute[230518]: 2025-10-02 13:10:24.080 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:10:24 np0005466030 nova_compute[230518]: 2025-10-02 13:10:24.080 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:24.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:24 np0005466030 nova_compute[230518]: 2025-10-02 13:10:24.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:24 np0005466030 nova_compute[230518]: 2025-10-02 13:10:24.619 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:24 np0005466030 nova_compute[230518]: 2025-10-02 13:10:24.819 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:24 np0005466030 nova_compute[230518]: 2025-10-02 13:10:24.821 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:24 np0005466030 nova_compute[230518]: 2025-10-02 13:10:24.828 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:24 np0005466030 nova_compute[230518]: 2025-10-02 13:10:24.828 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:24 np0005466030 nova_compute[230518]: 2025-10-02 13:10:24.829 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:24 np0005466030 nova_compute[230518]: 2025-10-02 13:10:24.832 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000ca as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:24 np0005466030 nova_compute[230518]: 2025-10-02 13:10:24.832 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000ca as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.035 2 DEBUG nova.compute.manager [req-8f0a9117-9d5d-4e50-99fc-e17eb734ba82 req-c6c7bd9c-b965-417f-991d-0e2eb01634d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received event network-changed-faebc160-66b6-4ba2-ab02-2b0098eef804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.035 2 DEBUG nova.compute.manager [req-8f0a9117-9d5d-4e50-99fc-e17eb734ba82 req-c6c7bd9c-b965-417f-991d-0e2eb01634d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Refreshing instance network info cache due to event network-changed-faebc160-66b6-4ba2-ab02-2b0098eef804. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.036 2 DEBUG oslo_concurrency.lockutils [req-8f0a9117-9d5d-4e50-99fc-e17eb734ba82 req-c6c7bd9c-b965-417f-991d-0e2eb01634d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.036 2 DEBUG oslo_concurrency.lockutils [req-8f0a9117-9d5d-4e50-99fc-e17eb734ba82 req-c6c7bd9c-b965-417f-991d-0e2eb01634d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.036 2 DEBUG nova.network.neutron [req-8f0a9117-9d5d-4e50-99fc-e17eb734ba82 req-c6c7bd9c-b965-417f-991d-0e2eb01634d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Refreshing network info cache for port faebc160-66b6-4ba2-ab02-2b0098eef804 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.084 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.085 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3666MB free_disk=20.739215850830078GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.085 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.086 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.247 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 658821a7-5b97-43ad-8fe2-46e5303cf56c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.247 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance de995ad8-07bb-4097-899b-5c79d62a1f4c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.247 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 28cee0c6-0008-45f8-af11-48abbbbcb22c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.248 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.248 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.343 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:10:25 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2201128665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.814 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.824 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.847 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:10:25 np0005466030 podman[306853]: 2025-10-02 13:10:25.852774412 +0000 UTC m=+0.095156349 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:10:25 np0005466030 podman[306854]: 2025-10-02 13:10:25.861086722 +0000 UTC m=+0.091033779 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.877 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:10:25 np0005466030 nova_compute[230518]: 2025-10-02 13:10:25.879 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:25.965 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:25.966 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:25.967 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:25.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:26.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:27 np0005466030 nova_compute[230518]: 2025-10-02 13:10:27.333 2 DEBUG nova.compute.manager [req-b27eb3c0-1a8a-4687-83d1-24cd8b9e01c6 req-2909b793-42d9-4e95-87b5-11fea90b9752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received event network-changed-faebc160-66b6-4ba2-ab02-2b0098eef804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:27 np0005466030 nova_compute[230518]: 2025-10-02 13:10:27.334 2 DEBUG nova.compute.manager [req-b27eb3c0-1a8a-4687-83d1-24cd8b9e01c6 req-2909b793-42d9-4e95-87b5-11fea90b9752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Refreshing instance network info cache due to event network-changed-faebc160-66b6-4ba2-ab02-2b0098eef804. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:10:27 np0005466030 nova_compute[230518]: 2025-10-02 13:10:27.334 2 DEBUG oslo_concurrency.lockutils [req-b27eb3c0-1a8a-4687-83d1-24cd8b9e01c6 req-2909b793-42d9-4e95-87b5-11fea90b9752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:10:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:27.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:28.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:29 np0005466030 nova_compute[230518]: 2025-10-02 13:10:29.107 2 DEBUG nova.network.neutron [req-8f0a9117-9d5d-4e50-99fc-e17eb734ba82 req-c6c7bd9c-b965-417f-991d-0e2eb01634d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Updated VIF entry in instance network info cache for port faebc160-66b6-4ba2-ab02-2b0098eef804. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:10:29 np0005466030 nova_compute[230518]: 2025-10-02 13:10:29.108 2 DEBUG nova.network.neutron [req-8f0a9117-9d5d-4e50-99fc-e17eb734ba82 req-c6c7bd9c-b965-417f-991d-0e2eb01634d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Updating instance_info_cache with network_info: [{"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:10:29 np0005466030 nova_compute[230518]: 2025-10-02 13:10:29.142 2 DEBUG oslo_concurrency.lockutils [req-8f0a9117-9d5d-4e50-99fc-e17eb734ba82 req-c6c7bd9c-b965-417f-991d-0e2eb01634d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:10:29 np0005466030 nova_compute[230518]: 2025-10-02 13:10:29.143 2 DEBUG oslo_concurrency.lockutils [req-b27eb3c0-1a8a-4687-83d1-24cd8b9e01c6 req-2909b793-42d9-4e95-87b5-11fea90b9752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:10:29 np0005466030 nova_compute[230518]: 2025-10-02 13:10:29.143 2 DEBUG nova.network.neutron [req-b27eb3c0-1a8a-4687-83d1-24cd8b9e01c6 req-2909b793-42d9-4e95-87b5-11fea90b9752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Refreshing network info cache for port faebc160-66b6-4ba2-ab02-2b0098eef804 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #145. Immutable memtables: 0.
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.396256) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 145
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629396316, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 886, "num_deletes": 251, "total_data_size": 1611903, "memory_usage": 1637544, "flush_reason": "Manual Compaction"}
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #146: started
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629488642, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 146, "file_size": 1052493, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70393, "largest_seqno": 71274, "table_properties": {"data_size": 1048384, "index_size": 1760, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9722, "raw_average_key_size": 19, "raw_value_size": 1039964, "raw_average_value_size": 2135, "num_data_blocks": 77, "num_entries": 487, "num_filter_entries": 487, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410575, "oldest_key_time": 1759410575, "file_creation_time": 1759410629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 92434 microseconds, and 3347 cpu microseconds.
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.488689) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #146: 1052493 bytes OK
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.488710) [db/memtable_list.cc:519] [default] Level-0 commit table #146 started
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.534992) [db/memtable_list.cc:722] [default] Level-0 commit table #146: memtable #1 done
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.535040) EVENT_LOG_v1 {"time_micros": 1759410629535030, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.535063) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1607322, prev total WAL file size 1607322, number of live WAL files 2.
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000142.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.535947) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [146(1027KB)], [144(10MB)]
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629536012, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [146], "files_L6": [144], "score": -1, "input_data_size": 12165146, "oldest_snapshot_seqno": -1}
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:29 np0005466030 nova_compute[230518]: 2025-10-02 13:10:29.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #147: 9048 keys, 10295783 bytes, temperature: kUnknown
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629631540, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 147, "file_size": 10295783, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10239298, "index_size": 32756, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22661, "raw_key_size": 239986, "raw_average_key_size": 26, "raw_value_size": 10082534, "raw_average_value_size": 1114, "num_data_blocks": 1237, "num_entries": 9048, "num_filter_entries": 9048, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 147, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.631759) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 10295783 bytes
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.636383) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.3 rd, 107.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.6 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(21.3) write-amplify(9.8) OK, records in: 9568, records dropped: 520 output_compression: NoCompression
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.636426) EVENT_LOG_v1 {"time_micros": 1759410629636411, "job": 92, "event": "compaction_finished", "compaction_time_micros": 95595, "compaction_time_cpu_micros": 23989, "output_level": 6, "num_output_files": 1, "total_output_size": 10295783, "num_input_records": 9568, "num_output_records": 9048, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629636828, "job": 92, "event": "table_file_deletion", "file_number": 146}
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000144.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629639093, "job": 92, "event": "table_file_deletion", "file_number": 144}
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.535807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.639267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.639308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.639314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.639318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:10:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.639322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:10:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:29.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:30.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:30 np0005466030 nova_compute[230518]: 2025-10-02 13:10:30.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:30 np0005466030 nova_compute[230518]: 2025-10-02 13:10:30.875 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:30 np0005466030 nova_compute[230518]: 2025-10-02 13:10:30.876 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:30 np0005466030 nova_compute[230518]: 2025-10-02 13:10:30.876 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:30 np0005466030 nova_compute[230518]: 2025-10-02 13:10:30.876 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:30 np0005466030 nova_compute[230518]: 2025-10-02 13:10:30.876 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:10:31 np0005466030 nova_compute[230518]: 2025-10-02 13:10:31.055 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:31 np0005466030 nova_compute[230518]: 2025-10-02 13:10:31.486 2 DEBUG nova.network.neutron [req-b27eb3c0-1a8a-4687-83d1-24cd8b9e01c6 req-2909b793-42d9-4e95-87b5-11fea90b9752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Updated VIF entry in instance network info cache for port faebc160-66b6-4ba2-ab02-2b0098eef804. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:10:31 np0005466030 nova_compute[230518]: 2025-10-02 13:10:31.487 2 DEBUG nova.network.neutron [req-b27eb3c0-1a8a-4687-83d1-24cd8b9e01c6 req-2909b793-42d9-4e95-87b5-11fea90b9752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Updating instance_info_cache with network_info: [{"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:10:31 np0005466030 nova_compute[230518]: 2025-10-02 13:10:31.532 2 DEBUG oslo_concurrency.lockutils [req-b27eb3c0-1a8a-4687-83d1-24cd8b9e01c6 req-2909b793-42d9-4e95-87b5-11fea90b9752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:10:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:31.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:32.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e367 e367: 3 total, 3 up, 3 in
Oct  2 09:10:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:10:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:33.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:10:34 np0005466030 nova_compute[230518]: 2025-10-02 13:10:34.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:34 np0005466030 nova_compute[230518]: 2025-10-02 13:10:34.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:10:34 np0005466030 nova_compute[230518]: 2025-10-02 13:10:34.086 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:10:34 np0005466030 nova_compute[230518]: 2025-10-02 13:10:34.087 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:34 np0005466030 nova_compute[230518]: 2025-10-02 13:10:34.087 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:34.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:34 np0005466030 ovn_controller[129257]: 2025-10-02T13:10:34Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4f:46:ff 10.100.0.14
Oct  2 09:10:34 np0005466030 ovn_controller[129257]: 2025-10-02T13:10:34Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4f:46:ff 10.100.0.14
Oct  2 09:10:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:34 np0005466030 nova_compute[230518]: 2025-10-02 13:10:34.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:35 np0005466030 nova_compute[230518]: 2025-10-02 13:10:35.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:35 np0005466030 podman[306900]: 2025-10-02 13:10:35.803100032 +0000 UTC m=+0.057435294 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:10:35 np0005466030 podman[306901]: 2025-10-02 13:10:35.807498981 +0000 UTC m=+0.059053195 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:10:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:35.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:36.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:37.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:38.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:39 np0005466030 nova_compute[230518]: 2025-10-02 13:10:39.127 2 DEBUG nova.compute.manager [req-74abd09d-bb1a-4e47-8fed-56a3a0f06cf1 req-dd87abc5-b197-417b-95ef-aeca9287ff06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-changed-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:39 np0005466030 nova_compute[230518]: 2025-10-02 13:10:39.129 2 DEBUG nova.compute.manager [req-74abd09d-bb1a-4e47-8fed-56a3a0f06cf1 req-dd87abc5-b197-417b-95ef-aeca9287ff06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing instance network info cache due to event network-changed-513c3d66-613d-4626-8ab0-58520113de32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:10:39 np0005466030 nova_compute[230518]: 2025-10-02 13:10:39.129 2 DEBUG oslo_concurrency.lockutils [req-74abd09d-bb1a-4e47-8fed-56a3a0f06cf1 req-dd87abc5-b197-417b-95ef-aeca9287ff06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:10:39 np0005466030 nova_compute[230518]: 2025-10-02 13:10:39.130 2 DEBUG oslo_concurrency.lockutils [req-74abd09d-bb1a-4e47-8fed-56a3a0f06cf1 req-dd87abc5-b197-417b-95ef-aeca9287ff06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:10:39 np0005466030 nova_compute[230518]: 2025-10-02 13:10:39.130 2 DEBUG nova.network.neutron [req-74abd09d-bb1a-4e47-8fed-56a3a0f06cf1 req-dd87abc5-b197-417b-95ef-aeca9287ff06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing network info cache for port 513c3d66-613d-4626-8ab0-58520113de32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:10:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:39 np0005466030 nova_compute[230518]: 2025-10-02 13:10:39.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:39.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.091 2 DEBUG oslo_concurrency.lockutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "28cee0c6-0008-45f8-af11-48abbbbcb22c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.093 2 DEBUG oslo_concurrency.lockutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.094 2 DEBUG oslo_concurrency.lockutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.095 2 DEBUG oslo_concurrency.lockutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.095 2 DEBUG oslo_concurrency.lockutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.098 2 INFO nova.compute.manager [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Terminating instance#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.100 2 DEBUG nova.compute.manager [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:10:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:10:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:40.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:10:40 np0005466030 kernel: tapfaebc160-66 (unregistering): left promiscuous mode
Oct  2 09:10:40 np0005466030 NetworkManager[44960]: <info>  [1759410640.2125] device (tapfaebc160-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:10:40 np0005466030 ovn_controller[129257]: 2025-10-02T13:10:40Z|00824|binding|INFO|Releasing lport faebc160-66b6-4ba2-ab02-2b0098eef804 from this chassis (sb_readonly=0)
Oct  2 09:10:40 np0005466030 ovn_controller[129257]: 2025-10-02T13:10:40Z|00825|binding|INFO|Setting lport faebc160-66b6-4ba2-ab02-2b0098eef804 down in Southbound
Oct  2 09:10:40 np0005466030 ovn_controller[129257]: 2025-10-02T13:10:40Z|00826|binding|INFO|Removing iface tapfaebc160-66 ovn-installed in OVS
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:40.272 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:46:ff 10.100.0.14', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '28cee0c6-0008-45f8-af11-48abbbbcb22c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54a08602-f5b6-41e1-816c-2c122542a2b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1398b7fe-9cb8-4053-9d9c-0523007b5e96, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=faebc160-66b6-4ba2-ab02-2b0098eef804) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:10:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:40.273 138374 INFO neutron.agent.ovn.metadata.agent [-] Port faebc160-66b6-4ba2-ab02-2b0098eef804 in datapath 54a08602-f5b6-41e1-816c-2c122542a2b7 unbound from our chassis#033[00m
Oct  2 09:10:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:40.275 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 54a08602-f5b6-41e1-816c-2c122542a2b7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:40.277 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[26aa3fc2-1912-4e01-9ff7-842b72c7ed67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:40 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:40.278 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7 namespace which is not needed anymore#033[00m
Oct  2 09:10:40 np0005466030 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000ca.scope: Deactivated successfully.
Oct  2 09:10:40 np0005466030 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000ca.scope: Consumed 15.170s CPU time.
Oct  2 09:10:40 np0005466030 systemd-machined[188247]: Machine qemu-94-instance-000000ca terminated.
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.364 2 INFO nova.virt.libvirt.driver [-] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Instance destroyed successfully.#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.366 2 DEBUG nova.objects.instance [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'resources' on Instance uuid 28cee0c6-0008-45f8-af11-48abbbbcb22c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.388 2 DEBUG nova.virt.libvirt.vif [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-16464808',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-16464808',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ge',id=202,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHH1HinzjUD7Q3tXKDfndQMl/GmvwtyywSMW01vuBjE0ArFGpxG7DyhVBxJNWD31t8BOgD0+NBlvzrAymSVFz2iPnx4lrKVlC4HjLQHFgeB7PDLQzvsLeeffGrKOfE8BLQ==',key_name='tempest-TestSecurityGroupsBasicOps-874748035',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:10:18Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-x8b2athb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:10:18Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=28cee0c6-0008-45f8-af11-48abbbbcb22c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.389 2 DEBUG nova.network.os_vif_util [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.390 2 DEBUG nova.network.os_vif_util [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4f:46:ff,bridge_name='br-int',has_traffic_filtering=True,id=faebc160-66b6-4ba2-ab02-2b0098eef804,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaebc160-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.391 2 DEBUG os_vif [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:46:ff,bridge_name='br-int',has_traffic_filtering=True,id=faebc160-66b6-4ba2-ab02-2b0098eef804,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaebc160-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.395 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaebc160-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.408 2 INFO os_vif [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:46:ff,bridge_name='br-int',has_traffic_filtering=True,id=faebc160-66b6-4ba2-ab02-2b0098eef804,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaebc160-66')#033[00m
Oct  2 09:10:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e368 e368: 3 total, 3 up, 3 in
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.496 2 DEBUG nova.compute.manager [req-407b5d29-4ef2-4c5e-b2ab-91ae4d80e2b4 req-fde597ff-b7ff-4811-9a10-5f71aa09c2d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received event network-vif-unplugged-faebc160-66b6-4ba2-ab02-2b0098eef804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.496 2 DEBUG oslo_concurrency.lockutils [req-407b5d29-4ef2-4c5e-b2ab-91ae4d80e2b4 req-fde597ff-b7ff-4811-9a10-5f71aa09c2d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.497 2 DEBUG oslo_concurrency.lockutils [req-407b5d29-4ef2-4c5e-b2ab-91ae4d80e2b4 req-fde597ff-b7ff-4811-9a10-5f71aa09c2d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.497 2 DEBUG oslo_concurrency.lockutils [req-407b5d29-4ef2-4c5e-b2ab-91ae4d80e2b4 req-fde597ff-b7ff-4811-9a10-5f71aa09c2d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.497 2 DEBUG nova.compute.manager [req-407b5d29-4ef2-4c5e-b2ab-91ae4d80e2b4 req-fde597ff-b7ff-4811-9a10-5f71aa09c2d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] No waiting events found dispatching network-vif-unplugged-faebc160-66b6-4ba2-ab02-2b0098eef804 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.498 2 DEBUG nova.compute.manager [req-407b5d29-4ef2-4c5e-b2ab-91ae4d80e2b4 req-fde597ff-b7ff-4811-9a10-5f71aa09c2d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received event network-vif-unplugged-faebc160-66b6-4ba2-ab02-2b0098eef804 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:10:40 np0005466030 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[306795]: [NOTICE]   (306799) : haproxy version is 2.8.14-c23fe91
Oct  2 09:10:40 np0005466030 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[306795]: [NOTICE]   (306799) : path to executable is /usr/sbin/haproxy
Oct  2 09:10:40 np0005466030 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[306795]: [ALERT]    (306799) : Current worker (306801) exited with code 143 (Terminated)
Oct  2 09:10:40 np0005466030 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[306795]: [WARNING]  (306799) : All workers exited. Exiting... (0)
Oct  2 09:10:40 np0005466030 systemd[1]: libpod-13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410.scope: Deactivated successfully.
Oct  2 09:10:40 np0005466030 podman[306972]: 2025-10-02 13:10:40.543522808 +0000 UTC m=+0.111479961 container died 13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.585 2 DEBUG nova.network.neutron [req-74abd09d-bb1a-4e47-8fed-56a3a0f06cf1 req-dd87abc5-b197-417b-95ef-aeca9287ff06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updated VIF entry in instance network info cache for port 513c3d66-613d-4626-8ab0-58520113de32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.586 2 DEBUG nova.network.neutron [req-74abd09d-bb1a-4e47-8fed-56a3a0f06cf1 req-dd87abc5-b197-417b-95ef-aeca9287ff06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating instance_info_cache with network_info: [{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:10:40 np0005466030 nova_compute[230518]: 2025-10-02 13:10:40.612 2 DEBUG oslo_concurrency.lockutils [req-74abd09d-bb1a-4e47-8fed-56a3a0f06cf1 req-dd87abc5-b197-417b-95ef-aeca9287ff06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:10:40 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410-userdata-shm.mount: Deactivated successfully.
Oct  2 09:10:40 np0005466030 systemd[1]: var-lib-containers-storage-overlay-8e5b4910d155fef2dcdcf0c757ef1bff4d71c46ecff04442d7d49e1a585de238-merged.mount: Deactivated successfully.
Oct  2 09:10:41 np0005466030 podman[306972]: 2025-10-02 13:10:41.022070016 +0000 UTC m=+0.590027139 container cleanup 13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 09:10:41 np0005466030 systemd[1]: libpod-conmon-13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410.scope: Deactivated successfully.
Oct  2 09:10:41 np0005466030 nova_compute[230518]: 2025-10-02 13:10:41.082 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:10:41 np0005466030 podman[307021]: 2025-10-02 13:10:41.151312913 +0000 UTC m=+0.095457558 container remove 13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:10:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:41.165 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[945224bc-f37e-4db3-aa38-8bee7f326e3f]: (4, ('Thu Oct  2 01:10:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7 (13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410)\n13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410\nThu Oct  2 01:10:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7 (13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410)\n13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:41.169 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dc974a55-e174-4258-8024-0cdd076d3824]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:41.170 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54a08602-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:41 np0005466030 nova_compute[230518]: 2025-10-02 13:10:41.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:41 np0005466030 kernel: tap54a08602-f0: left promiscuous mode
Oct  2 09:10:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:41.179 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6d778515-1b0e-4802-a240-57550252120b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:41 np0005466030 nova_compute[230518]: 2025-10-02 13:10:41.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:41.211 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4066344c-9c3f-433a-9122-7e15bbba0f85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:41.213 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[53eb02d5-e615-44d8-b36b-dbdbc6490dbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:41.231 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[99b7fb87-2042-452c-be30-b9443f4a4e20]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 847950, 'reachable_time': 40637, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307037, 'error': None, 'target': 'ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:41 np0005466030 systemd[1]: run-netns-ovnmeta\x2d54a08602\x2df5b6\x2d41e1\x2d816c\x2d2c122542a2b7.mount: Deactivated successfully.
Oct  2 09:10:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:41.236 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:10:41 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:41.236 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[ba17e448-94be-4c3c-8912-1548599711f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:41.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:42.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:42 np0005466030 nova_compute[230518]: 2025-10-02 13:10:42.576 2 DEBUG nova.compute.manager [req-5c4251bb-4293-4847-b3ca-c0406fe8008e req-b8c25db7-0a85-440b-bffe-b7fab653da06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received event network-vif-plugged-faebc160-66b6-4ba2-ab02-2b0098eef804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:42 np0005466030 nova_compute[230518]: 2025-10-02 13:10:42.577 2 DEBUG oslo_concurrency.lockutils [req-5c4251bb-4293-4847-b3ca-c0406fe8008e req-b8c25db7-0a85-440b-bffe-b7fab653da06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:42 np0005466030 nova_compute[230518]: 2025-10-02 13:10:42.577 2 DEBUG oslo_concurrency.lockutils [req-5c4251bb-4293-4847-b3ca-c0406fe8008e req-b8c25db7-0a85-440b-bffe-b7fab653da06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:42 np0005466030 nova_compute[230518]: 2025-10-02 13:10:42.577 2 DEBUG oslo_concurrency.lockutils [req-5c4251bb-4293-4847-b3ca-c0406fe8008e req-b8c25db7-0a85-440b-bffe-b7fab653da06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:42 np0005466030 nova_compute[230518]: 2025-10-02 13:10:42.578 2 DEBUG nova.compute.manager [req-5c4251bb-4293-4847-b3ca-c0406fe8008e req-b8c25db7-0a85-440b-bffe-b7fab653da06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] No waiting events found dispatching network-vif-plugged-faebc160-66b6-4ba2-ab02-2b0098eef804 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:10:42 np0005466030 nova_compute[230518]: 2025-10-02 13:10:42.578 2 WARNING nova.compute.manager [req-5c4251bb-4293-4847-b3ca-c0406fe8008e req-b8c25db7-0a85-440b-bffe-b7fab653da06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received unexpected event network-vif-plugged-faebc160-66b6-4ba2-ab02-2b0098eef804 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:10:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:44.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:44 np0005466030 nova_compute[230518]: 2025-10-02 13:10:44.133 2 INFO nova.virt.libvirt.driver [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Deleting instance files /var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c_del#033[00m
Oct  2 09:10:44 np0005466030 nova_compute[230518]: 2025-10-02 13:10:44.134 2 INFO nova.virt.libvirt.driver [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Deletion of /var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c_del complete#033[00m
Oct  2 09:10:44 np0005466030 nova_compute[230518]: 2025-10-02 13:10:44.188 2 INFO nova.compute.manager [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Took 4.09 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:10:44 np0005466030 nova_compute[230518]: 2025-10-02 13:10:44.189 2 DEBUG oslo.service.loopingcall [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:10:44 np0005466030 nova_compute[230518]: 2025-10-02 13:10:44.189 2 DEBUG nova.compute.manager [-] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:10:44 np0005466030 nova_compute[230518]: 2025-10-02 13:10:44.189 2 DEBUG nova.network.neutron [-] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:10:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:44.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:44 np0005466030 nova_compute[230518]: 2025-10-02 13:10:44.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:44.752 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:10:44 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:44.754 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:10:44 np0005466030 nova_compute[230518]: 2025-10-02 13:10:44.777 2 DEBUG nova.network.neutron [-] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:10:44 np0005466030 nova_compute[230518]: 2025-10-02 13:10:44.796 2 INFO nova.compute.manager [-] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Took 0.61 seconds to deallocate network for instance.#033[00m
Oct  2 09:10:44 np0005466030 nova_compute[230518]: 2025-10-02 13:10:44.852 2 DEBUG oslo_concurrency.lockutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:44 np0005466030 nova_compute[230518]: 2025-10-02 13:10:44.853 2 DEBUG oslo_concurrency.lockutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:44 np0005466030 nova_compute[230518]: 2025-10-02 13:10:44.893 2 DEBUG nova.compute.manager [req-91d0db88-f5b5-4ef2-8614-a1c3dae66d0a req-2768e154-3325-4792-b6dd-9da6db284ca0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received event network-vif-deleted-faebc160-66b6-4ba2-ab02-2b0098eef804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:44 np0005466030 nova_compute[230518]: 2025-10-02 13:10:44.935 2 DEBUG oslo_concurrency.processutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:10:45 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1341714694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:10:45 np0005466030 nova_compute[230518]: 2025-10-02 13:10:45.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:45 np0005466030 nova_compute[230518]: 2025-10-02 13:10:45.422 2 DEBUG oslo_concurrency.processutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:45 np0005466030 nova_compute[230518]: 2025-10-02 13:10:45.430 2 DEBUG nova.compute.provider_tree [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:10:45 np0005466030 nova_compute[230518]: 2025-10-02 13:10:45.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:45 np0005466030 nova_compute[230518]: 2025-10-02 13:10:45.446 2 DEBUG nova.scheduler.client.report [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:10:45 np0005466030 nova_compute[230518]: 2025-10-02 13:10:45.474 2 DEBUG oslo_concurrency.lockutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:45 np0005466030 nova_compute[230518]: 2025-10-02 13:10:45.522 2 INFO nova.scheduler.client.report [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Deleted allocations for instance 28cee0c6-0008-45f8-af11-48abbbbcb22c#033[00m
Oct  2 09:10:45 np0005466030 nova_compute[230518]: 2025-10-02 13:10:45.628 2 DEBUG oslo_concurrency.lockutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:46.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:46.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:48.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:48.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:49 np0005466030 nova_compute[230518]: 2025-10-02 13:10:49.713 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:10:49 np0005466030 nova_compute[230518]: 2025-10-02 13:10:49.714 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquired lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:10:49 np0005466030 nova_compute[230518]: 2025-10-02 13:10:49.714 2 DEBUG nova.network.neutron [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:10:49 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:49.756 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:50.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:50 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:10:50 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:10:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:50.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:50 np0005466030 nova_compute[230518]: 2025-10-02 13:10:50.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:50 np0005466030 nova_compute[230518]: 2025-10-02 13:10:50.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:50 np0005466030 nova_compute[230518]: 2025-10-02 13:10:50.790 2 DEBUG nova.network.neutron [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating instance_info_cache with network_info: [{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:10:50 np0005466030 nova_compute[230518]: 2025-10-02 13:10:50.818 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Releasing lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:10:50 np0005466030 nova_compute[230518]: 2025-10-02 13:10:50.925 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Oct  2 09:10:50 np0005466030 nova_compute[230518]: 2025-10-02 13:10:50.926 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Creating file /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/3985543ce32c4737bb5d5a8c7849ea66.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Oct  2 09:10:50 np0005466030 nova_compute[230518]: 2025-10-02 13:10:50.927 2 DEBUG oslo_concurrency.processutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/3985543ce32c4737bb5d5a8c7849ea66.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 09:10:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 09:10:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 09:10:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:10:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:10:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:10:51 np0005466030 nova_compute[230518]: 2025-10-02 13:10:51.379 2 DEBUG oslo_concurrency.processutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/3985543ce32c4737bb5d5a8c7849ea66.tmp" returned: 1 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:51 np0005466030 nova_compute[230518]: 2025-10-02 13:10:51.381 2 DEBUG oslo_concurrency.processutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/3985543ce32c4737bb5d5a8c7849ea66.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Oct  2 09:10:51 np0005466030 nova_compute[230518]: 2025-10-02 13:10:51.382 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Creating directory /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Oct  2 09:10:51 np0005466030 nova_compute[230518]: 2025-10-02 13:10:51.383 2 DEBUG oslo_concurrency.processutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:10:51 np0005466030 nova_compute[230518]: 2025-10-02 13:10:51.634 2 DEBUG oslo_concurrency.processutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:10:51 np0005466030 nova_compute[230518]: 2025-10-02 13:10:51.642 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  2 09:10:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:52.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:52.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:10:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:54.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:10:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:54.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:54 np0005466030 kernel: tap513c3d66-61 (unregistering): left promiscuous mode
Oct  2 09:10:54 np0005466030 NetworkManager[44960]: <info>  [1759410654.2637] device (tap513c3d66-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:10:54 np0005466030 ovn_controller[129257]: 2025-10-02T13:10:54Z|00827|binding|INFO|Releasing lport 513c3d66-613d-4626-8ab0-58520113de32 from this chassis (sb_readonly=0)
Oct  2 09:10:54 np0005466030 ovn_controller[129257]: 2025-10-02T13:10:54Z|00828|binding|INFO|Setting lport 513c3d66-613d-4626-8ab0-58520113de32 down in Southbound
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:54 np0005466030 ovn_controller[129257]: 2025-10-02T13:10:54Z|00829|binding|INFO|Removing iface tap513c3d66-61 ovn-installed in OVS
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.285 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:bc:4e 10.100.0.4'], port_security=['fa:16:3e:9a:bc:4e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'de995ad8-07bb-4097-899b-5c79d62a1f4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c95f312a-09a8-4e2c-af55-3ef0a0e41bfc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=513c3d66-613d-4626-8ab0-58520113de32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:10:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.286 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 513c3d66-613d-4626-8ab0-58520113de32 in datapath d9001b9c-bca6-4085-a954-1414269e31bc unbound from our chassis#033[00m
Oct  2 09:10:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.288 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9001b9c-bca6-4085-a954-1414269e31bc#033[00m
Oct  2 09:10:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.305 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8ddeecc3-05c4-404e-ab79-98bc07c62bf6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.343 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[276b5c33-7d9b-4f12-945d-df2123afbb07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.345 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc1e8c0-cd2b-44cb-802f-22210ed46598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:54 np0005466030 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000c8.scope: Deactivated successfully.
Oct  2 09:10:54 np0005466030 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000c8.scope: Consumed 16.058s CPU time.
Oct  2 09:10:54 np0005466030 systemd-machined[188247]: Machine qemu-93-instance-000000c8 terminated.
Oct  2 09:10:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.377 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0314a062-6536-445d-85e3-0529eaae0dd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.398 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[31573ae3-e3d1-4b35-8520-54908d263a51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840857, 'reachable_time': 19345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307206, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.423 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4c147a-cbb8-48bf-9046-fe06ff4873b7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 840868, 'tstamp': 840868}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307207, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 840870, 'tstamp': 840870}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307207, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:10:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.427 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.435 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9001b9c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.435 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:10:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.436 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9001b9c-b0, col_values=(('external_ids', {'iface-id': 'aa788301-8c47-4421-b693-3b37cb064ae2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.436 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.558 2 DEBUG nova.compute.manager [req-db88738f-76b8-4d09-b9e6-e2bff0779f1e req-9d02cf48-8ce2-43d2-9418-5613ba8fcaf6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-unplugged-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.558 2 DEBUG oslo_concurrency.lockutils [req-db88738f-76b8-4d09-b9e6-e2bff0779f1e req-9d02cf48-8ce2-43d2-9418-5613ba8fcaf6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.559 2 DEBUG oslo_concurrency.lockutils [req-db88738f-76b8-4d09-b9e6-e2bff0779f1e req-9d02cf48-8ce2-43d2-9418-5613ba8fcaf6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.559 2 DEBUG oslo_concurrency.lockutils [req-db88738f-76b8-4d09-b9e6-e2bff0779f1e req-9d02cf48-8ce2-43d2-9418-5613ba8fcaf6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.559 2 DEBUG nova.compute.manager [req-db88738f-76b8-4d09-b9e6-e2bff0779f1e req-9d02cf48-8ce2-43d2-9418-5613ba8fcaf6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] No waiting events found dispatching network-vif-unplugged-513c3d66-613d-4626-8ab0-58520113de32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.560 2 WARNING nova.compute.manager [req-db88738f-76b8-4d09-b9e6-e2bff0779f1e req-9d02cf48-8ce2-43d2-9418-5613ba8fcaf6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received unexpected event network-vif-unplugged-513c3d66-613d-4626-8ab0-58520113de32 for instance with vm_state active and task_state resize_migrating.#033[00m
Oct  2 09:10:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.662 2 INFO nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Instance shutdown successfully after 3 seconds.#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.668 2 INFO nova.virt.libvirt.driver [-] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Instance destroyed successfully.#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.669 2 DEBUG nova.virt.libvirt.vif [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:09:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=200,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:09:56Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-5qy16z2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_
hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:10:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=de995ad8-07bb-4097-899b-5c79d62a1f4c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "vif_mac": "fa:16:3e:9a:bc:4e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.670 2 DEBUG nova.network.os_vif_util [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "vif_mac": "fa:16:3e:9a:bc:4e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.671 2 DEBUG nova.network.os_vif_util [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.672 2 DEBUG os_vif [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.677 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap513c3d66-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.685 2 INFO os_vif [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61')#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.948 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.948 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:54 np0005466030 nova_compute[230518]: 2025-10-02 13:10:54.949 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:10:55 np0005466030 nova_compute[230518]: 2025-10-02 13:10:55.363 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410640.361377, 28cee0c6-0008-45f8-af11-48abbbbcb22c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:10:55 np0005466030 nova_compute[230518]: 2025-10-02 13:10:55.363 2 INFO nova.compute.manager [-] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:10:55 np0005466030 nova_compute[230518]: 2025-10-02 13:10:55.392 2 DEBUG nova.compute.manager [None req-e633623c-6c61-4db5-99d8-6d93a2c30ef8 - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:10:55 np0005466030 nova_compute[230518]: 2025-10-02 13:10:55.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:55 np0005466030 nova_compute[230518]: 2025-10-02 13:10:55.831 2 DEBUG neutronclient.v2_0.client [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 513c3d66-613d-4626-8ab0-58520113de32 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 09:10:55 np0005466030 nova_compute[230518]: 2025-10-02 13:10:55.976 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:55 np0005466030 nova_compute[230518]: 2025-10-02 13:10:55.976 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:55 np0005466030 nova_compute[230518]: 2025-10-02 13:10:55.977 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:10:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:56.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:10:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:56.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:56 np0005466030 nova_compute[230518]: 2025-10-02 13:10:56.711 2 DEBUG nova.compute.manager [req-0a779f8e-08c6-49c3-8a67-d9e7c3538ef6 req-7fc280d9-c53b-4f3a-9721-538583307b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:56 np0005466030 nova_compute[230518]: 2025-10-02 13:10:56.712 2 DEBUG oslo_concurrency.lockutils [req-0a779f8e-08c6-49c3-8a67-d9e7c3538ef6 req-7fc280d9-c53b-4f3a-9721-538583307b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:10:56 np0005466030 nova_compute[230518]: 2025-10-02 13:10:56.713 2 DEBUG oslo_concurrency.lockutils [req-0a779f8e-08c6-49c3-8a67-d9e7c3538ef6 req-7fc280d9-c53b-4f3a-9721-538583307b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:10:56 np0005466030 nova_compute[230518]: 2025-10-02 13:10:56.714 2 DEBUG oslo_concurrency.lockutils [req-0a779f8e-08c6-49c3-8a67-d9e7c3538ef6 req-7fc280d9-c53b-4f3a-9721-538583307b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:10:56 np0005466030 nova_compute[230518]: 2025-10-02 13:10:56.714 2 DEBUG nova.compute.manager [req-0a779f8e-08c6-49c3-8a67-d9e7c3538ef6 req-7fc280d9-c53b-4f3a-9721-538583307b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] No waiting events found dispatching network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:10:56 np0005466030 nova_compute[230518]: 2025-10-02 13:10:56.714 2 WARNING nova.compute.manager [req-0a779f8e-08c6-49c3-8a67-d9e7c3538ef6 req-7fc280d9-c53b-4f3a-9721-538583307b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received unexpected event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 for instance with vm_state active and task_state resize_migrated.#033[00m
Oct  2 09:10:56 np0005466030 podman[307221]: 2025-10-02 13:10:56.843161384 +0000 UTC m=+0.087005263 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:10:56 np0005466030 podman[307220]: 2025-10-02 13:10:56.895533428 +0000 UTC m=+0.139659027 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:10:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:58.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:58 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:10:58 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:10:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:10:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:10:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:58.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:10:59 np0005466030 nova_compute[230518]: 2025-10-02 13:10:59.405 2 DEBUG nova.compute.manager [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-changed-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:10:59 np0005466030 nova_compute[230518]: 2025-10-02 13:10:59.405 2 DEBUG nova.compute.manager [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing instance network info cache due to event network-changed-513c3d66-613d-4626-8ab0-58520113de32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:10:59 np0005466030 nova_compute[230518]: 2025-10-02 13:10:59.405 2 DEBUG oslo_concurrency.lockutils [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:10:59 np0005466030 nova_compute[230518]: 2025-10-02 13:10:59.405 2 DEBUG oslo_concurrency.lockutils [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:10:59 np0005466030 nova_compute[230518]: 2025-10-02 13:10:59.405 2 DEBUG nova.network.neutron [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing network info cache for port 513c3d66-613d-4626-8ab0-58520113de32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:10:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:10:59 np0005466030 nova_compute[230518]: 2025-10-02 13:10:59.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:10:59 np0005466030 ovn_controller[129257]: 2025-10-02T13:10:59Z|00830|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct  2 09:10:59 np0005466030 nova_compute[230518]: 2025-10-02 13:10:59.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:00.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:00.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:00 np0005466030 nova_compute[230518]: 2025-10-02 13:11:00.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:02.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:02.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:02 np0005466030 nova_compute[230518]: 2025-10-02 13:11:02.300 2 DEBUG nova.network.neutron [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updated VIF entry in instance network info cache for port 513c3d66-613d-4626-8ab0-58520113de32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:11:02 np0005466030 nova_compute[230518]: 2025-10-02 13:11:02.302 2 DEBUG nova.network.neutron [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating instance_info_cache with network_info: [{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:11:02 np0005466030 nova_compute[230518]: 2025-10-02 13:11:02.329 2 DEBUG oslo_concurrency.lockutils [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:11:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:11:02 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1043214721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:11:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e369 e369: 3 total, 3 up, 3 in
Oct  2 09:11:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:04.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:04.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:04 np0005466030 nova_compute[230518]: 2025-10-02 13:11:04.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:04 np0005466030 nova_compute[230518]: 2025-10-02 13:11:04.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:11:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1601308669' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:11:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:11:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1601308669' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:11:05 np0005466030 nova_compute[230518]: 2025-10-02 13:11:05.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:06.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:06.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:06 np0005466030 podman[307314]: 2025-10-02 13:11:06.867905631 +0000 UTC m=+0.098165074 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:11:06 np0005466030 podman[307315]: 2025-10-02 13:11:06.887260078 +0000 UTC m=+0.116463498 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd)
Oct  2 09:11:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:08.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:08.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:08 np0005466030 nova_compute[230518]: 2025-10-02 13:11:08.463 2 DEBUG nova.compute.manager [req-23030715-0ff4-48a2-a697-bfa5e0088ae5 req-41c2e2ff-3ec2-4fb2-8eed-e008d6a95b4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:11:08 np0005466030 nova_compute[230518]: 2025-10-02 13:11:08.464 2 DEBUG oslo_concurrency.lockutils [req-23030715-0ff4-48a2-a697-bfa5e0088ae5 req-41c2e2ff-3ec2-4fb2-8eed-e008d6a95b4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:08 np0005466030 nova_compute[230518]: 2025-10-02 13:11:08.464 2 DEBUG oslo_concurrency.lockutils [req-23030715-0ff4-48a2-a697-bfa5e0088ae5 req-41c2e2ff-3ec2-4fb2-8eed-e008d6a95b4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:08 np0005466030 nova_compute[230518]: 2025-10-02 13:11:08.465 2 DEBUG oslo_concurrency.lockutils [req-23030715-0ff4-48a2-a697-bfa5e0088ae5 req-41c2e2ff-3ec2-4fb2-8eed-e008d6a95b4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:08 np0005466030 nova_compute[230518]: 2025-10-02 13:11:08.465 2 DEBUG nova.compute.manager [req-23030715-0ff4-48a2-a697-bfa5e0088ae5 req-41c2e2ff-3ec2-4fb2-8eed-e008d6a95b4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] No waiting events found dispatching network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:11:08 np0005466030 nova_compute[230518]: 2025-10-02 13:11:08.465 2 WARNING nova.compute.manager [req-23030715-0ff4-48a2-a697-bfa5e0088ae5 req-41c2e2ff-3ec2-4fb2-8eed-e008d6a95b4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received unexpected event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 for instance with vm_state active and task_state resize_finish.#033[00m
Oct  2 09:11:09 np0005466030 nova_compute[230518]: 2025-10-02 13:11:09.501 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410654.500055, de995ad8-07bb-4097-899b-5c79d62a1f4c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:11:09 np0005466030 nova_compute[230518]: 2025-10-02 13:11:09.501 2 INFO nova.compute.manager [-] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:11:09 np0005466030 nova_compute[230518]: 2025-10-02 13:11:09.542 2 DEBUG nova.compute.manager [None req-f49fe3bb-cf40-4167-b18e-f96ae61c2ee3 - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:11:09 np0005466030 nova_compute[230518]: 2025-10-02 13:11:09.550 2 DEBUG nova.compute.manager [None req-f49fe3bb-cf40-4167-b18e-f96ae61c2ee3 - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:11:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:09 np0005466030 nova_compute[230518]: 2025-10-02 13:11:09.598 2 INFO nova.compute.manager [None req-f49fe3bb-cf40-4167-b18e-f96ae61c2ee3 - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com#033[00m
Oct  2 09:11:09 np0005466030 nova_compute[230518]: 2025-10-02 13:11:09.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:10.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:10.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:10 np0005466030 nova_compute[230518]: 2025-10-02 13:11:10.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:10 np0005466030 nova_compute[230518]: 2025-10-02 13:11:10.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:10 np0005466030 nova_compute[230518]: 2025-10-02 13:11:10.636 2 DEBUG nova.compute.manager [req-eec82824-051b-42a0-9391-3ae2948f0c7f req-e0748017-2616-4c3c-b109-efcf44941e4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:11:10 np0005466030 nova_compute[230518]: 2025-10-02 13:11:10.637 2 DEBUG oslo_concurrency.lockutils [req-eec82824-051b-42a0-9391-3ae2948f0c7f req-e0748017-2616-4c3c-b109-efcf44941e4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:10 np0005466030 nova_compute[230518]: 2025-10-02 13:11:10.637 2 DEBUG oslo_concurrency.lockutils [req-eec82824-051b-42a0-9391-3ae2948f0c7f req-e0748017-2616-4c3c-b109-efcf44941e4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:10 np0005466030 nova_compute[230518]: 2025-10-02 13:11:10.638 2 DEBUG oslo_concurrency.lockutils [req-eec82824-051b-42a0-9391-3ae2948f0c7f req-e0748017-2616-4c3c-b109-efcf44941e4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:10 np0005466030 nova_compute[230518]: 2025-10-02 13:11:10.638 2 DEBUG nova.compute.manager [req-eec82824-051b-42a0-9391-3ae2948f0c7f req-e0748017-2616-4c3c-b109-efcf44941e4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] No waiting events found dispatching network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:11:10 np0005466030 nova_compute[230518]: 2025-10-02 13:11:10.638 2 WARNING nova.compute.manager [req-eec82824-051b-42a0-9391-3ae2948f0c7f req-e0748017-2616-4c3c-b109-efcf44941e4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received unexpected event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 for instance with vm_state resized and task_state None.#033[00m
Oct  2 09:11:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:12.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:12.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:12 np0005466030 nova_compute[230518]: 2025-10-02 13:11:12.834 2 DEBUG oslo_concurrency.lockutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:12 np0005466030 nova_compute[230518]: 2025-10-02 13:11:12.834 2 DEBUG oslo_concurrency.lockutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:12 np0005466030 nova_compute[230518]: 2025-10-02 13:11:12.835 2 DEBUG nova.compute.manager [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Going to confirm migration 24 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Oct  2 09:11:13 np0005466030 nova_compute[230518]: 2025-10-02 13:11:13.524 2 DEBUG neutronclient.v2_0.client [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 513c3d66-613d-4626-8ab0-58520113de32 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  2 09:11:13 np0005466030 nova_compute[230518]: 2025-10-02 13:11:13.525 2 DEBUG oslo_concurrency.lockutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:11:13 np0005466030 nova_compute[230518]: 2025-10-02 13:11:13.525 2 DEBUG oslo_concurrency.lockutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquired lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:11:13 np0005466030 nova_compute[230518]: 2025-10-02 13:11:13.525 2 DEBUG nova.network.neutron [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:11:13 np0005466030 nova_compute[230518]: 2025-10-02 13:11:13.525 2 DEBUG nova.objects.instance [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'info_cache' on Instance uuid de995ad8-07bb-4097-899b-5c79d62a1f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:11:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:14.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:14.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:14 np0005466030 nova_compute[230518]: 2025-10-02 13:11:14.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:15 np0005466030 nova_compute[230518]: 2025-10-02 13:11:15.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:16.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:16.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:16 np0005466030 nova_compute[230518]: 2025-10-02 13:11:16.311 2 DEBUG nova.network.neutron [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating instance_info_cache with network_info: [{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:11:16 np0005466030 nova_compute[230518]: 2025-10-02 13:11:16.440 2 DEBUG oslo_concurrency.lockutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Releasing lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:11:16 np0005466030 nova_compute[230518]: 2025-10-02 13:11:16.441 2 DEBUG nova.objects.instance [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'migration_context' on Instance uuid de995ad8-07bb-4097-899b-5c79d62a1f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:11:16 np0005466030 nova_compute[230518]: 2025-10-02 13:11:16.641 2 DEBUG nova.storage.rbd_utils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] removing snapshot(nova-resize) on rbd image(de995ad8-07bb-4097-899b-5c79d62a1f4c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  2 09:11:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:18.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:18.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e370 e370: 3 total, 3 up, 3 in
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.040 2 DEBUG nova.virt.libvirt.vif [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:09:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=200,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:11:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-5qy16z2b',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35
',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:11:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=de995ad8-07bb-4097-899b-5c79d62a1f4c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.041 2 DEBUG nova.network.os_vif_util [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.043 2 DEBUG nova.network.os_vif_util [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.046 2 DEBUG os_vif [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.049 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap513c3d66-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.053 2 INFO os_vif [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61')#033[00m
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.054 2 DEBUG oslo_concurrency.lockutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.055 2 DEBUG oslo_concurrency.lockutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.222 2 DEBUG nova.scheduler.client.report [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.229 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "5ecce258-097c-4a5a-9c44-087e8129ceaf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.229 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.263 2 DEBUG nova.scheduler.client.report [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.264 2 DEBUG nova.compute.provider_tree [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.274 2 DEBUG nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.293 2 DEBUG nova.scheduler.client.report [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.331 2 DEBUG nova.scheduler.client.report [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.426 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.508 2 DEBUG oslo_concurrency.processutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:19 np0005466030 nova_compute[230518]: 2025-10-02 13:11:19.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:11:20 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1781545665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:11:20 np0005466030 nova_compute[230518]: 2025-10-02 13:11:20.035 2 DEBUG oslo_concurrency.processutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:20 np0005466030 nova_compute[230518]: 2025-10-02 13:11:20.041 2 DEBUG nova.compute.provider_tree [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:11:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:20.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:20.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:20 np0005466030 nova_compute[230518]: 2025-10-02 13:11:20.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:21 np0005466030 nova_compute[230518]: 2025-10-02 13:11:21.709 2 DEBUG nova.scheduler.client.report [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:11:21 np0005466030 nova_compute[230518]: 2025-10-02 13:11:21.935 2 DEBUG oslo_concurrency.lockutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 2.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:21 np0005466030 nova_compute[230518]: 2025-10-02 13:11:21.939 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 2.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:21 np0005466030 nova_compute[230518]: 2025-10-02 13:11:21.969 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:11:21 np0005466030 nova_compute[230518]: 2025-10-02 13:11:21.969 2 INFO nova.compute.claims [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:11:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:22.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:22 np0005466030 nova_compute[230518]: 2025-10-02 13:11:22.141 2 INFO nova.scheduler.client.report [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Deleted allocation for migration 3176e491-7bce-457b-bb5e-0b1a709da887#033[00m
Oct  2 09:11:22 np0005466030 nova_compute[230518]: 2025-10-02 13:11:22.219 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:22 np0005466030 nova_compute[230518]: 2025-10-02 13:11:22.254 2 DEBUG oslo_concurrency.lockutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 9.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:22.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:11:22 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2051422908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:11:22 np0005466030 nova_compute[230518]: 2025-10-02 13:11:22.681 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:22 np0005466030 nova_compute[230518]: 2025-10-02 13:11:22.690 2 DEBUG nova.compute.provider_tree [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:11:22 np0005466030 nova_compute[230518]: 2025-10-02 13:11:22.711 2 DEBUG nova.scheduler.client.report [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:11:22 np0005466030 nova_compute[230518]: 2025-10-02 13:11:22.749 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:22 np0005466030 nova_compute[230518]: 2025-10-02 13:11:22.749 2 DEBUG nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:11:22 np0005466030 nova_compute[230518]: 2025-10-02 13:11:22.851 2 DEBUG nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:11:22 np0005466030 nova_compute[230518]: 2025-10-02 13:11:22.851 2 DEBUG nova.network.neutron [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:11:22 np0005466030 nova_compute[230518]: 2025-10-02 13:11:22.892 2 INFO nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:11:22 np0005466030 nova_compute[230518]: 2025-10-02 13:11:22.920 2 DEBUG nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:11:23 np0005466030 nova_compute[230518]: 2025-10-02 13:11:23.063 2 DEBUG nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:11:23 np0005466030 nova_compute[230518]: 2025-10-02 13:11:23.065 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:11:23 np0005466030 nova_compute[230518]: 2025-10-02 13:11:23.065 2 INFO nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Creating image(s)#033[00m
Oct  2 09:11:23 np0005466030 nova_compute[230518]: 2025-10-02 13:11:23.178 2 DEBUG nova.storage.rbd_utils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:11:23 np0005466030 nova_compute[230518]: 2025-10-02 13:11:23.230 2 DEBUG nova.storage.rbd_utils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:11:23 np0005466030 nova_compute[230518]: 2025-10-02 13:11:23.281 2 DEBUG nova.storage.rbd_utils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:11:23 np0005466030 nova_compute[230518]: 2025-10-02 13:11:23.288 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:23 np0005466030 nova_compute[230518]: 2025-10-02 13:11:23.364 2 DEBUG nova.policy [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '362b536431b64b15b67740060af57e9c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e911de934ec043d1bd942c8aed562d04', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:11:23 np0005466030 nova_compute[230518]: 2025-10-02 13:11:23.402 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:23 np0005466030 nova_compute[230518]: 2025-10-02 13:11:23.403 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:23 np0005466030 nova_compute[230518]: 2025-10-02 13:11:23.404 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:23 np0005466030 nova_compute[230518]: 2025-10-02 13:11:23.404 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:23 np0005466030 nova_compute[230518]: 2025-10-02 13:11:23.439 2 DEBUG nova.storage.rbd_utils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:11:23 np0005466030 nova_compute[230518]: 2025-10-02 13:11:23.446 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:24 np0005466030 nova_compute[230518]: 2025-10-02 13:11:24.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:24.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:24 np0005466030 nova_compute[230518]: 2025-10-02 13:11:24.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:24 np0005466030 nova_compute[230518]: 2025-10-02 13:11:24.083 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:24 np0005466030 nova_compute[230518]: 2025-10-02 13:11:24.083 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:24 np0005466030 nova_compute[230518]: 2025-10-02 13:11:24.083 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:11:24 np0005466030 nova_compute[230518]: 2025-10-02 13:11:24.084 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:24.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:11:24 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2097975945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:11:24 np0005466030 nova_compute[230518]: 2025-10-02 13:11:24.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:24 np0005466030 nova_compute[230518]: 2025-10-02 13:11:24.699 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:24 np0005466030 nova_compute[230518]: 2025-10-02 13:11:24.815 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:11:24 np0005466030 nova_compute[230518]: 2025-10-02 13:11:24.816 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:11:25 np0005466030 nova_compute[230518]: 2025-10-02 13:11:25.091 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:11:25 np0005466030 nova_compute[230518]: 2025-10-02 13:11:25.093 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3951MB free_disk=20.805503845214844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:11:25 np0005466030 nova_compute[230518]: 2025-10-02 13:11:25.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:25 np0005466030 nova_compute[230518]: 2025-10-02 13:11:25.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:25 np0005466030 nova_compute[230518]: 2025-10-02 13:11:25.231 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 658821a7-5b97-43ad-8fe2-46e5303cf56c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:11:25 np0005466030 nova_compute[230518]: 2025-10-02 13:11:25.231 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 5ecce258-097c-4a5a-9c44-087e8129ceaf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:11:25 np0005466030 nova_compute[230518]: 2025-10-02 13:11:25.231 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:11:25 np0005466030 nova_compute[230518]: 2025-10-02 13:11:25.232 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:11:25 np0005466030 nova_compute[230518]: 2025-10-02 13:11:25.236 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.789s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:25 np0005466030 nova_compute[230518]: 2025-10-02 13:11:25.293 2 DEBUG nova.network.neutron [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Successfully created port: 9fd04bf3-73c2-4224-81ff-32ef5640604b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:11:25 np0005466030 nova_compute[230518]: 2025-10-02 13:11:25.365 2 DEBUG nova.storage.rbd_utils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] resizing rbd image 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:11:25 np0005466030 nova_compute[230518]: 2025-10-02 13:11:25.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:25 np0005466030 nova_compute[230518]: 2025-10-02 13:11:25.733 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:25.967 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:25.968 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:25.969 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:26.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:26.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 e371: 3 total, 3 up, 3 in
Oct  2 09:11:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:11:26 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2413210923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:11:26 np0005466030 nova_compute[230518]: 2025-10-02 13:11:26.559 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.826s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:26 np0005466030 nova_compute[230518]: 2025-10-02 13:11:26.565 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:11:26 np0005466030 nova_compute[230518]: 2025-10-02 13:11:26.579 2 DEBUG nova.network.neutron [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Successfully updated port: 9fd04bf3-73c2-4224-81ff-32ef5640604b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:11:27 np0005466030 nova_compute[230518]: 2025-10-02 13:11:27.634 2 DEBUG nova.objects.instance [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'migration_context' on Instance uuid 5ecce258-097c-4a5a-9c44-087e8129ceaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:11:27 np0005466030 podman[307646]: 2025-10-02 13:11:27.833223253 +0000 UTC m=+0.081232802 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 09:11:27 np0005466030 podman[307645]: 2025-10-02 13:11:27.895751616 +0000 UTC m=+0.136018051 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:11:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:28.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:28.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:28 np0005466030 nova_compute[230518]: 2025-10-02 13:11:28.515 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:11:28 np0005466030 nova_compute[230518]: 2025-10-02 13:11:28.695 2 DEBUG nova.compute.manager [req-b5da7863-db15-49e5-a0ef-2110a223644c req-f10d05dc-8a7e-4cc7-a8c4-ad897dee30d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received event network-changed-9fd04bf3-73c2-4224-81ff-32ef5640604b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:11:28 np0005466030 nova_compute[230518]: 2025-10-02 13:11:28.696 2 DEBUG nova.compute.manager [req-b5da7863-db15-49e5-a0ef-2110a223644c req-f10d05dc-8a7e-4cc7-a8c4-ad897dee30d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Refreshing instance network info cache due to event network-changed-9fd04bf3-73c2-4224-81ff-32ef5640604b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:11:28 np0005466030 nova_compute[230518]: 2025-10-02 13:11:28.697 2 DEBUG oslo_concurrency.lockutils [req-b5da7863-db15-49e5-a0ef-2110a223644c req-f10d05dc-8a7e-4cc7-a8c4-ad897dee30d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:11:28 np0005466030 nova_compute[230518]: 2025-10-02 13:11:28.697 2 DEBUG oslo_concurrency.lockutils [req-b5da7863-db15-49e5-a0ef-2110a223644c req-f10d05dc-8a7e-4cc7-a8c4-ad897dee30d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:11:28 np0005466030 nova_compute[230518]: 2025-10-02 13:11:28.697 2 DEBUG nova.network.neutron [req-b5da7863-db15-49e5-a0ef-2110a223644c req-f10d05dc-8a7e-4cc7-a8c4-ad897dee30d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Refreshing network info cache for port 9fd04bf3-73c2-4224-81ff-32ef5640604b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:11:28 np0005466030 nova_compute[230518]: 2025-10-02 13:11:28.771 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:11:28 np0005466030 nova_compute[230518]: 2025-10-02 13:11:28.776 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:11:28 np0005466030 nova_compute[230518]: 2025-10-02 13:11:28.776 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Ensure instance console log exists: /var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:11:28 np0005466030 nova_compute[230518]: 2025-10-02 13:11:28.777 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:28 np0005466030 nova_compute[230518]: 2025-10-02 13:11:28.777 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:28 np0005466030 nova_compute[230518]: 2025-10-02 13:11:28.778 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:28 np0005466030 nova_compute[230518]: 2025-10-02 13:11:28.803 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:11:28 np0005466030 nova_compute[230518]: 2025-10-02 13:11:28.803 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:28 np0005466030 nova_compute[230518]: 2025-10-02 13:11:28.804 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:29 np0005466030 nova_compute[230518]: 2025-10-02 13:11:29.150 2 DEBUG nova.network.neutron [req-b5da7863-db15-49e5-a0ef-2110a223644c req-f10d05dc-8a7e-4cc7-a8c4-ad897dee30d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:11:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:29 np0005466030 nova_compute[230518]: 2025-10-02 13:11:29.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:29 np0005466030 nova_compute[230518]: 2025-10-02 13:11:29.734 2 DEBUG nova.network.neutron [req-b5da7863-db15-49e5-a0ef-2110a223644c req-f10d05dc-8a7e-4cc7-a8c4-ad897dee30d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:11:29 np0005466030 nova_compute[230518]: 2025-10-02 13:11:29.758 2 DEBUG oslo_concurrency.lockutils [req-b5da7863-db15-49e5-a0ef-2110a223644c req-f10d05dc-8a7e-4cc7-a8c4-ad897dee30d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:11:29 np0005466030 nova_compute[230518]: 2025-10-02 13:11:29.759 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquired lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:11:29 np0005466030 nova_compute[230518]: 2025-10-02 13:11:29.759 2 DEBUG nova.network.neutron [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:11:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:30.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:30.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:30 np0005466030 nova_compute[230518]: 2025-10-02 13:11:30.375 2 DEBUG nova.network.neutron [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:11:30 np0005466030 nova_compute[230518]: 2025-10-02 13:11:30.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.684 2 DEBUG nova.network.neutron [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Updating instance_info_cache with network_info: [{"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.906 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Releasing lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.906 2 DEBUG nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Instance network_info: |[{"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.908 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Start _get_guest_xml network_info=[{"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.913 2 WARNING nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.918 2 DEBUG nova.virt.libvirt.host [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.918 2 DEBUG nova.virt.libvirt.host [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.922 2 DEBUG nova.virt.libvirt.host [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.923 2 DEBUG nova.virt.libvirt.host [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.924 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.925 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.925 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.925 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.926 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.926 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.926 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.926 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.927 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.927 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.927 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.927 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:11:31 np0005466030 nova_compute[230518]: 2025-10-02 13:11:31.930 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:32.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:32.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:11:32 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2188314948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:11:32 np0005466030 nova_compute[230518]: 2025-10-02 13:11:32.410 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:32 np0005466030 nova_compute[230518]: 2025-10-02 13:11:32.440 2 DEBUG nova.storage.rbd_utils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:11:32 np0005466030 nova_compute[230518]: 2025-10-02 13:11:32.445 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:11:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:11:33 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1694941547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.131 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.687s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.133 2 DEBUG nova.virt.libvirt.vif [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1967034242',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1967034242',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=203,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARJDtWwEIa73jOfITh0S3Xi3OzT9rBF+alfsyLXRbxk2puonzRxucJBK2BRHoshC3dw4yzXZgc14rNHWEy9MW96gMF19bT8yeo1M4v5Bwum2wxMyyCXx0KGJeRmwnd5wQ==',key_name='tempest-TestSecurityGroupsBasicOps-133947143',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-i85j97wy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:11:22Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=5ecce258-097c-4a5a-9c44-087e8129ceaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.134 2 DEBUG nova.network.os_vif_util [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.135 2 DEBUG nova.network.os_vif_util [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:26:39,bridge_name='br-int',has_traffic_filtering=True,id=9fd04bf3-73c2-4224-81ff-32ef5640604b,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd04bf3-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.136 2 DEBUG nova.objects.instance [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5ecce258-097c-4a5a-9c44-087e8129ceaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.155 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:11:33 np0005466030 nova_compute[230518]:  <uuid>5ecce258-097c-4a5a-9c44-087e8129ceaf</uuid>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:  <name>instance-000000cb</name>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1967034242</nova:name>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:11:31</nova:creationTime>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:11:33 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:        <nova:user uuid="362b536431b64b15b67740060af57e9c">tempest-TestSecurityGroupsBasicOps-2067500093-project-member</nova:user>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:        <nova:project uuid="e911de934ec043d1bd942c8aed562d04">tempest-TestSecurityGroupsBasicOps-2067500093</nova:project>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:        <nova:port uuid="9fd04bf3-73c2-4224-81ff-32ef5640604b">
Oct  2 09:11:33 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <entry name="serial">5ecce258-097c-4a5a-9c44-087e8129ceaf</entry>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <entry name="uuid">5ecce258-097c-4a5a-9c44-087e8129ceaf</entry>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/5ecce258-097c-4a5a-9c44-087e8129ceaf_disk">
Oct  2 09:11:33 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:11:33 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/5ecce258-097c-4a5a-9c44-087e8129ceaf_disk.config">
Oct  2 09:11:33 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:11:33 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:ef:26:39"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <target dev="tap9fd04bf3-73"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf/console.log" append="off"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:11:33 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:11:33 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:11:33 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:11:33 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.156 2 DEBUG nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Preparing to wait for external event network-vif-plugged-9fd04bf3-73c2-4224-81ff-32ef5640604b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.157 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.157 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.158 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.158 2 DEBUG nova.virt.libvirt.vif [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1967034242',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1967034242',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=203,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARJDtWwEIa73jOfITh0S3Xi3OzT9rBF+alfsyLXRbxk2puonzRxucJBK2BRHoshC3dw4yzXZgc14rNHWEy9MW96gMF19bT8yeo1M4v5Bwum2wxMyyCXx0KGJeRmwnd5wQ==',key_name='tempest-TestSecurityGroupsBasicOps-133947143',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-i85j97wy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:11:22Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=5ecce258-097c-4a5a-9c44-087e8129ceaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.159 2 DEBUG nova.network.os_vif_util [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.159 2 DEBUG nova.network.os_vif_util [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:26:39,bridge_name='br-int',has_traffic_filtering=True,id=9fd04bf3-73c2-4224-81ff-32ef5640604b,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd04bf3-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.160 2 DEBUG os_vif [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:26:39,bridge_name='br-int',has_traffic_filtering=True,id=9fd04bf3-73c2-4224-81ff-32ef5640604b,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd04bf3-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.161 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.161 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.164 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9fd04bf3-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.164 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9fd04bf3-73, col_values=(('external_ids', {'iface-id': '9fd04bf3-73c2-4224-81ff-32ef5640604b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:26:39', 'vm-uuid': '5ecce258-097c-4a5a-9c44-087e8129ceaf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:33 np0005466030 NetworkManager[44960]: <info>  [1759410693.1666] manager: (tap9fd04bf3-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.172 2 INFO os_vif [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:26:39,bridge_name='br-int',has_traffic_filtering=True,id=9fd04bf3-73c2-4224-81ff-32ef5640604b,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd04bf3-73')#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.272 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.273 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.273 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No VIF found with MAC fa:16:3e:ef:26:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.274 2 INFO nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Using config drive#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.304 2 DEBUG nova.storage.rbd_utils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.833 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.833 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.834 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.834 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.834 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:33 np0005466030 nova_compute[230518]: 2025-10-02 13:11:33.835 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:11:34 np0005466030 nova_compute[230518]: 2025-10-02 13:11:34.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:34 np0005466030 nova_compute[230518]: 2025-10-02 13:11:34.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:11:34 np0005466030 nova_compute[230518]: 2025-10-02 13:11:34.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:11:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:34.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:34.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:35 np0005466030 nova_compute[230518]: 2025-10-02 13:11:35.276 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct  2 09:11:35 np0005466030 nova_compute[230518]: 2025-10-02 13:11:35.334 2 INFO nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Creating config drive at /var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf/disk.config
Oct  2 09:11:35 np0005466030 nova_compute[230518]: 2025-10-02 13:11:35.339 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwpov1ldt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:11:35 np0005466030 nova_compute[230518]: 2025-10-02 13:11:35.454 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:11:35 np0005466030 nova_compute[230518]: 2025-10-02 13:11:35.455 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:11:35 np0005466030 nova_compute[230518]: 2025-10-02 13:11:35.456 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  2 09:11:35 np0005466030 nova_compute[230518]: 2025-10-02 13:11:35.456 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 658821a7-5b97-43ad-8fe2-46e5303cf56c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:11:35 np0005466030 nova_compute[230518]: 2025-10-02 13:11:35.481 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwpov1ldt" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:11:35 np0005466030 nova_compute[230518]: 2025-10-02 13:11:35.516 2 DEBUG nova.storage.rbd_utils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:11:35 np0005466030 nova_compute[230518]: 2025-10-02 13:11:35.521 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf/disk.config 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:11:35 np0005466030 nova_compute[230518]: 2025-10-02 13:11:35.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:11:35 np0005466030 nova_compute[230518]: 2025-10-02 13:11:35.901 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf/disk.config 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:11:35 np0005466030 nova_compute[230518]: 2025-10-02 13:11:35.902 2 INFO nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Deleting local config drive /var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf/disk.config because it was imported into RBD.
Oct  2 09:11:35 np0005466030 kernel: tap9fd04bf3-73: entered promiscuous mode
Oct  2 09:11:35 np0005466030 ovn_controller[129257]: 2025-10-02T13:11:35Z|00831|binding|INFO|Claiming lport 9fd04bf3-73c2-4224-81ff-32ef5640604b for this chassis.
Oct  2 09:11:35 np0005466030 ovn_controller[129257]: 2025-10-02T13:11:35Z|00832|binding|INFO|9fd04bf3-73c2-4224-81ff-32ef5640604b: Claiming fa:16:3e:ef:26:39 10.100.0.12
Oct  2 09:11:35 np0005466030 NetworkManager[44960]: <info>  [1759410695.9712] manager: (tap9fd04bf3-73): new Tun device (/org/freedesktop/NetworkManager/Devices/385)
Oct  2 09:11:35 np0005466030 nova_compute[230518]: 2025-10-02 13:11:35.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:11:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:35.980 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:26:39 10.100.0.12'], port_security=['fa:16:3e:ef:26:39 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5ecce258-097c-4a5a-9c44-087e8129ceaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'caab64a4-2f87-4e39-a0ac-b96f95aae4c5 e9085353-0bf0-4de8-ac60-c4d68c9ff284', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f94f92e0-9e2a-42b5-8a3e-79ddfa458897, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=9fd04bf3-73c2-4224-81ff-32ef5640604b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:11:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:35.981 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 9fd04bf3-73c2-4224-81ff-32ef5640604b in datapath dac20349-4f21-4aeb-a4a7-d775590cb44a bound to our chassis
Oct  2 09:11:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:35.982 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dac20349-4f21-4aeb-a4a7-d775590cb44a
Oct  2 09:11:35 np0005466030 ovn_controller[129257]: 2025-10-02T13:11:35Z|00833|binding|INFO|Setting lport 9fd04bf3-73c2-4224-81ff-32ef5640604b ovn-installed in OVS
Oct  2 09:11:35 np0005466030 ovn_controller[129257]: 2025-10-02T13:11:35Z|00834|binding|INFO|Setting lport 9fd04bf3-73c2-4224-81ff-32ef5640604b up in Southbound
Oct  2 09:11:35 np0005466030 nova_compute[230518]: 2025-10-02 13:11:35.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:11:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:35.995 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7b8f52-6b59-4504-b990-750e26f15467]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:11:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:35.996 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdac20349-41 in ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct  2 09:11:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:35.998 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdac20349-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct  2 09:11:35 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:35.998 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[33691f87-be80-4bf9-b676-a0999284364b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:35.999 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2c769055-6560-4b6f-8614-063297389f17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:11:36 np0005466030 systemd-udevd[307823]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.012 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[0ac45995-9c1c-4cf5-ac52-4adb65f731b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:11:36 np0005466030 NetworkManager[44960]: <info>  [1759410696.0215] device (tap9fd04bf3-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:11:36 np0005466030 NetworkManager[44960]: <info>  [1759410696.0228] device (tap9fd04bf3-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:11:36 np0005466030 systemd-machined[188247]: New machine qemu-95-instance-000000cb.
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.036 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[413c7946-846f-4364-af68-394106f8efaa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:11:36 np0005466030 systemd[1]: Started Virtual Machine qemu-95-instance-000000cb.
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.078 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[16166b4e-f932-4be1-8710-8d7d87efcca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:11:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:36.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.084 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[89d74c4f-3f1a-42c0-94c5-56108b7596d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:11:36 np0005466030 NetworkManager[44960]: <info>  [1759410696.0860] manager: (tapdac20349-40): new Veth device (/org/freedesktop/NetworkManager/Devices/386)
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.137 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[76b432e0-3488-45e9-af53-5886c2955ca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.141 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[34d45441-317e-421b-833e-4a08d88e6ea2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:11:36 np0005466030 NetworkManager[44960]: <info>  [1759410696.1756] device (tapdac20349-40): carrier: link connected
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.188 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f4560c9c-7c7a-48f4-8f9c-b4dae3d9353c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.208 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ffb7aab7-98c1-463d-a6df-e19b7c3640eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdac20349-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:d8:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 855973, 'reachable_time': 44947, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307858, 'error': None, 'target': 'ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.230 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c44d39-8b75-4083-b1c1-9f0a5c1b6a4a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:d8a6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 855973, 'tstamp': 855973}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307859, 'error': None, 'target': 'ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.247 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ece3819a-60cd-4af8-bb11-4477ba30bcba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdac20349-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:d8:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 855973, 'reachable_time': 44947, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307860, 'error': None, 'target': 'ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.291 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c7d294d7-89a9-4f26-af83-92e8e94b460c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:11:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:36.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.368 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca3d03e-4d3b-47c1-8940-eb615dba8dd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.370 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdac20349-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.370 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.371 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdac20349-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:11:36 np0005466030 NetworkManager[44960]: <info>  [1759410696.3735] manager: (tapdac20349-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Oct  2 09:11:36 np0005466030 kernel: tapdac20349-40: entered promiscuous mode
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.376 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdac20349-40, col_values=(('external_ids', {'iface-id': '71ea06ee-2e8d-4617-a491-cbc5589b4465'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:11:36 np0005466030 ovn_controller[129257]: 2025-10-02T13:11:36Z|00835|binding|INFO|Releasing lport 71ea06ee-2e8d-4617-a491-cbc5589b4465 from this chassis (sb_readonly=0)
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.379 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dac20349-4f21-4aeb-a4a7-d775590cb44a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dac20349-4f21-4aeb-a4a7-d775590cb44a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.380 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9b831838-321c-4e24-bf33-b28bc83afee9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.381 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-dac20349-4f21-4aeb-a4a7-d775590cb44a
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/dac20349-4f21-4aeb-a4a7-d775590cb44a.pid.haproxy
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID dac20349-4f21-4aeb-a4a7-d775590cb44a
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  2 09:11:36 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.383 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'env', 'PROCESS_TAG=haproxy-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dac20349-4f21-4aeb-a4a7-d775590cb44a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  2 09:11:36 np0005466030 nova_compute[230518]: 2025-10-02 13:11:36.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:11:36 np0005466030 nova_compute[230518]: 2025-10-02 13:11:36.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:11:36 np0005466030 nova_compute[230518]: 2025-10-02 13:11:36.563 2 DEBUG nova.compute.manager [req-d1a37f1c-0258-49c9-8566-5dbb55877cfc req-45a425f3-9fc6-40bf-9234-f499606397a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received event network-vif-plugged-9fd04bf3-73c2-4224-81ff-32ef5640604b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:11:36 np0005466030 nova_compute[230518]: 2025-10-02 13:11:36.564 2 DEBUG oslo_concurrency.lockutils [req-d1a37f1c-0258-49c9-8566-5dbb55877cfc req-45a425f3-9fc6-40bf-9234-f499606397a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:11:36 np0005466030 nova_compute[230518]: 2025-10-02 13:11:36.564 2 DEBUG oslo_concurrency.lockutils [req-d1a37f1c-0258-49c9-8566-5dbb55877cfc req-45a425f3-9fc6-40bf-9234-f499606397a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:11:36 np0005466030 nova_compute[230518]: 2025-10-02 13:11:36.564 2 DEBUG oslo_concurrency.lockutils [req-d1a37f1c-0258-49c9-8566-5dbb55877cfc req-45a425f3-9fc6-40bf-9234-f499606397a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:11:36 np0005466030 nova_compute[230518]: 2025-10-02 13:11:36.565 2 DEBUG nova.compute.manager [req-d1a37f1c-0258-49c9-8566-5dbb55877cfc req-45a425f3-9fc6-40bf-9234-f499606397a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Processing event network-vif-plugged-9fd04bf3-73c2-4224-81ff-32ef5640604b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  2 09:11:36 np0005466030 podman[307892]: 2025-10-02 13:11:36.795258492 +0000 UTC m=+0.026851565 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:11:37 np0005466030 podman[307892]: 2025-10-02 13:11:37.192518596 +0000 UTC m=+0.424111649 container create 1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 09:11:37 np0005466030 systemd[1]: Started libpod-conmon-1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8.scope.
Oct  2 09:11:37 np0005466030 systemd[1]: Started libcrun container.
Oct  2 09:11:37 np0005466030 podman[307923]: 2025-10-02 13:11:37.327030139 +0000 UTC m=+0.102183799 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:11:37 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39bcf2783cfa7733fec5ee8c9f0cd2433751285a50f27407623ad86aee7b446/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:11:37 np0005466030 podman[307892]: 2025-10-02 13:11:37.369767841 +0000 UTC m=+0.601360904 container init 1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 09:11:37 np0005466030 podman[307924]: 2025-10-02 13:11:37.373871741 +0000 UTC m=+0.133197595 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 09:11:37 np0005466030 podman[307892]: 2025-10-02 13:11:37.375666897 +0000 UTC m=+0.607259960 container start 1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:11:37 np0005466030 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[307970]: [NOTICE]   (307994) : New worker (307997) forked
Oct  2 09:11:37 np0005466030 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[307970]: [NOTICE]   (307994) : Loading success.
Oct  2 09:11:37 np0005466030 nova_compute[230518]: 2025-10-02 13:11:37.766 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updating instance_info_cache with network_info: [{"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:11:37 np0005466030 nova_compute[230518]: 2025-10-02 13:11:37.839 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410697.8389783, 5ecce258-097c-4a5a-9c44-087e8129ceaf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:11:37 np0005466030 nova_compute[230518]: 2025-10-02 13:11:37.840 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] VM Started (Lifecycle Event)#033[00m
Oct  2 09:11:37 np0005466030 nova_compute[230518]: 2025-10-02 13:11:37.843 2 DEBUG nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:11:37 np0005466030 nova_compute[230518]: 2025-10-02 13:11:37.848 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:11:37 np0005466030 nova_compute[230518]: 2025-10-02 13:11:37.852 2 INFO nova.virt.libvirt.driver [-] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Instance spawned successfully.#033[00m
Oct  2 09:11:37 np0005466030 nova_compute[230518]: 2025-10-02 13:11:37.852 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:11:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:38.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:11:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:38.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.617 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.618 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.618 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.618 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.620 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.621 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.621 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.621 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.622 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.622 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.624 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.625 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.625 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.628 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.780 2 DEBUG nova.compute.manager [req-174b3298-8f0c-45fe-ae20-b30632bb9953 req-5c196d99-c99b-4a41-be91-f2727d1bf83b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received event network-vif-plugged-9fd04bf3-73c2-4224-81ff-32ef5640604b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.781 2 DEBUG oslo_concurrency.lockutils [req-174b3298-8f0c-45fe-ae20-b30632bb9953 req-5c196d99-c99b-4a41-be91-f2727d1bf83b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.782 2 DEBUG oslo_concurrency.lockutils [req-174b3298-8f0c-45fe-ae20-b30632bb9953 req-5c196d99-c99b-4a41-be91-f2727d1bf83b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.782 2 DEBUG oslo_concurrency.lockutils [req-174b3298-8f0c-45fe-ae20-b30632bb9953 req-5c196d99-c99b-4a41-be91-f2727d1bf83b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.782 2 DEBUG nova.compute.manager [req-174b3298-8f0c-45fe-ae20-b30632bb9953 req-5c196d99-c99b-4a41-be91-f2727d1bf83b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] No waiting events found dispatching network-vif-plugged-9fd04bf3-73c2-4224-81ff-32ef5640604b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.783 2 WARNING nova.compute.manager [req-174b3298-8f0c-45fe-ae20-b30632bb9953 req-5c196d99-c99b-4a41-be91-f2727d1bf83b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received unexpected event network-vif-plugged-9fd04bf3-73c2-4224-81ff-32ef5640604b for instance with vm_state building and task_state spawning.#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.825 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.826 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410697.8398361, 5ecce258-097c-4a5a-9c44-087e8129ceaf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.826 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.902 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.906 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410697.847628, 5ecce258-097c-4a5a-9c44-087e8129ceaf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.906 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.944 2 INFO nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Took 15.88 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:11:38 np0005466030 nova_compute[230518]: 2025-10-02 13:11:38.945 2 DEBUG nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:11:39 np0005466030 nova_compute[230518]: 2025-10-02 13:11:39.005 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:11:39 np0005466030 nova_compute[230518]: 2025-10-02 13:11:39.009 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:11:39 np0005466030 nova_compute[230518]: 2025-10-02 13:11:39.047 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:11:39 np0005466030 nova_compute[230518]: 2025-10-02 13:11:39.082 2 INFO nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Took 19.69 seconds to build instance.#033[00m
Oct  2 09:11:39 np0005466030 nova_compute[230518]: 2025-10-02 13:11:39.174 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:11:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:40.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:40.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:40 np0005466030 nova_compute[230518]: 2025-10-02 13:11:40.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:40 np0005466030 ovn_controller[129257]: 2025-10-02T13:11:40Z|00836|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct  2 09:11:40 np0005466030 ovn_controller[129257]: 2025-10-02T13:11:40Z|00837|binding|INFO|Releasing lport 71ea06ee-2e8d-4617-a491-cbc5589b4465 from this chassis (sb_readonly=0)
Oct  2 09:11:40 np0005466030 nova_compute[230518]: 2025-10-02 13:11:40.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:42.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:42.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:43 np0005466030 nova_compute[230518]: 2025-10-02 13:11:43.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct  2 09:11:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:44.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct  2 09:11:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:44.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:45 np0005466030 nova_compute[230518]: 2025-10-02 13:11:45.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:46.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:11:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:46.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:11:46 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:46.336 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:11:46 np0005466030 nova_compute[230518]: 2025-10-02 13:11:46.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:46 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:46.340 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:11:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:11:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:48.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:11:48 np0005466030 nova_compute[230518]: 2025-10-02 13:11:48.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:48.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:48 np0005466030 nova_compute[230518]: 2025-10-02 13:11:48.715 2 DEBUG nova.compute.manager [req-4cd22bfb-3cd0-4d49-8602-3e21f161ebb0 req-206d4085-005c-4414-a4cb-347158625713 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received event network-changed-9fd04bf3-73c2-4224-81ff-32ef5640604b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:11:48 np0005466030 nova_compute[230518]: 2025-10-02 13:11:48.716 2 DEBUG nova.compute.manager [req-4cd22bfb-3cd0-4d49-8602-3e21f161ebb0 req-206d4085-005c-4414-a4cb-347158625713 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Refreshing instance network info cache due to event network-changed-9fd04bf3-73c2-4224-81ff-32ef5640604b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:11:48 np0005466030 nova_compute[230518]: 2025-10-02 13:11:48.717 2 DEBUG oslo_concurrency.lockutils [req-4cd22bfb-3cd0-4d49-8602-3e21f161ebb0 req-206d4085-005c-4414-a4cb-347158625713 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:11:48 np0005466030 nova_compute[230518]: 2025-10-02 13:11:48.718 2 DEBUG oslo_concurrency.lockutils [req-4cd22bfb-3cd0-4d49-8602-3e21f161ebb0 req-206d4085-005c-4414-a4cb-347158625713 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:11:48 np0005466030 nova_compute[230518]: 2025-10-02 13:11:48.719 2 DEBUG nova.network.neutron [req-4cd22bfb-3cd0-4d49-8602-3e21f161ebb0 req-206d4085-005c-4414-a4cb-347158625713 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Refreshing network info cache for port 9fd04bf3-73c2-4224-81ff-32ef5640604b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:11:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:50.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:50.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:50 np0005466030 nova_compute[230518]: 2025-10-02 13:11:50.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:51 np0005466030 ovn_controller[129257]: 2025-10-02T13:11:51Z|00838|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct  2 09:11:51 np0005466030 ovn_controller[129257]: 2025-10-02T13:11:51Z|00839|binding|INFO|Releasing lport 71ea06ee-2e8d-4617-a491-cbc5589b4465 from this chassis (sb_readonly=0)
Oct  2 09:11:51 np0005466030 nova_compute[230518]: 2025-10-02 13:11:51.698 2 DEBUG nova.network.neutron [req-4cd22bfb-3cd0-4d49-8602-3e21f161ebb0 req-206d4085-005c-4414-a4cb-347158625713 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Updated VIF entry in instance network info cache for port 9fd04bf3-73c2-4224-81ff-32ef5640604b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:11:51 np0005466030 nova_compute[230518]: 2025-10-02 13:11:51.699 2 DEBUG nova.network.neutron [req-4cd22bfb-3cd0-4d49-8602-3e21f161ebb0 req-206d4085-005c-4414-a4cb-347158625713 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Updating instance_info_cache with network_info: [{"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:11:51 np0005466030 nova_compute[230518]: 2025-10-02 13:11:51.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:51 np0005466030 nova_compute[230518]: 2025-10-02 13:11:51.766 2 DEBUG oslo_concurrency.lockutils [req-4cd22bfb-3cd0-4d49-8602-3e21f161ebb0 req-206d4085-005c-4414-a4cb-347158625713 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:11:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:52.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:52.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:53 np0005466030 nova_compute[230518]: 2025-10-02 13:11:53.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:11:53.343 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:11:53 np0005466030 ceph-osd[78262]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct  2 09:11:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:54.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:54.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:11:54 np0005466030 ovn_controller[129257]: 2025-10-02T13:11:54Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ef:26:39 10.100.0.12
Oct  2 09:11:54 np0005466030 ovn_controller[129257]: 2025-10-02T13:11:54Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ef:26:39 10.100.0.12
Oct  2 09:11:55 np0005466030 nova_compute[230518]: 2025-10-02 13:11:55.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:56.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:11:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:56.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:11:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:58.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:58 np0005466030 nova_compute[230518]: 2025-10-02 13:11:58.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:11:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:11:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:11:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:58.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:11:58 np0005466030 podman[308032]: 2025-10-02 13:11:58.37370906 +0000 UTC m=+0.104620006 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:11:58 np0005466030 podman[308031]: 2025-10-02 13:11:58.399153919 +0000 UTC m=+0.125267655 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 09:11:59 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:11:59 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:11:59 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:11:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:00.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:00.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:00 np0005466030 nova_compute[230518]: 2025-10-02 13:12:00.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:02.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:02.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:03 np0005466030 nova_compute[230518]: 2025-10-02 13:12:03.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:04.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:04.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:05 np0005466030 nova_compute[230518]: 2025-10-02 13:12:05.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:12:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:12:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:06.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:06.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:07 np0005466030 podman[308233]: 2025-10-02 13:12:07.830594269 +0000 UTC m=+0.071562848 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 09:12:07 np0005466030 podman[308232]: 2025-10-02 13:12:07.837565028 +0000 UTC m=+0.080051225 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_managed=true)
Oct  2 09:12:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:08 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 09:12:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:08.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:08 np0005466030 nova_compute[230518]: 2025-10-02 13:12:08.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:08.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:10.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:10.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:10 np0005466030 nova_compute[230518]: 2025-10-02 13:12:10.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:11 np0005466030 nova_compute[230518]: 2025-10-02 13:12:11.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:12.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:12.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e372 e372: 3 total, 3 up, 3 in
Oct  2 09:12:13 np0005466030 nova_compute[230518]: 2025-10-02 13:12:13.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e373 e373: 3 total, 3 up, 3 in
Oct  2 09:12:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:14.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:14.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:15 np0005466030 nova_compute[230518]: 2025-10-02 13:12:15.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:16.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:16.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:18.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:18 np0005466030 nova_compute[230518]: 2025-10-02 13:12:18.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:18.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:18 np0005466030 nova_compute[230518]: 2025-10-02 13:12:18.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:19 np0005466030 nova_compute[230518]: 2025-10-02 13:12:19.260 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:19 np0005466030 nova_compute[230518]: 2025-10-02 13:12:19.260 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:12:19 np0005466030 nova_compute[230518]: 2025-10-02 13:12:19.278 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:12:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:20.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:20.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:20 np0005466030 nova_compute[230518]: 2025-10-02 13:12:20.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:21 np0005466030 nova_compute[230518]: 2025-10-02 13:12:21.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e374 e374: 3 total, 3 up, 3 in
Oct  2 09:12:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:22.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:22.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:23 np0005466030 nova_compute[230518]: 2025-10-02 13:12:23.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e375 e375: 3 total, 3 up, 3 in
Oct  2 09:12:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:24.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:24.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:25 np0005466030 nova_compute[230518]: 2025-10-02 13:12:25.071 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:25 np0005466030 nova_compute[230518]: 2025-10-02 13:12:25.103 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:25 np0005466030 nova_compute[230518]: 2025-10-02 13:12:25.103 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:25 np0005466030 nova_compute[230518]: 2025-10-02 13:12:25.103 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:25 np0005466030 nova_compute[230518]: 2025-10-02 13:12:25.104 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:12:25 np0005466030 nova_compute[230518]: 2025-10-02 13:12:25.104 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:12:25 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2888907482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:12:25 np0005466030 nova_compute[230518]: 2025-10-02 13:12:25.617 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:25 np0005466030 nova_compute[230518]: 2025-10-02 13:12:25.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:25 np0005466030 nova_compute[230518]: 2025-10-02 13:12:25.719 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:12:25 np0005466030 nova_compute[230518]: 2025-10-02 13:12:25.720 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:12:25 np0005466030 nova_compute[230518]: 2025-10-02 13:12:25.725 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:12:25 np0005466030 nova_compute[230518]: 2025-10-02 13:12:25.726 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:12:25 np0005466030 nova_compute[230518]: 2025-10-02 13:12:25.967 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:12:25 np0005466030 nova_compute[230518]: 2025-10-02 13:12:25.968 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3792MB free_disk=20.739166259765625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:12:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:12:25.968 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:25 np0005466030 nova_compute[230518]: 2025-10-02 13:12:25.968 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:12:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:12:25.969 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:25 np0005466030 nova_compute[230518]: 2025-10-02 13:12:25.969 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:12:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:12:25.970 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:26 np0005466030 nova_compute[230518]: 2025-10-02 13:12:26.150 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 658821a7-5b97-43ad-8fe2-46e5303cf56c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:12:26 np0005466030 nova_compute[230518]: 2025-10-02 13:12:26.151 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 5ecce258-097c-4a5a-9c44-087e8129ceaf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:12:26 np0005466030 nova_compute[230518]: 2025-10-02 13:12:26.151 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:12:26 np0005466030 nova_compute[230518]: 2025-10-02 13:12:26.152 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:12:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:26.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:26 np0005466030 nova_compute[230518]: 2025-10-02 13:12:26.206 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:12:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:26.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:12:26 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1570152893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:12:26 np0005466030 nova_compute[230518]: 2025-10-02 13:12:26.998 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.792s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:12:27 np0005466030 nova_compute[230518]: 2025-10-02 13:12:27.006 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:12:27 np0005466030 nova_compute[230518]: 2025-10-02 13:12:27.038 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:12:27 np0005466030 nova_compute[230518]: 2025-10-02 13:12:27.075 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:12:27 np0005466030 nova_compute[230518]: 2025-10-02 13:12:27.075 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:12:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:28.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:28 np0005466030 nova_compute[230518]: 2025-10-02 13:12:28.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:28.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:28 np0005466030 podman[308320]: 2025-10-02 13:12:28.863312891 +0000 UTC m=+0.092432964 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:12:28 np0005466030 podman[308319]: 2025-10-02 13:12:28.906177057 +0000 UTC m=+0.138321535 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:12:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:30.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:30.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:30 np0005466030 nova_compute[230518]: 2025-10-02 13:12:30.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:32 np0005466030 nova_compute[230518]: 2025-10-02 13:12:32.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:32 np0005466030 nova_compute[230518]: 2025-10-02 13:12:32.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:32 np0005466030 nova_compute[230518]: 2025-10-02 13:12:32.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:32 np0005466030 nova_compute[230518]: 2025-10-02 13:12:32.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:32 np0005466030 nova_compute[230518]: 2025-10-02 13:12:32.055 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:32 np0005466030 nova_compute[230518]: 2025-10-02 13:12:32.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:12:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 e376: 3 total, 3 up, 3 in
Oct  2 09:12:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:32.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:32.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:33 np0005466030 nova_compute[230518]: 2025-10-02 13:12:33.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:34.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:34.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:35 np0005466030 nova_compute[230518]: 2025-10-02 13:12:35.055 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:35 np0005466030 nova_compute[230518]: 2025-10-02 13:12:35.056 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:12:35 np0005466030 nova_compute[230518]: 2025-10-02 13:12:35.056 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:12:35 np0005466030 nova_compute[230518]: 2025-10-02 13:12:35.311 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:12:35 np0005466030 nova_compute[230518]: 2025-10-02 13:12:35.312 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:12:35 np0005466030 nova_compute[230518]: 2025-10-02 13:12:35.312 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:12:35 np0005466030 nova_compute[230518]: 2025-10-02 13:12:35.313 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 658821a7-5b97-43ad-8fe2-46e5303cf56c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:12:35 np0005466030 nova_compute[230518]: 2025-10-02 13:12:35.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:36.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:36.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:37 np0005466030 nova_compute[230518]: 2025-10-02 13:12:37.510 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updating instance_info_cache with network_info: [{"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:12:37 np0005466030 nova_compute[230518]: 2025-10-02 13:12:37.539 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:12:37 np0005466030 nova_compute[230518]: 2025-10-02 13:12:37.540 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:12:37 np0005466030 nova_compute[230518]: 2025-10-02 13:12:37.541 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:37 np0005466030 nova_compute[230518]: 2025-10-02 13:12:37.541 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:38.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:38 np0005466030 nova_compute[230518]: 2025-10-02 13:12:38.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:38.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:38 np0005466030 podman[308366]: 2025-10-02 13:12:38.8240321 +0000 UTC m=+0.074060097 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3)
Oct  2 09:12:38 np0005466030 podman[308367]: 2025-10-02 13:12:38.883167546 +0000 UTC m=+0.120456623 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, container_name=multipathd)
Oct  2 09:12:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:40.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:40.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:40 np0005466030 nova_compute[230518]: 2025-10-02 13:12:40.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:42.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:42.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:43 np0005466030 nova_compute[230518]: 2025-10-02 13:12:43.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:43 np0005466030 nova_compute[230518]: 2025-10-02 13:12:43.534 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:12:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:44.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:44.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:45 np0005466030 nova_compute[230518]: 2025-10-02 13:12:45.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:46.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:46.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:48.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:48 np0005466030 nova_compute[230518]: 2025-10-02 13:12:48.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:48.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:50.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:50.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:50 np0005466030 nova_compute[230518]: 2025-10-02 13:12:50.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:12:51.650 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:12:51 np0005466030 nova_compute[230518]: 2025-10-02 13:12:51.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:12:51.651 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:12:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:52.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:52.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:53 np0005466030 nova_compute[230518]: 2025-10-02 13:12:53.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:54.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:12:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:54.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:12:55 np0005466030 nova_compute[230518]: 2025-10-02 13:12:55.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:56.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:56.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:56 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:12:56.653 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:12:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:12:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:58.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:58 np0005466030 nova_compute[230518]: 2025-10-02 13:12:58.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:12:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:12:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:12:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:58.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:12:59 np0005466030 podman[308412]: 2025-10-02 13:12:59.855749524 +0000 UTC m=+0.078542686 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:12:59 np0005466030 podman[308411]: 2025-10-02 13:12:59.887690628 +0000 UTC m=+0.119477013 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 09:13:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:00.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:00.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:00 np0005466030 nova_compute[230518]: 2025-10-02 13:13:00.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:02.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:02.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:03 np0005466030 nova_compute[230518]: 2025-10-02 13:13:03.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:04.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:04.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:05 np0005466030 nova_compute[230518]: 2025-10-02 13:13:05.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:13:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/106990781' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:13:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:13:06 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/106990781' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.210 2 DEBUG nova.compute.manager [req-42316382-e4e6-48d6-a418-22f65e49d84c req-4ce1ac0a-cd47-4542-9b9b-2d76e5feb490 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received event network-changed-9fd04bf3-73c2-4224-81ff-32ef5640604b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.210 2 DEBUG nova.compute.manager [req-42316382-e4e6-48d6-a418-22f65e49d84c req-4ce1ac0a-cd47-4542-9b9b-2d76e5feb490 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Refreshing instance network info cache due to event network-changed-9fd04bf3-73c2-4224-81ff-32ef5640604b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.210 2 DEBUG oslo_concurrency.lockutils [req-42316382-e4e6-48d6-a418-22f65e49d84c req-4ce1ac0a-cd47-4542-9b9b-2d76e5feb490 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.210 2 DEBUG oslo_concurrency.lockutils [req-42316382-e4e6-48d6-a418-22f65e49d84c req-4ce1ac0a-cd47-4542-9b9b-2d76e5feb490 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.210 2 DEBUG nova.network.neutron [req-42316382-e4e6-48d6-a418-22f65e49d84c req-4ce1ac0a-cd47-4542-9b9b-2d76e5feb490 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Refreshing network info cache for port 9fd04bf3-73c2-4224-81ff-32ef5640604b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:13:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:06.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.261 2 DEBUG oslo_concurrency.lockutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.262 2 DEBUG oslo_concurrency.lockutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.262 2 DEBUG oslo_concurrency.lockutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.262 2 DEBUG oslo_concurrency.lockutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.262 2 DEBUG oslo_concurrency.lockutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.263 2 INFO nova.compute.manager [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Terminating instance#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.264 2 DEBUG nova.compute.manager [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.336 2 DEBUG oslo_concurrency.lockutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "5ecce258-097c-4a5a-9c44-087e8129ceaf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.337 2 DEBUG oslo_concurrency.lockutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.337 2 DEBUG oslo_concurrency.lockutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.337 2 DEBUG oslo_concurrency.lockutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.337 2 DEBUG oslo_concurrency.lockutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.338 2 INFO nova.compute.manager [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Terminating instance#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.339 2 DEBUG nova.compute.manager [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:13:06 np0005466030 kernel: tap15cb070c-0f (unregistering): left promiscuous mode
Oct  2 09:13:06 np0005466030 NetworkManager[44960]: <info>  [1759410786.3980] device (tap15cb070c-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:13:06 np0005466030 ovn_controller[129257]: 2025-10-02T13:13:06Z|00840|binding|INFO|Releasing lport 15cb070c-0f52-464f-a2b4-8597c15212e9 from this chassis (sb_readonly=0)
Oct  2 09:13:06 np0005466030 ovn_controller[129257]: 2025-10-02T13:13:06Z|00841|binding|INFO|Setting lport 15cb070c-0f52-464f-a2b4-8597c15212e9 down in Southbound
Oct  2 09:13:06 np0005466030 ovn_controller[129257]: 2025-10-02T13:13:06Z|00842|binding|INFO|Removing iface tap15cb070c-0f ovn-installed in OVS
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:06.416 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:47:21 10.100.0.3'], port_security=['fa:16:3e:e2:47:21 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '658821a7-5b97-43ad-8fe2-46e5303cf56c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c95f312a-09a8-4e2c-af55-3ef0a0e41bfc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=15cb070c-0f52-464f-a2b4-8597c15212e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:13:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:06.417 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 15cb070c-0f52-464f-a2b4-8597c15212e9 in datapath d9001b9c-bca6-4085-a954-1414269e31bc unbound from our chassis#033[00m
Oct  2 09:13:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:06.419 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9001b9c-bca6-4085-a954-1414269e31bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:13:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:06.423 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[763800aa-a1a4-481c-95e2-5485ff6be373]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:06.425 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc namespace which is not needed anymore#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:06.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:06 np0005466030 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000c5.scope: Deactivated successfully.
Oct  2 09:13:06 np0005466030 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000c5.scope: Consumed 23.553s CPU time.
Oct  2 09:13:06 np0005466030 kernel: tap9fd04bf3-73 (unregistering): left promiscuous mode
Oct  2 09:13:06 np0005466030 systemd-machined[188247]: Machine qemu-91-instance-000000c5 terminated.
Oct  2 09:13:06 np0005466030 NetworkManager[44960]: <info>  [1759410786.4861] device (tap9fd04bf3-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:06 np0005466030 ovn_controller[129257]: 2025-10-02T13:13:06Z|00843|binding|INFO|Releasing lport 9fd04bf3-73c2-4224-81ff-32ef5640604b from this chassis (sb_readonly=0)
Oct  2 09:13:06 np0005466030 ovn_controller[129257]: 2025-10-02T13:13:06Z|00844|binding|INFO|Setting lport 9fd04bf3-73c2-4224-81ff-32ef5640604b down in Southbound
Oct  2 09:13:06 np0005466030 ovn_controller[129257]: 2025-10-02T13:13:06Z|00845|binding|INFO|Removing iface tap9fd04bf3-73 ovn-installed in OVS
Oct  2 09:13:06 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:06.500 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:26:39 10.100.0.12'], port_security=['fa:16:3e:ef:26:39 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5ecce258-097c-4a5a-9c44-087e8129ceaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'caab64a4-2f87-4e39-a0ac-b96f95aae4c5 e9085353-0bf0-4de8-ac60-c4d68c9ff284', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f94f92e0-9e2a-42b5-8a3e-79ddfa458897, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=9fd04bf3-73c2-4224-81ff-32ef5640604b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:06 np0005466030 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000cb.scope: Deactivated successfully.
Oct  2 09:13:06 np0005466030 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000cb.scope: Consumed 18.636s CPU time.
Oct  2 09:13:06 np0005466030 systemd-machined[188247]: Machine qemu-95-instance-000000cb terminated.
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.705 2 INFO nova.virt.libvirt.driver [-] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Instance destroyed successfully.#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.706 2 DEBUG nova.objects.instance [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'resources' on Instance uuid 658821a7-5b97-43ad-8fe2-46e5303cf56c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.723 2 DEBUG nova.virt.libvirt.vif [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:08:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=197,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:09:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-51wwuied',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:09:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=658821a7-5b97-43ad-8fe2-46e5303cf56c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.723 2 DEBUG nova.network.os_vif_util [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.724 2 DEBUG nova.network.os_vif_util [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e2:47:21,bridge_name='br-int',has_traffic_filtering=True,id=15cb070c-0f52-464f-a2b4-8597c15212e9,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15cb070c-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.725 2 DEBUG os_vif [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:47:21,bridge_name='br-int',has_traffic_filtering=True,id=15cb070c-0f52-464f-a2b4-8597c15212e9,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15cb070c-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.727 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15cb070c-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.740 2 INFO os_vif [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:47:21,bridge_name='br-int',has_traffic_filtering=True,id=15cb070c-0f52-464f-a2b4-8597c15212e9,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15cb070c-0f')#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.780 2 INFO nova.virt.libvirt.driver [-] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Instance destroyed successfully.#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.781 2 DEBUG nova.objects.instance [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'resources' on Instance uuid 5ecce258-097c-4a5a-9c44-087e8129ceaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:13:06 np0005466030 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[304769]: [NOTICE]   (304773) : haproxy version is 2.8.14-c23fe91
Oct  2 09:13:06 np0005466030 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[304769]: [NOTICE]   (304773) : path to executable is /usr/sbin/haproxy
Oct  2 09:13:06 np0005466030 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[304769]: [WARNING]  (304773) : Exiting Master process...
Oct  2 09:13:06 np0005466030 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[304769]: [ALERT]    (304773) : Current worker (304775) exited with code 143 (Terminated)
Oct  2 09:13:06 np0005466030 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[304769]: [WARNING]  (304773) : All workers exited. Exiting... (0)
Oct  2 09:13:06 np0005466030 systemd[1]: libpod-c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003.scope: Deactivated successfully.
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.815 2 DEBUG nova.virt.libvirt.vif [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1967034242',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1967034242',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=203,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARJDtWwEIa73jOfITh0S3Xi3OzT9rBF+alfsyLXRbxk2puonzRxucJBK2BRHoshC3dw4yzXZgc14rNHWEy9MW96gMF19bT8yeo1M4v5Bwum2wxMyyCXx0KGJeRmwnd5wQ==',key_name='tempest-TestSecurityGroupsBasicOps-133947143',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:11:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-i85j97wy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:11:39Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=5ecce258-097c-4a5a-9c44-087e8129ceaf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.815 2 DEBUG nova.network.os_vif_util [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:13:06 np0005466030 podman[308623]: 2025-10-02 13:13:06.817241032 +0000 UTC m=+0.236908670 container died c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.817 2 DEBUG nova.network.os_vif_util [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ef:26:39,bridge_name='br-int',has_traffic_filtering=True,id=9fd04bf3-73c2-4224-81ff-32ef5640604b,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd04bf3-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.819 2 DEBUG os_vif [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:26:39,bridge_name='br-int',has_traffic_filtering=True,id=9fd04bf3-73c2-4224-81ff-32ef5640604b,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd04bf3-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.822 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fd04bf3-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:06 np0005466030 nova_compute[230518]: 2025-10-02 13:13:06.829 2 INFO os_vif [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:26:39,bridge_name='br-int',has_traffic_filtering=True,id=9fd04bf3-73c2-4224-81ff-32ef5640604b,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd04bf3-73')#033[00m
Oct  2 09:13:07 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003-userdata-shm.mount: Deactivated successfully.
Oct  2 09:13:07 np0005466030 systemd[1]: var-lib-containers-storage-overlay-410f773b118732b26b6feca850b0977a2c84aeb1020cb4d6bcef409aa2a24707-merged.mount: Deactivated successfully.
Oct  2 09:13:07 np0005466030 podman[308623]: 2025-10-02 13:13:07.518526193 +0000 UTC m=+0.938193831 container cleanup c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:13:07 np0005466030 systemd[1]: libpod-conmon-c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003.scope: Deactivated successfully.
Oct  2 09:13:07 np0005466030 podman[308740]: 2025-10-02 13:13:07.8057014 +0000 UTC m=+0.463191265 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 09:13:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:13:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:13:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:08.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:08 np0005466030 nova_compute[230518]: 2025-10-02 13:13:08.419 2 DEBUG nova.compute.manager [req-80508610-5f35-466b-b904-89e83fc01c3b req-c6e88c82-60ad-4d40-8620-fb12e41ce312 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received event network-vif-unplugged-9fd04bf3-73c2-4224-81ff-32ef5640604b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:08 np0005466030 nova_compute[230518]: 2025-10-02 13:13:08.421 2 DEBUG oslo_concurrency.lockutils [req-80508610-5f35-466b-b904-89e83fc01c3b req-c6e88c82-60ad-4d40-8620-fb12e41ce312 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:08 np0005466030 nova_compute[230518]: 2025-10-02 13:13:08.421 2 DEBUG oslo_concurrency.lockutils [req-80508610-5f35-466b-b904-89e83fc01c3b req-c6e88c82-60ad-4d40-8620-fb12e41ce312 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:08 np0005466030 nova_compute[230518]: 2025-10-02 13:13:08.421 2 DEBUG oslo_concurrency.lockutils [req-80508610-5f35-466b-b904-89e83fc01c3b req-c6e88c82-60ad-4d40-8620-fb12e41ce312 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:08 np0005466030 nova_compute[230518]: 2025-10-02 13:13:08.422 2 DEBUG nova.compute.manager [req-80508610-5f35-466b-b904-89e83fc01c3b req-c6e88c82-60ad-4d40-8620-fb12e41ce312 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] No waiting events found dispatching network-vif-unplugged-9fd04bf3-73c2-4224-81ff-32ef5640604b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:13:08 np0005466030 nova_compute[230518]: 2025-10-02 13:13:08.422 2 DEBUG nova.compute.manager [req-80508610-5f35-466b-b904-89e83fc01c3b req-c6e88c82-60ad-4d40-8620-fb12e41ce312 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received event network-vif-unplugged-9fd04bf3-73c2-4224-81ff-32ef5640604b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:13:08 np0005466030 nova_compute[230518]: 2025-10-02 13:13:08.453 2 DEBUG nova.compute.manager [req-af6ea8cf-b318-4852-8a12-27668fd47ee5 req-b026406f-9461-4394-b0c7-94611befc89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received event network-vif-unplugged-15cb070c-0f52-464f-a2b4-8597c15212e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:08 np0005466030 nova_compute[230518]: 2025-10-02 13:13:08.453 2 DEBUG oslo_concurrency.lockutils [req-af6ea8cf-b318-4852-8a12-27668fd47ee5 req-b026406f-9461-4394-b0c7-94611befc89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:08 np0005466030 nova_compute[230518]: 2025-10-02 13:13:08.454 2 DEBUG oslo_concurrency.lockutils [req-af6ea8cf-b318-4852-8a12-27668fd47ee5 req-b026406f-9461-4394-b0c7-94611befc89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:08 np0005466030 nova_compute[230518]: 2025-10-02 13:13:08.454 2 DEBUG oslo_concurrency.lockutils [req-af6ea8cf-b318-4852-8a12-27668fd47ee5 req-b026406f-9461-4394-b0c7-94611befc89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:08 np0005466030 nova_compute[230518]: 2025-10-02 13:13:08.454 2 DEBUG nova.compute.manager [req-af6ea8cf-b318-4852-8a12-27668fd47ee5 req-b026406f-9461-4394-b0c7-94611befc89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] No waiting events found dispatching network-vif-unplugged-15cb070c-0f52-464f-a2b4-8597c15212e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:13:08 np0005466030 nova_compute[230518]: 2025-10-02 13:13:08.454 2 DEBUG nova.compute.manager [req-af6ea8cf-b318-4852-8a12-27668fd47ee5 req-b026406f-9461-4394-b0c7-94611befc89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received event network-vif-unplugged-15cb070c-0f52-464f-a2b4-8597c15212e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:13:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:08.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:08 np0005466030 podman[308753]: 2025-10-02 13:13:08.470587668 +0000 UTC m=+0.902455268 container remove c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:13:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.480 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[92faccf8-cca9-43e6-8e4b-7b7e2d921426]: (4, ('Thu Oct  2 01:13:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc (c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003)\nc27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003\nThu Oct  2 01:13:07 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc (c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003)\nc27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.484 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[019a1d4c-5645-4803-baa1-487aea6a0fcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:08 np0005466030 kernel: tapd9001b9c-b0: left promiscuous mode
Oct  2 09:13:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.486 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:08 np0005466030 nova_compute[230518]: 2025-10-02 13:13:08.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:08 np0005466030 nova_compute[230518]: 2025-10-02 13:13:08.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.515 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8eee8018-2779-4b6f-96be-da0d71127d78]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:08 np0005466030 podman[308740]: 2025-10-02 13:13:08.538602084 +0000 UTC m=+1.196091869 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 09:13:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.560 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d3566eff-0cdd-4844-a4dd-caa49b3b4ffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.561 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3330ae15-8fa3-4240-9796-857feccc6295]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.590 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ca78f58c-6969-4a8b-97da-adde91e41533]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840851, 'reachable_time': 31347, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308790, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:08 np0005466030 systemd[1]: run-netns-ovnmeta\x2dd9001b9c\x2dbca6\x2d4085\x2da954\x2d1414269e31bc.mount: Deactivated successfully.
Oct  2 09:13:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.601 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:13:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.602 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b444c8-27a5-4d69-a84d-19b4a0a6f8e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.604 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 9fd04bf3-73c2-4224-81ff-32ef5640604b in datapath dac20349-4f21-4aeb-a4a7-d775590cb44a unbound from our chassis#033[00m
Oct  2 09:13:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.605 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dac20349-4f21-4aeb-a4a7-d775590cb44a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:13:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.606 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[088ac60a-4959-4e61-aa5f-0ce69ea625e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.607 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a namespace which is not needed anymore#033[00m
Oct  2 09:13:09 np0005466030 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[307970]: [NOTICE]   (307994) : haproxy version is 2.8.14-c23fe91
Oct  2 09:13:09 np0005466030 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[307970]: [NOTICE]   (307994) : path to executable is /usr/sbin/haproxy
Oct  2 09:13:09 np0005466030 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[307970]: [WARNING]  (307994) : Exiting Master process...
Oct  2 09:13:09 np0005466030 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[307970]: [WARNING]  (307994) : Exiting Master process...
Oct  2 09:13:09 np0005466030 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[307970]: [ALERT]    (307994) : Current worker (307997) exited with code 143 (Terminated)
Oct  2 09:13:09 np0005466030 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[307970]: [WARNING]  (307994) : All workers exited. Exiting... (0)
Oct  2 09:13:09 np0005466030 podman[308808]: 2025-10-02 13:13:09.164161367 +0000 UTC m=+0.176554095 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:13:09 np0005466030 systemd[1]: libpod-1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8.scope: Deactivated successfully.
Oct  2 09:13:09 np0005466030 podman[308811]: 2025-10-02 13:13:09.169386871 +0000 UTC m=+0.170558226 container died 1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:13:09 np0005466030 podman[308809]: 2025-10-02 13:13:09.406311421 +0000 UTC m=+0.406254748 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 09:13:09 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8-userdata-shm.mount: Deactivated successfully.
Oct  2 09:13:09 np0005466030 systemd[1]: var-lib-containers-storage-overlay-f39bcf2783cfa7733fec5ee8c9f0cd2433751285a50f27407623ad86aee7b446-merged.mount: Deactivated successfully.
Oct  2 09:13:09 np0005466030 podman[308811]: 2025-10-02 13:13:09.703539744 +0000 UTC m=+0.704711099 container cleanup 1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 09:13:09 np0005466030 systemd[1]: libpod-conmon-1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8.scope: Deactivated successfully.
Oct  2 09:13:09 np0005466030 nova_compute[230518]: 2025-10-02 13:13:09.933 2 DEBUG nova.network.neutron [req-42316382-e4e6-48d6-a418-22f65e49d84c req-4ce1ac0a-cd47-4542-9b9b-2d76e5feb490 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Updated VIF entry in instance network info cache for port 9fd04bf3-73c2-4224-81ff-32ef5640604b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:13:09 np0005466030 nova_compute[230518]: 2025-10-02 13:13:09.934 2 DEBUG nova.network.neutron [req-42316382-e4e6-48d6-a418-22f65e49d84c req-4ce1ac0a-cd47-4542-9b9b-2d76e5feb490 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Updating instance_info_cache with network_info: [{"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:13:09 np0005466030 nova_compute[230518]: 2025-10-02 13:13:09.950 2 DEBUG oslo_concurrency.lockutils [req-42316382-e4e6-48d6-a418-22f65e49d84c req-4ce1ac0a-cd47-4542-9b9b-2d76e5feb490 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:13:10 np0005466030 podman[308909]: 2025-10-02 13:13:10.037935503 +0000 UTC m=+0.296630504 container remove 1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:13:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:10.046 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[45cd89d0-6156-4c3f-9443-d8becf252ed3]: (4, ('Thu Oct  2 01:13:08 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a (1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8)\n1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8\nThu Oct  2 01:13:09 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a (1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8)\n1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:10.048 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[37d6603f-f501-48ff-b15b-f525021d943d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:10.049 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdac20349-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:13:10 np0005466030 kernel: tapdac20349-40: left promiscuous mode
Oct  2 09:13:10 np0005466030 nova_compute[230518]: 2025-10-02 13:13:10.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:10.061 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[81da0f3a-c823-445a-aa74-d71c77b5e5fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:10 np0005466030 nova_compute[230518]: 2025-10-02 13:13:10.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:10.091 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[19a24845-ba57-41ae-86a9-c9ebc03d3c07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:10.094 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cd369f6e-e270-464b-945a-235f16112cde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:10.121 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc76cf1-8e3c-4d54-ade2-e628ad51ef73]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 855962, 'reachable_time': 43297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308946, 'error': None, 'target': 'ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:10.125 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:13:10 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:10.125 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[37f3cba8-e23c-4d37-85be-81a7ceba34c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:13:10 np0005466030 systemd[1]: run-netns-ovnmeta\x2ddac20349\x2d4f21\x2d4aeb\x2da4a7\x2dd775590cb44a.mount: Deactivated successfully.
Oct  2 09:13:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:10.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:10.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:10 np0005466030 nova_compute[230518]: 2025-10-02 13:13:10.501 2 DEBUG nova.compute.manager [req-c4c3aad2-e767-4360-ac48-3959cee8334f req-60e2918f-19b1-4bcc-94e2-8b46f8b75917 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received event network-vif-plugged-9fd04bf3-73c2-4224-81ff-32ef5640604b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:10 np0005466030 nova_compute[230518]: 2025-10-02 13:13:10.502 2 DEBUG oslo_concurrency.lockutils [req-c4c3aad2-e767-4360-ac48-3959cee8334f req-60e2918f-19b1-4bcc-94e2-8b46f8b75917 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:10 np0005466030 nova_compute[230518]: 2025-10-02 13:13:10.502 2 DEBUG oslo_concurrency.lockutils [req-c4c3aad2-e767-4360-ac48-3959cee8334f req-60e2918f-19b1-4bcc-94e2-8b46f8b75917 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:10 np0005466030 nova_compute[230518]: 2025-10-02 13:13:10.502 2 DEBUG oslo_concurrency.lockutils [req-c4c3aad2-e767-4360-ac48-3959cee8334f req-60e2918f-19b1-4bcc-94e2-8b46f8b75917 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:10 np0005466030 nova_compute[230518]: 2025-10-02 13:13:10.502 2 DEBUG nova.compute.manager [req-c4c3aad2-e767-4360-ac48-3959cee8334f req-60e2918f-19b1-4bcc-94e2-8b46f8b75917 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] No waiting events found dispatching network-vif-plugged-9fd04bf3-73c2-4224-81ff-32ef5640604b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:13:10 np0005466030 nova_compute[230518]: 2025-10-02 13:13:10.503 2 WARNING nova.compute.manager [req-c4c3aad2-e767-4360-ac48-3959cee8334f req-60e2918f-19b1-4bcc-94e2-8b46f8b75917 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received unexpected event network-vif-plugged-9fd04bf3-73c2-4224-81ff-32ef5640604b for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:13:10 np0005466030 nova_compute[230518]: 2025-10-02 13:13:10.552 2 DEBUG nova.compute.manager [req-b65fdcf1-a7ac-4cc4-9168-eaed190a7882 req-ad11f538-6150-484e-ac6c-a04ef67a09dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received event network-vif-plugged-15cb070c-0f52-464f-a2b4-8597c15212e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:10 np0005466030 nova_compute[230518]: 2025-10-02 13:13:10.553 2 DEBUG oslo_concurrency.lockutils [req-b65fdcf1-a7ac-4cc4-9168-eaed190a7882 req-ad11f538-6150-484e-ac6c-a04ef67a09dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:10 np0005466030 nova_compute[230518]: 2025-10-02 13:13:10.555 2 DEBUG oslo_concurrency.lockutils [req-b65fdcf1-a7ac-4cc4-9168-eaed190a7882 req-ad11f538-6150-484e-ac6c-a04ef67a09dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:10 np0005466030 nova_compute[230518]: 2025-10-02 13:13:10.555 2 DEBUG oslo_concurrency.lockutils [req-b65fdcf1-a7ac-4cc4-9168-eaed190a7882 req-ad11f538-6150-484e-ac6c-a04ef67a09dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:10 np0005466030 nova_compute[230518]: 2025-10-02 13:13:10.556 2 DEBUG nova.compute.manager [req-b65fdcf1-a7ac-4cc4-9168-eaed190a7882 req-ad11f538-6150-484e-ac6c-a04ef67a09dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] No waiting events found dispatching network-vif-plugged-15cb070c-0f52-464f-a2b4-8597c15212e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:13:10 np0005466030 nova_compute[230518]: 2025-10-02 13:13:10.556 2 WARNING nova.compute.manager [req-b65fdcf1-a7ac-4cc4-9168-eaed190a7882 req-ad11f538-6150-484e-ac6c-a04ef67a09dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received unexpected event network-vif-plugged-15cb070c-0f52-464f-a2b4-8597c15212e9 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:13:10 np0005466030 nova_compute[230518]: 2025-10-02 13:13:10.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:11 np0005466030 nova_compute[230518]: 2025-10-02 13:13:11.405 2 INFO nova.virt.libvirt.driver [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Deleting instance files /var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c_del#033[00m
Oct  2 09:13:11 np0005466030 nova_compute[230518]: 2025-10-02 13:13:11.407 2 INFO nova.virt.libvirt.driver [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Deletion of /var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c_del complete#033[00m
Oct  2 09:13:11 np0005466030 nova_compute[230518]: 2025-10-02 13:13:11.418 2 INFO nova.virt.libvirt.driver [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Deleting instance files /var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf_del#033[00m
Oct  2 09:13:11 np0005466030 nova_compute[230518]: 2025-10-02 13:13:11.419 2 INFO nova.virt.libvirt.driver [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Deletion of /var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf_del complete#033[00m
Oct  2 09:13:11 np0005466030 nova_compute[230518]: 2025-10-02 13:13:11.529 2 INFO nova.compute.manager [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Took 5.19 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:13:11 np0005466030 nova_compute[230518]: 2025-10-02 13:13:11.530 2 DEBUG oslo.service.loopingcall [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:13:11 np0005466030 nova_compute[230518]: 2025-10-02 13:13:11.531 2 DEBUG nova.compute.manager [-] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:13:11 np0005466030 nova_compute[230518]: 2025-10-02 13:13:11.531 2 DEBUG nova.network.neutron [-] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:13:11 np0005466030 nova_compute[230518]: 2025-10-02 13:13:11.538 2 INFO nova.compute.manager [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Took 5.27 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:13:11 np0005466030 nova_compute[230518]: 2025-10-02 13:13:11.539 2 DEBUG oslo.service.loopingcall [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:13:11 np0005466030 nova_compute[230518]: 2025-10-02 13:13:11.539 2 DEBUG nova.compute.manager [-] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:13:11 np0005466030 nova_compute[230518]: 2025-10-02 13:13:11.540 2 DEBUG nova.network.neutron [-] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:13:11 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:13:11 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:13:11 np0005466030 nova_compute[230518]: 2025-10-02 13:13:11.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:12.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:12.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:12 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:13:12 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:13:12 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:13:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:13 np0005466030 nova_compute[230518]: 2025-10-02 13:13:13.711 2 DEBUG nova.network.neutron [-] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:13:13 np0005466030 nova_compute[230518]: 2025-10-02 13:13:13.737 2 DEBUG nova.network.neutron [-] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:13:13 np0005466030 nova_compute[230518]: 2025-10-02 13:13:13.741 2 INFO nova.compute.manager [-] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Took 2.21 seconds to deallocate network for instance.#033[00m
Oct  2 09:13:13 np0005466030 nova_compute[230518]: 2025-10-02 13:13:13.774 2 INFO nova.compute.manager [-] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Took 2.23 seconds to deallocate network for instance.#033[00m
Oct  2 09:13:13 np0005466030 nova_compute[230518]: 2025-10-02 13:13:13.805 2 DEBUG nova.compute.manager [req-0aef10e3-9bc8-4adf-8d29-f107013093bb req-42effe3e-aaf8-43aa-93ff-941047c81b93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received event network-vif-deleted-9fd04bf3-73c2-4224-81ff-32ef5640604b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:13 np0005466030 nova_compute[230518]: 2025-10-02 13:13:13.841 2 DEBUG oslo_concurrency.lockutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:13 np0005466030 nova_compute[230518]: 2025-10-02 13:13:13.841 2 DEBUG oslo_concurrency.lockutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:13 np0005466030 nova_compute[230518]: 2025-10-02 13:13:13.861 2 DEBUG oslo_concurrency.lockutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:13 np0005466030 nova_compute[230518]: 2025-10-02 13:13:13.900 2 DEBUG oslo_concurrency.processutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:14.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:13:14 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1903692744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:13:14 np0005466030 nova_compute[230518]: 2025-10-02 13:13:14.443 2 DEBUG oslo_concurrency.processutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:14 np0005466030 nova_compute[230518]: 2025-10-02 13:13:14.452 2 DEBUG nova.compute.provider_tree [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:13:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:14.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:14 np0005466030 nova_compute[230518]: 2025-10-02 13:13:14.509 2 DEBUG nova.scheduler.client.report [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:13:14 np0005466030 nova_compute[230518]: 2025-10-02 13:13:14.541 2 DEBUG oslo_concurrency.lockutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:14 np0005466030 nova_compute[230518]: 2025-10-02 13:13:14.546 2 DEBUG oslo_concurrency.lockutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:14 np0005466030 nova_compute[230518]: 2025-10-02 13:13:14.580 2 INFO nova.scheduler.client.report [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Deleted allocations for instance 5ecce258-097c-4a5a-9c44-087e8129ceaf#033[00m
Oct  2 09:13:14 np0005466030 nova_compute[230518]: 2025-10-02 13:13:14.597 2 DEBUG oslo_concurrency.processutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:14 np0005466030 nova_compute[230518]: 2025-10-02 13:13:14.642 2 DEBUG oslo_concurrency.lockutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:15 np0005466030 nova_compute[230518]: 2025-10-02 13:13:15.072 2 DEBUG oslo_concurrency.processutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:15 np0005466030 nova_compute[230518]: 2025-10-02 13:13:15.083 2 DEBUG nova.compute.provider_tree [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:13:15 np0005466030 nova_compute[230518]: 2025-10-02 13:13:15.111 2 DEBUG nova.scheduler.client.report [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:13:15 np0005466030 nova_compute[230518]: 2025-10-02 13:13:15.133 2 DEBUG oslo_concurrency.lockutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:15 np0005466030 nova_compute[230518]: 2025-10-02 13:13:15.158 2 INFO nova.scheduler.client.report [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Deleted allocations for instance 658821a7-5b97-43ad-8fe2-46e5303cf56c#033[00m
Oct  2 09:13:15 np0005466030 nova_compute[230518]: 2025-10-02 13:13:15.262 2 DEBUG oslo_concurrency.lockutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:15 np0005466030 nova_compute[230518]: 2025-10-02 13:13:15.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:16 np0005466030 nova_compute[230518]: 2025-10-02 13:13:15.999 2 DEBUG nova.compute.manager [req-bd5cbfcb-e5d3-4cdf-a7c5-3f171604b3f7 req-75908910-649d-4d7c-a512-6858af7c2528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received event network-vif-deleted-15cb070c-0f52-464f-a2b4-8597c15212e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:13:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:16.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:16.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:16 np0005466030 nova_compute[230518]: 2025-10-02 13:13:16.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:18.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:18.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:19 np0005466030 nova_compute[230518]: 2025-10-02 13:13:19.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:19 np0005466030 nova_compute[230518]: 2025-10-02 13:13:19.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:13:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:13:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:20.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:20.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:20 np0005466030 nova_compute[230518]: 2025-10-02 13:13:20.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:21 np0005466030 nova_compute[230518]: 2025-10-02 13:13:21.704 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410786.7021995, 658821a7-5b97-43ad-8fe2-46e5303cf56c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:13:21 np0005466030 nova_compute[230518]: 2025-10-02 13:13:21.705 2 INFO nova.compute.manager [-] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:13:21 np0005466030 nova_compute[230518]: 2025-10-02 13:13:21.729 2 DEBUG nova.compute.manager [None req-fb7e082b-0002-4d10-9a78-76278d269e87 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:13:21 np0005466030 nova_compute[230518]: 2025-10-02 13:13:21.777 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410786.773232, 5ecce258-097c-4a5a-9c44-087e8129ceaf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:13:21 np0005466030 nova_compute[230518]: 2025-10-02 13:13:21.778 2 INFO nova.compute.manager [-] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:13:21 np0005466030 nova_compute[230518]: 2025-10-02 13:13:21.819 2 DEBUG nova.compute.manager [None req-3ac42c07-6608-477e-9149-f10f6ba9437e - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:13:21 np0005466030 nova_compute[230518]: 2025-10-02 13:13:21.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:22.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:22.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:24.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:24.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:25 np0005466030 nova_compute[230518]: 2025-10-02 13:13:25.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:25.968 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:25.969 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:25.969 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:26.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:26.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:26 np0005466030 nova_compute[230518]: 2025-10-02 13:13:26.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:27 np0005466030 nova_compute[230518]: 2025-10-02 13:13:27.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:27 np0005466030 nova_compute[230518]: 2025-10-02 13:13:27.080 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:27 np0005466030 nova_compute[230518]: 2025-10-02 13:13:27.081 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:27 np0005466030 nova_compute[230518]: 2025-10-02 13:13:27.081 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:27 np0005466030 nova_compute[230518]: 2025-10-02 13:13:27.081 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:13:27 np0005466030 nova_compute[230518]: 2025-10-02 13:13:27.081 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:13:27 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/57139510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:13:27 np0005466030 nova_compute[230518]: 2025-10-02 13:13:27.579 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:27 np0005466030 nova_compute[230518]: 2025-10-02 13:13:27.848 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:13:27 np0005466030 nova_compute[230518]: 2025-10-02 13:13:27.850 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4146MB free_disk=20.942596435546875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:13:27 np0005466030 nova_compute[230518]: 2025-10-02 13:13:27.850 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:13:27 np0005466030 nova_compute[230518]: 2025-10-02 13:13:27.851 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:13:27 np0005466030 nova_compute[230518]: 2025-10-02 13:13:27.941 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:13:27 np0005466030 nova_compute[230518]: 2025-10-02 13:13:27.941 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:13:27 np0005466030 nova_compute[230518]: 2025-10-02 13:13:27.961 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:28.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:13:28 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1871557515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:13:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:28.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:28 np0005466030 nova_compute[230518]: 2025-10-02 13:13:28.503 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:13:28 np0005466030 nova_compute[230518]: 2025-10-02 13:13:28.515 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:13:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:28 np0005466030 nova_compute[230518]: 2025-10-02 13:13:28.544 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:13:28 np0005466030 nova_compute[230518]: 2025-10-02 13:13:28.586 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:13:28 np0005466030 nova_compute[230518]: 2025-10-02 13:13:28.587 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:13:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:30.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:30.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:30 np0005466030 podman[309258]: 2025-10-02 13:13:30.835754972 +0000 UTC m=+0.075394028 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 09:13:30 np0005466030 podman[309257]: 2025-10-02 13:13:30.900763494 +0000 UTC m=+0.140637348 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  2 09:13:30 np0005466030 nova_compute[230518]: 2025-10-02 13:13:30.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:31 np0005466030 nova_compute[230518]: 2025-10-02 13:13:31.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:32.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:32.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:33 np0005466030 nova_compute[230518]: 2025-10-02 13:13:33.582 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:33 np0005466030 nova_compute[230518]: 2025-10-02 13:13:33.583 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:33 np0005466030 nova_compute[230518]: 2025-10-02 13:13:33.583 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:33 np0005466030 nova_compute[230518]: 2025-10-02 13:13:33.583 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:33 np0005466030 nova_compute[230518]: 2025-10-02 13:13:33.583 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:13:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e377 e377: 3 total, 3 up, 3 in
Oct  2 09:13:34 np0005466030 nova_compute[230518]: 2025-10-02 13:13:34.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:13:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:34.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:34.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:35 np0005466030 nova_compute[230518]: 2025-10-02 13:13:35.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:13:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:36.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:36.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:36 np0005466030 nova_compute[230518]: 2025-10-02 13:13:36.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:37 np0005466030 nova_compute[230518]: 2025-10-02 13:13:37.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:13:37 np0005466030 nova_compute[230518]: 2025-10-02 13:13:37.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 09:13:37 np0005466030 nova_compute[230518]: 2025-10-02 13:13:37.091 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  2 09:13:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e378 e378: 3 total, 3 up, 3 in
Oct  2 09:13:38 np0005466030 nova_compute[230518]: 2025-10-02 13:13:38.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:13:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:38.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:38.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:39 np0005466030 nova_compute[230518]: 2025-10-02 13:13:39.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:13:39 np0005466030 podman[309301]: 2025-10-02 13:13:39.826855345 +0000 UTC m=+0.067883802 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 09:13:39 np0005466030 podman[309302]: 2025-10-02 13:13:39.839953497 +0000 UTC m=+0.075222284 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:13:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e379 e379: 3 total, 3 up, 3 in
Oct  2 09:13:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:40.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:40.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:40 np0005466030 nova_compute[230518]: 2025-10-02 13:13:40.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e380 e380: 3 total, 3 up, 3 in
Oct  2 09:13:41 np0005466030 nova_compute[230518]: 2025-10-02 13:13:41.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:42.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:42.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:44.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:44.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:45 np0005466030 nova_compute[230518]: 2025-10-02 13:13:45.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:46.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:46.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:46 np0005466030 nova_compute[230518]: 2025-10-02 13:13:46.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:48.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:48.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #148. Immutable memtables: 0.
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.745411) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 148
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828745451, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2443, "num_deletes": 255, "total_data_size": 5694904, "memory_usage": 5781024, "flush_reason": "Manual Compaction"}
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #149: started
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828770483, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 149, "file_size": 3723786, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71279, "largest_seqno": 73717, "table_properties": {"data_size": 3713812, "index_size": 6339, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21332, "raw_average_key_size": 20, "raw_value_size": 3693614, "raw_average_value_size": 3614, "num_data_blocks": 274, "num_entries": 1022, "num_filter_entries": 1022, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410630, "oldest_key_time": 1759410630, "file_creation_time": 1759410828, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 25174 microseconds, and 13544 cpu microseconds.
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.770568) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #149: 3723786 bytes OK
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.770616) [db/memtable_list.cc:519] [default] Level-0 commit table #149 started
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.772864) [db/memtable_list.cc:722] [default] Level-0 commit table #149: memtable #1 done
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.772891) EVENT_LOG_v1 {"time_micros": 1759410828772882, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.772923) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 5683980, prev total WAL file size 5683980, number of live WAL files 2.
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000145.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.776111) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [149(3636KB)], [147(10054KB)]
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828776192, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [149], "files_L6": [147], "score": -1, "input_data_size": 14019569, "oldest_snapshot_seqno": -1}
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #150: 9540 keys, 12075399 bytes, temperature: kUnknown
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828876994, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 150, "file_size": 12075399, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12014051, "index_size": 36403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23877, "raw_key_size": 251203, "raw_average_key_size": 26, "raw_value_size": 11847196, "raw_average_value_size": 1241, "num_data_blocks": 1385, "num_entries": 9540, "num_filter_entries": 9540, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410828, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 150, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.877435) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 12075399 bytes
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.878812) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.9 rd, 119.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.8 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 10070, records dropped: 530 output_compression: NoCompression
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.878833) EVENT_LOG_v1 {"time_micros": 1759410828878822, "job": 94, "event": "compaction_finished", "compaction_time_micros": 100935, "compaction_time_cpu_micros": 55521, "output_level": 6, "num_output_files": 1, "total_output_size": 12075399, "num_input_records": 10070, "num_output_records": 9540, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828880087, "job": 94, "event": "table_file_deletion", "file_number": 149}
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000147.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828882643, "job": 94, "event": "table_file_deletion", "file_number": 147}
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.775928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.882770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.882785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.882789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.882794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:13:48 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.882798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:13:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:50.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:50.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:50 np0005466030 nova_compute[230518]: 2025-10-02 13:13:50.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e381 e381: 3 total, 3 up, 3 in
Oct  2 09:13:51 np0005466030 nova_compute[230518]: 2025-10-02 13:13:51.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:52.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:52.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:54.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:54.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:54 np0005466030 nova_compute[230518]: 2025-10-02 13:13:54.727 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:13:54 np0005466030 nova_compute[230518]: 2025-10-02 13:13:54.728 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:13:54 np0005466030 nova_compute[230518]: 2025-10-02 13:13:54.743 2 DEBUG nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  2 09:13:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:54.771 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  2 09:13:54 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:54.772 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  2 09:13:54 np0005466030 nova_compute[230518]: 2025-10-02 13:13:54.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:54 np0005466030 nova_compute[230518]: 2025-10-02 13:13:54.857 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:13:54 np0005466030 nova_compute[230518]: 2025-10-02 13:13:54.857 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:13:54 np0005466030 nova_compute[230518]: 2025-10-02 13:13:54.864 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  2 09:13:54 np0005466030 nova_compute[230518]: 2025-10-02 13:13:54.864 2 INFO nova.compute.claims [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Claim successful on node compute-1.ctlplane.example.com
Oct  2 09:13:54 np0005466030 nova_compute[230518]: 2025-10-02 13:13:54.989 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:13:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:13:55 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4156363450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:13:55 np0005466030 nova_compute[230518]: 2025-10-02 13:13:55.620 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:13:55 np0005466030 nova_compute[230518]: 2025-10-02 13:13:55.626 2 DEBUG nova.compute.provider_tree [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:13:55 np0005466030 nova_compute[230518]: 2025-10-02 13:13:55.673 2 DEBUG nova.scheduler.client.report [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:13:55 np0005466030 nova_compute[230518]: 2025-10-02 13:13:55.850 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:13:55 np0005466030 nova_compute[230518]: 2025-10-02 13:13:55.851 2 DEBUG nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  2 09:13:55 np0005466030 nova_compute[230518]: 2025-10-02 13:13:55.898 2 DEBUG nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  2 09:13:55 np0005466030 nova_compute[230518]: 2025-10-02 13:13:55.898 2 DEBUG nova.network.neutron [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  2 09:13:55 np0005466030 nova_compute[230518]: 2025-10-02 13:13:55.928 2 INFO nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  2 09:13:55 np0005466030 nova_compute[230518]: 2025-10-02 13:13:55.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:56 np0005466030 nova_compute[230518]: 2025-10-02 13:13:56.007 2 DEBUG nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  2 09:13:56 np0005466030 nova_compute[230518]: 2025-10-02 13:13:56.111 2 DEBUG nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:13:56 np0005466030 nova_compute[230518]: 2025-10-02 13:13:56.114 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:13:56 np0005466030 nova_compute[230518]: 2025-10-02 13:13:56.114 2 INFO nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Creating image(s)#033[00m
Oct  2 09:13:56 np0005466030 nova_compute[230518]: 2025-10-02 13:13:56.166 2 DEBUG nova.storage.rbd_utils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:13:56 np0005466030 nova_compute[230518]: 2025-10-02 13:13:56.215 2 DEBUG nova.storage.rbd_utils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:13:56 np0005466030 nova_compute[230518]: 2025-10-02 13:13:56.253 2 DEBUG nova.storage.rbd_utils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:13:56 np0005466030 nova_compute[230518]: 2025-10-02 13:13:56.259 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:13:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:13:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:56.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:13:56 np0005466030 nova_compute[230518]: 2025-10-02 13:13:56.362 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:13:56 np0005466030 nova_compute[230518]: 2025-10-02 13:13:56.364 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:13:56 np0005466030 nova_compute[230518]: 2025-10-02 13:13:56.365 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:13:56 np0005466030 nova_compute[230518]: 2025-10-02 13:13:56.366 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:13:56 np0005466030 nova_compute[230518]: 2025-10-02 13:13:56.418 2 DEBUG nova.storage.rbd_utils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  2 09:13:56 np0005466030 nova_compute[230518]: 2025-10-02 13:13:56.426 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:13:56 np0005466030 nova_compute[230518]: 2025-10-02 13:13:56.493 2 DEBUG nova.policy [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '37083e5fd56c447cb409b86d6394dd43', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7f5376733aec4630998da8d11db76561', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  2 09:13:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:56.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:56 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:13:56.774 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  2 09:13:56 np0005466030 nova_compute[230518]: 2025-10-02 13:13:56.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:13:57 np0005466030 nova_compute[230518]: 2025-10-02 13:13:57.376 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.950s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:13:57 np0005466030 nova_compute[230518]: 2025-10-02 13:13:57.490 2 DEBUG nova.storage.rbd_utils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] resizing rbd image 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  2 09:13:57 np0005466030 nova_compute[230518]: 2025-10-02 13:13:57.892 2 DEBUG nova.objects.instance [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'migration_context' on Instance uuid 3dbb48be-2da9-48eb-814a-94eac9968d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  2 09:13:57 np0005466030 nova_compute[230518]: 2025-10-02 13:13:57.911 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  2 09:13:57 np0005466030 nova_compute[230518]: 2025-10-02 13:13:57.912 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Ensure instance console log exists: /var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  2 09:13:57 np0005466030 nova_compute[230518]: 2025-10-02 13:13:57.913 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:13:57 np0005466030 nova_compute[230518]: 2025-10-02 13:13:57.913 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:13:57 np0005466030 nova_compute[230518]: 2025-10-02 13:13:57.914 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:13:57 np0005466030 nova_compute[230518]: 2025-10-02 13:13:57.925 2 DEBUG nova.network.neutron [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Successfully created port: 27cf437c-6f1f-4511-8b2a-3d68dd116906 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  2 09:13:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:58.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:13:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:13:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:13:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:58.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:13:58 np0005466030 nova_compute[230518]: 2025-10-02 13:13:58.838 2 DEBUG nova.network.neutron [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Successfully updated port: 27cf437c-6f1f-4511-8b2a-3d68dd116906 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  2 09:13:58 np0005466030 nova_compute[230518]: 2025-10-02 13:13:58.855 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:13:58 np0005466030 nova_compute[230518]: 2025-10-02 13:13:58.855 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquired lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:13:58 np0005466030 nova_compute[230518]: 2025-10-02 13:13:58.855 2 DEBUG nova.network.neutron [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  2 09:13:58 np0005466030 nova_compute[230518]: 2025-10-02 13:13:58.935 2 DEBUG nova.compute.manager [req-02cb6fc8-8065-4e57-b71a-8220b2836c09 req-a2c9a283-5378-4f9e-af35-481a31009aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received event network-changed-27cf437c-6f1f-4511-8b2a-3d68dd116906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  2 09:13:58 np0005466030 nova_compute[230518]: 2025-10-02 13:13:58.936 2 DEBUG nova.compute.manager [req-02cb6fc8-8065-4e57-b71a-8220b2836c09 req-a2c9a283-5378-4f9e-af35-481a31009aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Refreshing instance network info cache due to event network-changed-27cf437c-6f1f-4511-8b2a-3d68dd116906. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  2 09:13:58 np0005466030 nova_compute[230518]: 2025-10-02 13:13:58.936 2 DEBUG oslo_concurrency.lockutils [req-02cb6fc8-8065-4e57-b71a-8220b2836c09 req-a2c9a283-5378-4f9e-af35-481a31009aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.022 2 DEBUG nova.network.neutron [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.906 2 DEBUG nova.network.neutron [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Updating instance_info_cache with network_info: [{"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.921 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Releasing lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.921 2 DEBUG nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Instance network_info: |[{"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.922 2 DEBUG oslo_concurrency.lockutils [req-02cb6fc8-8065-4e57-b71a-8220b2836c09 req-a2c9a283-5378-4f9e-af35-481a31009aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.922 2 DEBUG nova.network.neutron [req-02cb6fc8-8065-4e57-b71a-8220b2836c09 req-a2c9a283-5378-4f9e-af35-481a31009aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Refreshing network info cache for port 27cf437c-6f1f-4511-8b2a-3d68dd116906 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.927 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Start _get_guest_xml network_info=[{"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.936 2 WARNING nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.948 2 DEBUG nova.virt.libvirt.host [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.949 2 DEBUG nova.virt.libvirt.host [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.956 2 DEBUG nova.virt.libvirt.host [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.957 2 DEBUG nova.virt.libvirt.host [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.959 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.959 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.960 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.961 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.961 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.962 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.962 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.963 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.963 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.964 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.964 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.965 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  2 09:13:59 np0005466030 nova_compute[230518]: 2025-10-02 13:13:59.970 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  2 09:14:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:14:00 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3992521573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:14:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:00.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:14:00 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3938846710' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:14:00 np0005466030 nova_compute[230518]: 2025-10-02 13:14:00.471 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:00 np0005466030 nova_compute[230518]: 2025-10-02 13:14:00.522 2 DEBUG nova.storage.rbd_utils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:14:00 np0005466030 nova_compute[230518]: 2025-10-02 13:14:00.528 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:14:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:00.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:14:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e382 e382: 3 total, 3 up, 3 in
Oct  2 09:14:00 np0005466030 nova_compute[230518]: 2025-10-02 13:14:00.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:14:00 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3755106332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.027 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.030 2 DEBUG nova.virt.libvirt.vif [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:13:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1054588037',display_name='tempest-AttachVolumeNegativeTest-server-1054588037',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1054588037',id=208,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQBraTImbOHfTH+zcfBRFJyePeIqcOFlGsPR6ZMRcMYMVZGuN9g/lIgLTbs1qdUo4qDQMWoBvweu9Ok7nksgXVqglFfrHDG04CgWRfT+7Tk6OyYqf+SJMw2cYyCygZmlA==',key_name='tempest-keypair-212526163',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f5376733aec4630998da8d11db76561',ramdisk_id='',reservation_id='r-tfow0fvq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1084646737',owner_user_name='tempest-AttachVolumeNegativeTest-1084646737-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:13:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37083e5fd56c447cb409b86d6394dd43',uuid=3dbb48be-2da9-48eb-814a-94eac9968d0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.031 2 DEBUG nova.network.os_vif_util [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converting VIF {"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.032 2 DEBUG nova.network.os_vif_util [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=27cf437c-6f1f-4511-8b2a-3d68dd116906,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27cf437c-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.035 2 DEBUG nova.objects.instance [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3dbb48be-2da9-48eb-814a-94eac9968d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.065 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:14:01 np0005466030 nova_compute[230518]:  <uuid>3dbb48be-2da9-48eb-814a-94eac9968d0f</uuid>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:  <name>instance-000000d0</name>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <nova:name>tempest-AttachVolumeNegativeTest-server-1054588037</nova:name>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:13:59</nova:creationTime>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:14:01 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:        <nova:user uuid="37083e5fd56c447cb409b86d6394dd43">tempest-AttachVolumeNegativeTest-1084646737-project-member</nova:user>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:        <nova:project uuid="7f5376733aec4630998da8d11db76561">tempest-AttachVolumeNegativeTest-1084646737</nova:project>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:        <nova:port uuid="27cf437c-6f1f-4511-8b2a-3d68dd116906">
Oct  2 09:14:01 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <entry name="serial">3dbb48be-2da9-48eb-814a-94eac9968d0f</entry>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <entry name="uuid">3dbb48be-2da9-48eb-814a-94eac9968d0f</entry>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/3dbb48be-2da9-48eb-814a-94eac9968d0f_disk">
Oct  2 09:14:01 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:14:01 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/3dbb48be-2da9-48eb-814a-94eac9968d0f_disk.config">
Oct  2 09:14:01 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:14:01 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:38:cb:e4"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <target dev="tap27cf437c-6f"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f/console.log" append="off"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:14:01 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:14:01 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:14:01 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:14:01 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.068 2 DEBUG nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Preparing to wait for external event network-vif-plugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.069 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.069 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.070 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.071 2 DEBUG nova.virt.libvirt.vif [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:13:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1054588037',display_name='tempest-AttachVolumeNegativeTest-server-1054588037',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1054588037',id=208,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQBraTImbOHfTH+zcfBRFJyePeIqcOFlGsPR6ZMRcMYMVZGuN9g/lIgLTbs1qdUo4qDQMWoBvweu9Ok7nksgXVqglFfrHDG04CgWRfT+7Tk6OyYqf+SJMw2cYyCygZmlA==',key_name='tempest-keypair-212526163',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f5376733aec4630998da8d11db76561',ramdisk_id='',reservation_id='r-tfow0fvq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1084646737',owner_user_name='tempest-AttachVolumeNegativeTest-1084646737-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:13:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37083e5fd56c447cb409b86d6394dd43',uuid=3dbb48be-2da9-48eb-814a-94eac9968d0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.072 2 DEBUG nova.network.os_vif_util [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converting VIF {"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.073 2 DEBUG nova.network.os_vif_util [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=27cf437c-6f1f-4511-8b2a-3d68dd116906,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27cf437c-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.074 2 DEBUG os_vif [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=27cf437c-6f1f-4511-8b2a-3d68dd116906,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27cf437c-6f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.076 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.076 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.081 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27cf437c-6f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.082 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27cf437c-6f, col_values=(('external_ids', {'iface-id': '27cf437c-6f1f-4511-8b2a-3d68dd116906', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:cb:e4', 'vm-uuid': '3dbb48be-2da9-48eb-814a-94eac9968d0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:01 np0005466030 NetworkManager[44960]: <info>  [1759410841.0849] manager: (tap27cf437c-6f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.094 2 INFO os_vif [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=27cf437c-6f1f-4511-8b2a-3d68dd116906,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27cf437c-6f')#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.153 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.154 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.154 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No VIF found with MAC fa:16:3e:38:cb:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.155 2 INFO nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Using config drive#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.201 2 DEBUG nova.storage.rbd_utils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.526 2 INFO nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Creating config drive at /var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f/disk.config#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.533 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd9yqxl8y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.692 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd9yqxl8y" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.736 2 DEBUG nova.storage.rbd_utils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.741 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f/disk.config 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.786 2 DEBUG nova.network.neutron [req-02cb6fc8-8065-4e57-b71a-8220b2836c09 req-a2c9a283-5378-4f9e-af35-481a31009aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Updated VIF entry in instance network info cache for port 27cf437c-6f1f-4511-8b2a-3d68dd116906. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.787 2 DEBUG nova.network.neutron [req-02cb6fc8-8065-4e57-b71a-8220b2836c09 req-a2c9a283-5378-4f9e-af35-481a31009aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Updating instance_info_cache with network_info: [{"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:14:01 np0005466030 nova_compute[230518]: 2025-10-02 13:14:01.819 2 DEBUG oslo_concurrency.lockutils [req-02cb6fc8-8065-4e57-b71a-8220b2836c09 req-a2c9a283-5378-4f9e-af35-481a31009aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:14:01 np0005466030 podman[309635]: 2025-10-02 13:14:01.821116032 +0000 UTC m=+0.058147958 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:14:01 np0005466030 podman[309631]: 2025-10-02 13:14:01.852226148 +0000 UTC m=+0.090833103 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  2 09:14:02 np0005466030 nova_compute[230518]: 2025-10-02 13:14:02.015 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f/disk.config 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:02 np0005466030 nova_compute[230518]: 2025-10-02 13:14:02.016 2 INFO nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Deleting local config drive /var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f/disk.config because it was imported into RBD.#033[00m
Oct  2 09:14:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e383 e383: 3 total, 3 up, 3 in
Oct  2 09:14:02 np0005466030 kernel: tap27cf437c-6f: entered promiscuous mode
Oct  2 09:14:02 np0005466030 NetworkManager[44960]: <info>  [1759410842.0834] manager: (tap27cf437c-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/389)
Oct  2 09:14:02 np0005466030 ovn_controller[129257]: 2025-10-02T13:14:02Z|00846|binding|INFO|Claiming lport 27cf437c-6f1f-4511-8b2a-3d68dd116906 for this chassis.
Oct  2 09:14:02 np0005466030 ovn_controller[129257]: 2025-10-02T13:14:02Z|00847|binding|INFO|27cf437c-6f1f-4511-8b2a-3d68dd116906: Claiming fa:16:3e:38:cb:e4 10.100.0.7
Oct  2 09:14:02 np0005466030 nova_compute[230518]: 2025-10-02 13:14:02.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:02 np0005466030 nova_compute[230518]: 2025-10-02 13:14:02.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.101 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:cb:e4 10.100.0.7'], port_security=['fa:16:3e:38:cb:e4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3dbb48be-2da9-48eb-814a-94eac9968d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f5376733aec4630998da8d11db76561', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5299c659-7804-482f-bd2a-becd049c9d51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9548860-2222-48ea-9270-42ff9a0246f7, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=27cf437c-6f1f-4511-8b2a-3d68dd116906) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.103 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 27cf437c-6f1f-4511-8b2a-3d68dd116906 in datapath bc02aa54-d19f-4274-8d92-cbabe7917dd9 bound to our chassis#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.107 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bc02aa54-d19f-4274-8d92-cbabe7917dd9#033[00m
Oct  2 09:14:02 np0005466030 systemd-machined[188247]: New machine qemu-96-instance-000000d0.
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.120 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d21a3689-79bc-45ac-b209-d331ea63813f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.122 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbc02aa54-d1 in ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.123 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbc02aa54-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.124 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[894710bb-911c-473b-b248-341226110ed5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.125 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[76088bb0-8a18-4203-a7e4-2bf2c9d776e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:02 np0005466030 systemd[1]: Started Virtual Machine qemu-96-instance-000000d0.
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.137 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[854e6019-f1c5-4300-bff3-85c1cbeed4c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.164 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[11300918-4917-4365-ab2f-73eaf8bda82c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:02 np0005466030 systemd-udevd[309711]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:14:02 np0005466030 ovn_controller[129257]: 2025-10-02T13:14:02Z|00848|binding|INFO|Setting lport 27cf437c-6f1f-4511-8b2a-3d68dd116906 ovn-installed in OVS
Oct  2 09:14:02 np0005466030 ovn_controller[129257]: 2025-10-02T13:14:02Z|00849|binding|INFO|Setting lport 27cf437c-6f1f-4511-8b2a-3d68dd116906 up in Southbound
Oct  2 09:14:02 np0005466030 nova_compute[230518]: 2025-10-02 13:14:02.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:02 np0005466030 NetworkManager[44960]: <info>  [1759410842.1837] device (tap27cf437c-6f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:14:02 np0005466030 NetworkManager[44960]: <info>  [1759410842.1846] device (tap27cf437c-6f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.208 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4a5086be-af8a-4d15-b120-dea047c42a62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:02 np0005466030 NetworkManager[44960]: <info>  [1759410842.2152] manager: (tapbc02aa54-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/390)
Oct  2 09:14:02 np0005466030 systemd-udevd[309717]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.216 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[778f30f5-57fd-4892-a419-55f4d90c5238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.276 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3d4188-6500-4668-9ef5-013835b9c125]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.282 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e56404db-317e-4d9f-90b8-b66b246a479a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:02 np0005466030 NetworkManager[44960]: <info>  [1759410842.3137] device (tapbc02aa54-d0): carrier: link connected
Oct  2 09:14:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:02.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.327 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2a172e14-bbf2-45c8-a682-500ac7c62803]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.351 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9461db0a-d8e5-4b42-9dc6-f330def8ba89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc02aa54-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:fc:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870587, 'reachable_time': 23381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309740, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.371 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dea145e6-0f5b-4253-8564-cd8a18146197]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feba:fc0a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 870587, 'tstamp': 870587}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309741, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.397 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8d84685e-cf6a-4d1e-b10b-45a235d1a689]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc02aa54-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:fc:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870587, 'reachable_time': 23381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309742, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:02 np0005466030 nova_compute[230518]: 2025-10-02 13:14:02.402 2 DEBUG nova.compute.manager [req-2c0cd58d-ab25-438f-aae0-767fc4af9a7e req-7e8d0bb8-8473-4e6a-a77d-abb84cf46089 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received event network-vif-plugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:14:02 np0005466030 nova_compute[230518]: 2025-10-02 13:14:02.402 2 DEBUG oslo_concurrency.lockutils [req-2c0cd58d-ab25-438f-aae0-767fc4af9a7e req-7e8d0bb8-8473-4e6a-a77d-abb84cf46089 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:02 np0005466030 nova_compute[230518]: 2025-10-02 13:14:02.402 2 DEBUG oslo_concurrency.lockutils [req-2c0cd58d-ab25-438f-aae0-767fc4af9a7e req-7e8d0bb8-8473-4e6a-a77d-abb84cf46089 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:02 np0005466030 nova_compute[230518]: 2025-10-02 13:14:02.403 2 DEBUG oslo_concurrency.lockutils [req-2c0cd58d-ab25-438f-aae0-767fc4af9a7e req-7e8d0bb8-8473-4e6a-a77d-abb84cf46089 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:02 np0005466030 nova_compute[230518]: 2025-10-02 13:14:02.403 2 DEBUG nova.compute.manager [req-2c0cd58d-ab25-438f-aae0-767fc4af9a7e req-7e8d0bb8-8473-4e6a-a77d-abb84cf46089 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Processing event network-vif-plugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.446 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[611211f6-5510-46dc-bf19-1cc78f115c2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.528 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[09666524-4a69-4d5b-a125-707c32eac871]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.529 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc02aa54-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.530 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.530 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc02aa54-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:02 np0005466030 kernel: tapbc02aa54-d0: entered promiscuous mode
Oct  2 09:14:02 np0005466030 NetworkManager[44960]: <info>  [1759410842.5343] manager: (tapbc02aa54-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/391)
Oct  2 09:14:02 np0005466030 nova_compute[230518]: 2025-10-02 13:14:02.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.539 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbc02aa54-d0, col_values=(('external_ids', {'iface-id': '05da4d4e-44a6-4aa1-b470-d9ad03ff2e45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:02 np0005466030 ovn_controller[129257]: 2025-10-02T13:14:02Z|00850|binding|INFO|Releasing lport 05da4d4e-44a6-4aa1-b470-d9ad03ff2e45 from this chassis (sb_readonly=0)
Oct  2 09:14:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:02.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:02 np0005466030 nova_compute[230518]: 2025-10-02 13:14:02.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.566 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bc02aa54-d19f-4274-8d92-cbabe7917dd9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bc02aa54-d19f-4274-8d92-cbabe7917dd9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.567 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[940c54a0-ab8b-4483-8447-1469fb71e2e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.568 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-bc02aa54-d19f-4274-8d92-cbabe7917dd9
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/bc02aa54-d19f-4274-8d92-cbabe7917dd9.pid.haproxy
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID bc02aa54-d19f-4274-8d92-cbabe7917dd9
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:14:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.568 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'env', 'PROCESS_TAG=haproxy-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bc02aa54-d19f-4274-8d92-cbabe7917dd9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:14:03 np0005466030 podman[309816]: 2025-10-02 13:14:02.928272627 +0000 UTC m=+0.027542006 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:14:03 np0005466030 podman[309816]: 2025-10-02 13:14:03.14971728 +0000 UTC m=+0.248986579 container create 3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:14:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e384 e384: 3 total, 3 up, 3 in
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.235 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410843.2347429, 3dbb48be-2da9-48eb-814a-94eac9968d0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.235 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] VM Started (Lifecycle Event)#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.237 2 DEBUG nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.241 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.246 2 INFO nova.virt.libvirt.driver [-] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Instance spawned successfully.#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.246 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:14:03 np0005466030 systemd[1]: Started libpod-conmon-3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a.scope.
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.253 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.257 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.267 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.268 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.268 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.268 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.269 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.269 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.272 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.272 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410843.2349257, 3dbb48be-2da9-48eb-814a-94eac9968d0f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.272 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:14:03 np0005466030 systemd[1]: Started libcrun container.
Oct  2 09:14:03 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/500922d7592b87d0de794352ea2980ab9aa55fc51cd3a81b2cc7670108d2967e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.301 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.306 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410843.2407436, 3dbb48be-2da9-48eb-814a-94eac9968d0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.306 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.322 2 INFO nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Took 7.21 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.323 2 DEBUG nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.327 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.333 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:14:03 np0005466030 podman[309816]: 2025-10-02 13:14:03.355041019 +0000 UTC m=+0.454310328 container init 3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.364 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:14:03 np0005466030 podman[309816]: 2025-10-02 13:14:03.366298672 +0000 UTC m=+0.465567961 container start 3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.388 2 INFO nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Took 8.56 seconds to build instance.#033[00m
Oct  2 09:14:03 np0005466030 nova_compute[230518]: 2025-10-02 13:14:03.404 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:03 np0005466030 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[309830]: [NOTICE]   (309834) : New worker (309836) forked
Oct  2 09:14:03 np0005466030 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[309830]: [NOTICE]   (309834) : Loading success.
Oct  2 09:14:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:04.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:04 np0005466030 nova_compute[230518]: 2025-10-02 13:14:04.510 2 DEBUG nova.compute.manager [req-57edbd7a-1c5b-4dd4-83e2-a636ee265745 req-4efdfc06-7560-49fc-bb06-3387e56de957 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received event network-vif-plugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:14:04 np0005466030 nova_compute[230518]: 2025-10-02 13:14:04.511 2 DEBUG oslo_concurrency.lockutils [req-57edbd7a-1c5b-4dd4-83e2-a636ee265745 req-4efdfc06-7560-49fc-bb06-3387e56de957 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:04 np0005466030 nova_compute[230518]: 2025-10-02 13:14:04.511 2 DEBUG oslo_concurrency.lockutils [req-57edbd7a-1c5b-4dd4-83e2-a636ee265745 req-4efdfc06-7560-49fc-bb06-3387e56de957 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:04 np0005466030 nova_compute[230518]: 2025-10-02 13:14:04.511 2 DEBUG oslo_concurrency.lockutils [req-57edbd7a-1c5b-4dd4-83e2-a636ee265745 req-4efdfc06-7560-49fc-bb06-3387e56de957 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:04 np0005466030 nova_compute[230518]: 2025-10-02 13:14:04.511 2 DEBUG nova.compute.manager [req-57edbd7a-1c5b-4dd4-83e2-a636ee265745 req-4efdfc06-7560-49fc-bb06-3387e56de957 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] No waiting events found dispatching network-vif-plugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:14:04 np0005466030 nova_compute[230518]: 2025-10-02 13:14:04.511 2 WARNING nova.compute.manager [req-57edbd7a-1c5b-4dd4-83e2-a636ee265745 req-4efdfc06-7560-49fc-bb06-3387e56de957 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received unexpected event network-vif-plugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:14:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:04.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e385 e385: 3 total, 3 up, 3 in
Oct  2 09:14:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:14:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2667773413' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:14:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:14:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2667773413' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:14:05 np0005466030 NetworkManager[44960]: <info>  [1759410845.4132] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Oct  2 09:14:05 np0005466030 NetworkManager[44960]: <info>  [1759410845.4144] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Oct  2 09:14:05 np0005466030 nova_compute[230518]: 2025-10-02 13:14:05.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:05 np0005466030 nova_compute[230518]: 2025-10-02 13:14:05.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:05 np0005466030 ovn_controller[129257]: 2025-10-02T13:14:05Z|00851|binding|INFO|Releasing lport 05da4d4e-44a6-4aa1-b470-d9ad03ff2e45 from this chassis (sb_readonly=0)
Oct  2 09:14:05 np0005466030 nova_compute[230518]: 2025-10-02 13:14:05.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:05 np0005466030 nova_compute[230518]: 2025-10-02 13:14:05.655 2 DEBUG nova.compute.manager [req-38293338-6af7-4e8d-ada0-e1b29ff1c293 req-30e07499-2850-4f1b-b878-8a974114150a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received event network-changed-27cf437c-6f1f-4511-8b2a-3d68dd116906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:14:05 np0005466030 nova_compute[230518]: 2025-10-02 13:14:05.655 2 DEBUG nova.compute.manager [req-38293338-6af7-4e8d-ada0-e1b29ff1c293 req-30e07499-2850-4f1b-b878-8a974114150a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Refreshing instance network info cache due to event network-changed-27cf437c-6f1f-4511-8b2a-3d68dd116906. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:14:05 np0005466030 nova_compute[230518]: 2025-10-02 13:14:05.656 2 DEBUG oslo_concurrency.lockutils [req-38293338-6af7-4e8d-ada0-e1b29ff1c293 req-30e07499-2850-4f1b-b878-8a974114150a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:14:05 np0005466030 nova_compute[230518]: 2025-10-02 13:14:05.656 2 DEBUG oslo_concurrency.lockutils [req-38293338-6af7-4e8d-ada0-e1b29ff1c293 req-30e07499-2850-4f1b-b878-8a974114150a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:14:05 np0005466030 nova_compute[230518]: 2025-10-02 13:14:05.656 2 DEBUG nova.network.neutron [req-38293338-6af7-4e8d-ada0-e1b29ff1c293 req-30e07499-2850-4f1b-b878-8a974114150a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Refreshing network info cache for port 27cf437c-6f1f-4511-8b2a-3d68dd116906 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:14:06 np0005466030 nova_compute[230518]: 2025-10-02 13:14:06.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:06 np0005466030 nova_compute[230518]: 2025-10-02 13:14:06.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:06.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:06.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:06 np0005466030 nova_compute[230518]: 2025-10-02 13:14:06.705 2 DEBUG nova.network.neutron [req-38293338-6af7-4e8d-ada0-e1b29ff1c293 req-30e07499-2850-4f1b-b878-8a974114150a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Updated VIF entry in instance network info cache for port 27cf437c-6f1f-4511-8b2a-3d68dd116906. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:14:06 np0005466030 nova_compute[230518]: 2025-10-02 13:14:06.705 2 DEBUG nova.network.neutron [req-38293338-6af7-4e8d-ada0-e1b29ff1c293 req-30e07499-2850-4f1b-b878-8a974114150a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Updating instance_info_cache with network_info: [{"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:14:06 np0005466030 nova_compute[230518]: 2025-10-02 13:14:06.746 2 DEBUG oslo_concurrency.lockutils [req-38293338-6af7-4e8d-ada0-e1b29ff1c293 req-30e07499-2850-4f1b-b878-8a974114150a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:14:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:14:07 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4268729036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:14:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:08.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:08.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e386 e386: 3 total, 3 up, 3 in
Oct  2 09:14:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:10.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:10.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:10 np0005466030 podman[309847]: 2025-10-02 13:14:10.843936239 +0000 UTC m=+0.088422098 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 09:14:10 np0005466030 podman[309846]: 2025-10-02 13:14:10.879863547 +0000 UTC m=+0.124354186 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 09:14:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 e387: 3 total, 3 up, 3 in
Oct  2 09:14:11 np0005466030 nova_compute[230518]: 2025-10-02 13:14:11.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:11 np0005466030 nova_compute[230518]: 2025-10-02 13:14:11.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:14:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:12.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:14:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:12.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:14.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:14.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:16 np0005466030 nova_compute[230518]: 2025-10-02 13:14:16.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:16 np0005466030 nova_compute[230518]: 2025-10-02 13:14:16.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:14:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:16.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:14:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:16.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:17 np0005466030 ovn_controller[129257]: 2025-10-02T13:14:17Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:38:cb:e4 10.100.0.7
Oct  2 09:14:17 np0005466030 ovn_controller[129257]: 2025-10-02T13:14:17Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:38:cb:e4 10.100.0.7
Oct  2 09:14:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:18.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:18.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:20.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:20.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:21 np0005466030 nova_compute[230518]: 2025-10-02 13:14:21.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:21 np0005466030 nova_compute[230518]: 2025-10-02 13:14:21.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:14:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:14:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:14:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:22.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:22.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:24.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:24.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:25.969 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:25.970 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:25.971 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:26 np0005466030 nova_compute[230518]: 2025-10-02 13:14:26.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:26 np0005466030 nova_compute[230518]: 2025-10-02 13:14:26.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:14:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:26.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:14:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:26.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:27 np0005466030 nova_compute[230518]: 2025-10-02 13:14:27.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:27 np0005466030 nova_compute[230518]: 2025-10-02 13:14:27.076 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:27 np0005466030 nova_compute[230518]: 2025-10-02 13:14:27.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:27 np0005466030 nova_compute[230518]: 2025-10-02 13:14:27.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:27 np0005466030 nova_compute[230518]: 2025-10-02 13:14:27.077 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:14:27 np0005466030 nova_compute[230518]: 2025-10-02 13:14:27.077 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:14:27 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/577872724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:14:27 np0005466030 nova_compute[230518]: 2025-10-02 13:14:27.784 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.707s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:27 np0005466030 nova_compute[230518]: 2025-10-02 13:14:27.887 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:14:27 np0005466030 nova_compute[230518]: 2025-10-02 13:14:27.887 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:14:28 np0005466030 nova_compute[230518]: 2025-10-02 13:14:28.057 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:14:28 np0005466030 nova_compute[230518]: 2025-10-02 13:14:28.059 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3987MB free_disk=20.888782501220703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:14:28 np0005466030 nova_compute[230518]: 2025-10-02 13:14:28.059 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:28 np0005466030 nova_compute[230518]: 2025-10-02 13:14:28.060 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:28 np0005466030 nova_compute[230518]: 2025-10-02 13:14:28.259 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3dbb48be-2da9-48eb-814a-94eac9968d0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:14:28 np0005466030 nova_compute[230518]: 2025-10-02 13:14:28.259 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:14:28 np0005466030 nova_compute[230518]: 2025-10-02 13:14:28.260 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:14:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:28.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:28 np0005466030 nova_compute[230518]: 2025-10-02 13:14:28.475 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:14:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:28.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:14:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:14:28 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3626442609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:14:28 np0005466030 nova_compute[230518]: 2025-10-02 13:14:28.966 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:28 np0005466030 nova_compute[230518]: 2025-10-02 13:14:28.974 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:14:29 np0005466030 nova_compute[230518]: 2025-10-02 13:14:29.007 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:14:29 np0005466030 nova_compute[230518]: 2025-10-02 13:14:29.048 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:14:29 np0005466030 nova_compute[230518]: 2025-10-02 13:14:29.049 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:30.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:14:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:30.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:14:31 np0005466030 nova_compute[230518]: 2025-10-02 13:14:31.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:31 np0005466030 nova_compute[230518]: 2025-10-02 13:14:31.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:32.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:32.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:32 np0005466030 podman[310064]: 2025-10-02 13:14:32.859234013 +0000 UTC m=+0.098512935 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Oct  2 09:14:32 np0005466030 podman[310063]: 2025-10-02 13:14:32.888378317 +0000 UTC m=+0.124214621 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 09:14:33 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:14:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:14:34 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/725074288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:14:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:34.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:34.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:34 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:14:36 np0005466030 nova_compute[230518]: 2025-10-02 13:14:36.045 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:36 np0005466030 nova_compute[230518]: 2025-10-02 13:14:36.046 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:36 np0005466030 nova_compute[230518]: 2025-10-02 13:14:36.046 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:36 np0005466030 nova_compute[230518]: 2025-10-02 13:14:36.046 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:36 np0005466030 nova_compute[230518]: 2025-10-02 13:14:36.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:36 np0005466030 nova_compute[230518]: 2025-10-02 13:14:36.047 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:14:36 np0005466030 nova_compute[230518]: 2025-10-02 13:14:36.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:36 np0005466030 nova_compute[230518]: 2025-10-02 13:14:36.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:14:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:36.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:14:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:36.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:37 np0005466030 nova_compute[230518]: 2025-10-02 13:14:37.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:37 np0005466030 nova_compute[230518]: 2025-10-02 13:14:37.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:14:37 np0005466030 nova_compute[230518]: 2025-10-02 13:14:37.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:14:37 np0005466030 nova_compute[230518]: 2025-10-02 13:14:37.388 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:14:37 np0005466030 nova_compute[230518]: 2025-10-02 13:14:37.388 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:14:37 np0005466030 nova_compute[230518]: 2025-10-02 13:14:37.389 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:14:37 np0005466030 nova_compute[230518]: 2025-10-02 13:14:37.389 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3dbb48be-2da9-48eb-814a-94eac9968d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:14:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:38.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:14:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:14:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:38.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:14:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:39 np0005466030 nova_compute[230518]: 2025-10-02 13:14:39.131 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Updating instance_info_cache with network_info: [{"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:14:39 np0005466030 nova_compute[230518]: 2025-10-02 13:14:39.162 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:14:39 np0005466030 nova_compute[230518]: 2025-10-02 13:14:39.163 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:14:39 np0005466030 nova_compute[230518]: 2025-10-02 13:14:39.164 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:39 np0005466030 nova_compute[230518]: 2025-10-02 13:14:39.740 2 DEBUG oslo_concurrency.lockutils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:39 np0005466030 nova_compute[230518]: 2025-10-02 13:14:39.741 2 DEBUG oslo_concurrency.lockutils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:39 np0005466030 nova_compute[230518]: 2025-10-02 13:14:39.770 2 DEBUG nova.objects.instance [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'flavor' on Instance uuid 3dbb48be-2da9-48eb-814a-94eac9968d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:39 np0005466030 nova_compute[230518]: 2025-10-02 13:14:39.860 2 DEBUG oslo_concurrency.lockutils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.207 2 DEBUG oslo_concurrency.lockutils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.207 2 DEBUG oslo_concurrency.lockutils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.208 2 INFO nova.compute.manager [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Attaching volume 205d78a9-2344-4c93-8e1b-54d92d0b0fa2 to /dev/vdb#033[00m
Oct  2 09:14:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:40.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.468 2 DEBUG os_brick.utils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.471 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.493 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.493 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d43b14-ca14-46e7-a47c-5e785ef9c325]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.496 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.512 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.512 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[5c672bc7-d902-4e93-a8c6-be2c1313e6e8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.515 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.530 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.530 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[8eabb950-e642-4727-a73c-33a1cb130af5]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.533 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[3f6586c1-b205-41f1-bfca-1146e5829488]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.533 2 DEBUG oslo_concurrency.processutils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.589 2 DEBUG oslo_concurrency.processutils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "nvme version" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.593 2 DEBUG os_brick.initiator.connectors.lightos [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.593 2 DEBUG os_brick.initiator.connectors.lightos [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.593 2 DEBUG os_brick.initiator.connectors.lightos [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.594 2 DEBUG os_brick.utils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] <== get_connector_properties: return (125ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 09:14:40 np0005466030 nova_compute[230518]: 2025-10-02 13:14:40.595 2 DEBUG nova.virt.block_device [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Updating existing volume attachment record: 63eeceae-5e4a-4e89-b426-2b83a872ab02 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 09:14:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:40.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:41 np0005466030 nova_compute[230518]: 2025-10-02 13:14:41.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:14:41 np0005466030 nova_compute[230518]: 2025-10-02 13:14:41.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:41 np0005466030 nova_compute[230518]: 2025-10-02 13:14:41.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:14:41 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3573521355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:14:41 np0005466030 nova_compute[230518]: 2025-10-02 13:14:41.518 2 DEBUG nova.objects.instance [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'flavor' on Instance uuid 3dbb48be-2da9-48eb-814a-94eac9968d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:41 np0005466030 nova_compute[230518]: 2025-10-02 13:14:41.552 2 DEBUG nova.virt.libvirt.driver [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Attempting to attach volume 205d78a9-2344-4c93-8e1b-54d92d0b0fa2 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 09:14:41 np0005466030 nova_compute[230518]: 2025-10-02 13:14:41.557 2 DEBUG nova.virt.libvirt.guest [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 09:14:41 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:14:41 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-205d78a9-2344-4c93-8e1b-54d92d0b0fa2">
Oct  2 09:14:41 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:14:41 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:14:41 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:14:41 np0005466030 nova_compute[230518]:  </source>
Oct  2 09:14:41 np0005466030 nova_compute[230518]:  <auth username="openstack">
Oct  2 09:14:41 np0005466030 nova_compute[230518]:    <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:14:41 np0005466030 nova_compute[230518]:  </auth>
Oct  2 09:14:41 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:14:41 np0005466030 nova_compute[230518]:  <serial>205d78a9-2344-4c93-8e1b-54d92d0b0fa2</serial>
Oct  2 09:14:41 np0005466030 nova_compute[230518]: </disk>
Oct  2 09:14:41 np0005466030 nova_compute[230518]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  2 09:14:41 np0005466030 podman[310186]: 2025-10-02 13:14:41.841360692 +0000 UTC m=+0.081471550 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd)
Oct  2 09:14:41 np0005466030 podman[310185]: 2025-10-02 13:14:41.850472417 +0000 UTC m=+0.094017753 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:14:41 np0005466030 nova_compute[230518]: 2025-10-02 13:14:41.962 2 DEBUG nova.virt.libvirt.driver [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:14:41 np0005466030 nova_compute[230518]: 2025-10-02 13:14:41.963 2 DEBUG nova.virt.libvirt.driver [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:14:41 np0005466030 nova_compute[230518]: 2025-10-02 13:14:41.964 2 DEBUG nova.virt.libvirt.driver [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:14:41 np0005466030 nova_compute[230518]: 2025-10-02 13:14:41.964 2 DEBUG nova.virt.libvirt.driver [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No VIF found with MAC fa:16:3e:38:cb:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:14:42 np0005466030 nova_compute[230518]: 2025-10-02 13:14:42.273 2 DEBUG oslo_concurrency.lockutils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:14:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:42.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:14:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:14:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:42.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:14:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:44.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:44.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:46 np0005466030 nova_compute[230518]: 2025-10-02 13:14:46.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:46 np0005466030 nova_compute[230518]: 2025-10-02 13:14:46.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:46.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:14:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:46.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:14:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:48.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:14:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:48.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:14:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:50.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:50 np0005466030 nova_compute[230518]: 2025-10-02 13:14:50.567 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "66f3a080-c034-4465-9d17-ee4b4afe4592" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:50 np0005466030 nova_compute[230518]: 2025-10-02 13:14:50.567 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:50 np0005466030 nova_compute[230518]: 2025-10-02 13:14:50.589 2 DEBUG nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:14:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:50.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:50 np0005466030 nova_compute[230518]: 2025-10-02 13:14:50.666 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:50 np0005466030 nova_compute[230518]: 2025-10-02 13:14:50.667 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:50 np0005466030 nova_compute[230518]: 2025-10-02 13:14:50.677 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:14:50 np0005466030 nova_compute[230518]: 2025-10-02 13:14:50.677 2 INFO nova.compute.claims [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:14:50 np0005466030 nova_compute[230518]: 2025-10-02 13:14:50.884 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:51 np0005466030 nova_compute[230518]: 2025-10-02 13:14:51.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:51 np0005466030 nova_compute[230518]: 2025-10-02 13:14:51.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:14:51 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1591108121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:14:51 np0005466030 nova_compute[230518]: 2025-10-02 13:14:51.494 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:51 np0005466030 nova_compute[230518]: 2025-10-02 13:14:51.501 2 DEBUG nova.compute.provider_tree [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:14:51 np0005466030 nova_compute[230518]: 2025-10-02 13:14:51.519 2 DEBUG nova.scheduler.client.report [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:14:51 np0005466030 nova_compute[230518]: 2025-10-02 13:14:51.555 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:51 np0005466030 nova_compute[230518]: 2025-10-02 13:14:51.555 2 DEBUG nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:14:51 np0005466030 nova_compute[230518]: 2025-10-02 13:14:51.670 2 DEBUG nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:14:51 np0005466030 nova_compute[230518]: 2025-10-02 13:14:51.670 2 DEBUG nova.network.neutron [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:14:51 np0005466030 nova_compute[230518]: 2025-10-02 13:14:51.688 2 INFO nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:14:51 np0005466030 nova_compute[230518]: 2025-10-02 13:14:51.710 2 DEBUG nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:14:51 np0005466030 nova_compute[230518]: 2025-10-02 13:14:51.851 2 DEBUG nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:14:51 np0005466030 nova_compute[230518]: 2025-10-02 13:14:51.852 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:14:51 np0005466030 nova_compute[230518]: 2025-10-02 13:14:51.853 2 INFO nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Creating image(s)#033[00m
Oct  2 09:14:51 np0005466030 nova_compute[230518]: 2025-10-02 13:14:51.878 2 DEBUG nova.storage.rbd_utils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 66f3a080-c034-4465-9d17-ee4b4afe4592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:14:51 np0005466030 nova_compute[230518]: 2025-10-02 13:14:51.905 2 DEBUG nova.storage.rbd_utils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 66f3a080-c034-4465-9d17-ee4b4afe4592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:14:51 np0005466030 nova_compute[230518]: 2025-10-02 13:14:51.939 2 DEBUG nova.storage.rbd_utils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 66f3a080-c034-4465-9d17-ee4b4afe4592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:14:51 np0005466030 nova_compute[230518]: 2025-10-02 13:14:51.943 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:51 np0005466030 nova_compute[230518]: 2025-10-02 13:14:51.981 2 DEBUG nova.policy [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '37083e5fd56c447cb409b86d6394dd43', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7f5376733aec4630998da8d11db76561', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:14:52 np0005466030 nova_compute[230518]: 2025-10-02 13:14:52.023 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:52 np0005466030 nova_compute[230518]: 2025-10-02 13:14:52.024 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:52 np0005466030 nova_compute[230518]: 2025-10-02 13:14:52.025 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:52 np0005466030 nova_compute[230518]: 2025-10-02 13:14:52.025 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:52 np0005466030 nova_compute[230518]: 2025-10-02 13:14:52.062 2 DEBUG nova.storage.rbd_utils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 66f3a080-c034-4465-9d17-ee4b4afe4592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:14:52 np0005466030 nova_compute[230518]: 2025-10-02 13:14:52.070 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 66f3a080-c034-4465-9d17-ee4b4afe4592_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:52.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:52 np0005466030 nova_compute[230518]: 2025-10-02 13:14:52.414 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 66f3a080-c034-4465-9d17-ee4b4afe4592_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:52 np0005466030 nova_compute[230518]: 2025-10-02 13:14:52.531 2 DEBUG nova.storage.rbd_utils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] resizing rbd image 66f3a080-c034-4465-9d17-ee4b4afe4592_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:14:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:14:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:52.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:14:52 np0005466030 nova_compute[230518]: 2025-10-02 13:14:52.687 2 DEBUG nova.objects.instance [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'migration_context' on Instance uuid 66f3a080-c034-4465-9d17-ee4b4afe4592 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:52 np0005466030 nova_compute[230518]: 2025-10-02 13:14:52.708 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:14:52 np0005466030 nova_compute[230518]: 2025-10-02 13:14:52.708 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Ensure instance console log exists: /var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:14:52 np0005466030 nova_compute[230518]: 2025-10-02 13:14:52.709 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:52 np0005466030 nova_compute[230518]: 2025-10-02 13:14:52.710 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:52 np0005466030 nova_compute[230518]: 2025-10-02 13:14:52.710 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:52 np0005466030 nova_compute[230518]: 2025-10-02 13:14:52.829 2 DEBUG nova.network.neutron [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Successfully created port: 6cd7cac1-06dc-4d61-8f7e-254639151526 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:14:53 np0005466030 nova_compute[230518]: 2025-10-02 13:14:53.834 2 DEBUG nova.network.neutron [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Successfully updated port: 6cd7cac1-06dc-4d61-8f7e-254639151526 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:14:53 np0005466030 nova_compute[230518]: 2025-10-02 13:14:53.871 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "refresh_cache-66f3a080-c034-4465-9d17-ee4b4afe4592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:14:53 np0005466030 nova_compute[230518]: 2025-10-02 13:14:53.871 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquired lock "refresh_cache-66f3a080-c034-4465-9d17-ee4b4afe4592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:14:53 np0005466030 nova_compute[230518]: 2025-10-02 13:14:53.872 2 DEBUG nova.network.neutron [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:14:53 np0005466030 nova_compute[230518]: 2025-10-02 13:14:53.938 2 DEBUG nova.compute.manager [req-2f953b71-5868-48cc-a72f-0b3fba2b77c3 req-98dafe4e-da27-4d51-9ad3-aecab6d5e5fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received event network-changed-6cd7cac1-06dc-4d61-8f7e-254639151526 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:14:53 np0005466030 nova_compute[230518]: 2025-10-02 13:14:53.939 2 DEBUG nova.compute.manager [req-2f953b71-5868-48cc-a72f-0b3fba2b77c3 req-98dafe4e-da27-4d51-9ad3-aecab6d5e5fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Refreshing instance network info cache due to event network-changed-6cd7cac1-06dc-4d61-8f7e-254639151526. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:14:53 np0005466030 nova_compute[230518]: 2025-10-02 13:14:53.939 2 DEBUG oslo_concurrency.lockutils [req-2f953b71-5868-48cc-a72f-0b3fba2b77c3 req-98dafe4e-da27-4d51-9ad3-aecab6d5e5fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-66f3a080-c034-4465-9d17-ee4b4afe4592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:14:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:54 np0005466030 nova_compute[230518]: 2025-10-02 13:14:54.114 2 DEBUG nova.network.neutron [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:14:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:14:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:54.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:14:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:54.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.062 2 DEBUG nova.network.neutron [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Updating instance_info_cache with network_info: [{"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.088 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Releasing lock "refresh_cache-66f3a080-c034-4465-9d17-ee4b4afe4592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.089 2 DEBUG nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Instance network_info: |[{"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.090 2 DEBUG oslo_concurrency.lockutils [req-2f953b71-5868-48cc-a72f-0b3fba2b77c3 req-98dafe4e-da27-4d51-9ad3-aecab6d5e5fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-66f3a080-c034-4465-9d17-ee4b4afe4592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.091 2 DEBUG nova.network.neutron [req-2f953b71-5868-48cc-a72f-0b3fba2b77c3 req-98dafe4e-da27-4d51-9ad3-aecab6d5e5fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Refreshing network info cache for port 6cd7cac1-06dc-4d61-8f7e-254639151526 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.097 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Start _get_guest_xml network_info=[{"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.105 2 WARNING nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.113 2 DEBUG nova.virt.libvirt.host [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.114 2 DEBUG nova.virt.libvirt.host [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.133 2 DEBUG nova.virt.libvirt.host [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.134 2 DEBUG nova.virt.libvirt.host [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.137 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.138 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.138 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.139 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.139 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.139 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.140 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.140 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.140 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.141 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.141 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.142 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.146 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e388 e388: 3 total, 3 up, 3 in
Oct  2 09:14:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:14:55 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4017726753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.660 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.689 2 DEBUG nova.storage.rbd_utils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 66f3a080-c034-4465-9d17-ee4b4afe4592_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:14:55 np0005466030 nova_compute[230518]: 2025-10-02 13:14:55.693 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:14:56 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2928502249' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:14:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct  2 09:14:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:56.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.420 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.727s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.422 2 DEBUG nova.virt.libvirt.vif [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:14:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-767527456',display_name='tempest-AttachVolumeNegativeTest-server-767527456',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-767527456',id=210,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5VaY77OAaLVdyQYKCuQXqh8pYOP/3dwBveO9XzioRmwadp/WDR+EGEZOgEjr24GEhY0irDpHujXdAm06z7JsMiv1FBIuW6/qTrYjhQtIvMDaDJJ7Ig/IjNGFQrGpEpKA==',key_name='tempest-keypair-1210391514',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f5376733aec4630998da8d11db76561',ramdisk_id='',reservation_id='r-lrtxjlpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1084646737',owner_user_name='tempest-AttachVolumeNegativeTest-1084646737-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:14:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37083e5fd56c447cb409b86d6394dd43',uuid=66f3a080-c034-4465-9d17-ee4b4afe4592,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.424 2 DEBUG nova.network.os_vif_util [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converting VIF {"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.425 2 DEBUG nova.network.os_vif_util [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:31:6a,bridge_name='br-int',has_traffic_filtering=True,id=6cd7cac1-06dc-4d61-8f7e-254639151526,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd7cac1-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.427 2 DEBUG nova.objects.instance [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'pci_devices' on Instance uuid 66f3a080-c034-4465-9d17-ee4b4afe4592 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.446 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:14:56 np0005466030 nova_compute[230518]:  <uuid>66f3a080-c034-4465-9d17-ee4b4afe4592</uuid>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:  <name>instance-000000d2</name>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <nova:name>tempest-AttachVolumeNegativeTest-server-767527456</nova:name>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:14:55</nova:creationTime>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:14:56 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:        <nova:user uuid="37083e5fd56c447cb409b86d6394dd43">tempest-AttachVolumeNegativeTest-1084646737-project-member</nova:user>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:        <nova:project uuid="7f5376733aec4630998da8d11db76561">tempest-AttachVolumeNegativeTest-1084646737</nova:project>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:        <nova:port uuid="6cd7cac1-06dc-4d61-8f7e-254639151526">
Oct  2 09:14:56 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <entry name="serial">66f3a080-c034-4465-9d17-ee4b4afe4592</entry>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <entry name="uuid">66f3a080-c034-4465-9d17-ee4b4afe4592</entry>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/66f3a080-c034-4465-9d17-ee4b4afe4592_disk">
Oct  2 09:14:56 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:14:56 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/66f3a080-c034-4465-9d17-ee4b4afe4592_disk.config">
Oct  2 09:14:56 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:14:56 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:b5:31:6a"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <target dev="tap6cd7cac1-06"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592/console.log" append="off"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:14:56 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:14:56 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:14:56 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:14:56 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.447 2 DEBUG nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Preparing to wait for external event network-vif-plugged-6cd7cac1-06dc-4d61-8f7e-254639151526 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.448 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.448 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.448 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.449 2 DEBUG nova.virt.libvirt.vif [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:14:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-767527456',display_name='tempest-AttachVolumeNegativeTest-server-767527456',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-767527456',id=210,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5VaY77OAaLVdyQYKCuQXqh8pYOP/3dwBveO9XzioRmwadp/WDR+EGEZOgEjr24GEhY0irDpHujXdAm06z7JsMiv1FBIuW6/qTrYjhQtIvMDaDJJ7Ig/IjNGFQrGpEpKA==',key_name='tempest-keypair-1210391514',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f5376733aec4630998da8d11db76561',ramdisk_id='',reservation_id='r-lrtxjlpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1084646737',owner_user_name='tempest-AttachVolumeNegativeTest-1084646737-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:14:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37083e5fd56c447cb409b86d6394dd43',uuid=66f3a080-c034-4465-9d17-ee4b4afe4592,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.449 2 DEBUG nova.network.os_vif_util [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converting VIF {"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.450 2 DEBUG nova.network.os_vif_util [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:31:6a,bridge_name='br-int',has_traffic_filtering=True,id=6cd7cac1-06dc-4d61-8f7e-254639151526,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd7cac1-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.451 2 DEBUG os_vif [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:31:6a,bridge_name='br-int',has_traffic_filtering=True,id=6cd7cac1-06dc-4d61-8f7e-254639151526,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd7cac1-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.452 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.452 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.455 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6cd7cac1-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.456 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6cd7cac1-06, col_values=(('external_ids', {'iface-id': '6cd7cac1-06dc-4d61-8f7e-254639151526', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:31:6a', 'vm-uuid': '66f3a080-c034-4465-9d17-ee4b4afe4592'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:56 np0005466030 NetworkManager[44960]: <info>  [1759410896.4600] manager: (tap6cd7cac1-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.468 2 INFO os_vif [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:31:6a,bridge_name='br-int',has_traffic_filtering=True,id=6cd7cac1-06dc-4d61-8f7e-254639151526,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd7cac1-06')#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.551 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.552 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.552 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No VIF found with MAC fa:16:3e:b5:31:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.553 2 INFO nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Using config drive#033[00m
Oct  2 09:14:56 np0005466030 nova_compute[230518]: 2025-10-02 13:14:56.583 2 DEBUG nova.storage.rbd_utils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 66f3a080-c034-4465-9d17-ee4b4afe4592_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:14:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:56.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:57 np0005466030 nova_compute[230518]: 2025-10-02 13:14:57.260 2 INFO nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Creating config drive at /var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592/disk.config#033[00m
Oct  2 09:14:57 np0005466030 nova_compute[230518]: 2025-10-02 13:14:57.268 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq_r9k54y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:57 np0005466030 nova_compute[230518]: 2025-10-02 13:14:57.436 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq_r9k54y" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:57 np0005466030 nova_compute[230518]: 2025-10-02 13:14:57.495 2 DEBUG nova.storage.rbd_utils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 66f3a080-c034-4465-9d17-ee4b4afe4592_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:14:57 np0005466030 nova_compute[230518]: 2025-10-02 13:14:57.499 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592/disk.config 66f3a080-c034-4465-9d17-ee4b4afe4592_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:14:57 np0005466030 nova_compute[230518]: 2025-10-02 13:14:57.748 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592/disk.config 66f3a080-c034-4465-9d17-ee4b4afe4592_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:14:57 np0005466030 nova_compute[230518]: 2025-10-02 13:14:57.750 2 INFO nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Deleting local config drive /var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592/disk.config because it was imported into RBD.#033[00m
Oct  2 09:14:57 np0005466030 kernel: tap6cd7cac1-06: entered promiscuous mode
Oct  2 09:14:57 np0005466030 NetworkManager[44960]: <info>  [1759410897.8379] manager: (tap6cd7cac1-06): new Tun device (/org/freedesktop/NetworkManager/Devices/395)
Oct  2 09:14:57 np0005466030 ovn_controller[129257]: 2025-10-02T13:14:57Z|00852|binding|INFO|Claiming lport 6cd7cac1-06dc-4d61-8f7e-254639151526 for this chassis.
Oct  2 09:14:57 np0005466030 ovn_controller[129257]: 2025-10-02T13:14:57Z|00853|binding|INFO|6cd7cac1-06dc-4d61-8f7e-254639151526: Claiming fa:16:3e:b5:31:6a 10.100.0.10
Oct  2 09:14:57 np0005466030 nova_compute[230518]: 2025-10-02 13:14:57.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:57.853 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:31:6a 10.100.0.10'], port_security=['fa:16:3e:b5:31:6a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '66f3a080-c034-4465-9d17-ee4b4afe4592', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f5376733aec4630998da8d11db76561', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3ce33b16-0c5a-4529-af8d-ea1438ef3f9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9548860-2222-48ea-9270-42ff9a0246f7, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6cd7cac1-06dc-4d61-8f7e-254639151526) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:14:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:57.855 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6cd7cac1-06dc-4d61-8f7e-254639151526 in datapath bc02aa54-d19f-4274-8d92-cbabe7917dd9 bound to our chassis#033[00m
Oct  2 09:14:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:57.857 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bc02aa54-d19f-4274-8d92-cbabe7917dd9#033[00m
Oct  2 09:14:57 np0005466030 ovn_controller[129257]: 2025-10-02T13:14:57Z|00854|binding|INFO|Setting lport 6cd7cac1-06dc-4d61-8f7e-254639151526 ovn-installed in OVS
Oct  2 09:14:57 np0005466030 ovn_controller[129257]: 2025-10-02T13:14:57Z|00855|binding|INFO|Setting lport 6cd7cac1-06dc-4d61-8f7e-254639151526 up in Southbound
Oct  2 09:14:57 np0005466030 nova_compute[230518]: 2025-10-02 13:14:57.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:57 np0005466030 nova_compute[230518]: 2025-10-02 13:14:57.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:57.872 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a0638c-1114-4bc1-8db4-d19b0e661b47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:57 np0005466030 systemd-udevd[310550]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:14:57 np0005466030 systemd-machined[188247]: New machine qemu-97-instance-000000d2.
Oct  2 09:14:57 np0005466030 NetworkManager[44960]: <info>  [1759410897.9002] device (tap6cd7cac1-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:14:57 np0005466030 NetworkManager[44960]: <info>  [1759410897.9014] device (tap6cd7cac1-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:14:57 np0005466030 systemd[1]: Started Virtual Machine qemu-97-instance-000000d2.
Oct  2 09:14:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:57.914 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4e367f54-eaa4-48a0-be7e-403bd9b595f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:57.917 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[be897465-d336-4ed9-b751-ccdf5aa9e423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:57.954 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae0a565-8403-41c4-ac0b-c6287e46090a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:57 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:57.982 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[faebce6c-97b1-4119-8ea5-f6ab7ea5bfe6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc02aa54-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:fc:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870587, 'reachable_time': 32949, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310561, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:57.999 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f2681fae-fcc7-44c7-8edf-161c77d8f246]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbc02aa54-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 870603, 'tstamp': 870603}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310562, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbc02aa54-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 870607, 'tstamp': 870607}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310562, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:14:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:58.001 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc02aa54-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:58 np0005466030 nova_compute[230518]: 2025-10-02 13:14:58.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:58 np0005466030 nova_compute[230518]: 2025-10-02 13:14:58.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:14:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:58.030 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc02aa54-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:58.031 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:14:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:58.031 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbc02aa54-d0, col_values=(('external_ids', {'iface-id': '05da4d4e-44a6-4aa1-b470-d9ad03ff2e45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:14:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:14:58.031 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:14:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:58.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:58 np0005466030 nova_compute[230518]: 2025-10-02 13:14:58.478 2 DEBUG nova.network.neutron [req-2f953b71-5868-48cc-a72f-0b3fba2b77c3 req-98dafe4e-da27-4d51-9ad3-aecab6d5e5fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Updated VIF entry in instance network info cache for port 6cd7cac1-06dc-4d61-8f7e-254639151526. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:14:58 np0005466030 nova_compute[230518]: 2025-10-02 13:14:58.479 2 DEBUG nova.network.neutron [req-2f953b71-5868-48cc-a72f-0b3fba2b77c3 req-98dafe4e-da27-4d51-9ad3-aecab6d5e5fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Updating instance_info_cache with network_info: [{"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:14:58 np0005466030 nova_compute[230518]: 2025-10-02 13:14:58.507 2 DEBUG oslo_concurrency.lockutils [req-2f953b71-5868-48cc-a72f-0b3fba2b77c3 req-98dafe4e-da27-4d51-9ad3-aecab6d5e5fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-66f3a080-c034-4465-9d17-ee4b4afe4592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:14:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:14:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:14:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:58.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:14:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e389 e389: 3 total, 3 up, 3 in
Oct  2 09:14:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.156 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410899.1556535, 66f3a080-c034-4465-9d17-ee4b4afe4592 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.157 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] VM Started (Lifecycle Event)#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.196 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.203 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410899.1601574, 66f3a080-c034-4465-9d17-ee4b4afe4592 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.203 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.233 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.240 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.272 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.448 2 DEBUG nova.compute.manager [req-22f2f1a2-a4a0-415f-badb-5998ebf59a9b req-bcbb7f46-f809-421c-9f53-82378f33cf07 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received event network-vif-plugged-6cd7cac1-06dc-4d61-8f7e-254639151526 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.448 2 DEBUG oslo_concurrency.lockutils [req-22f2f1a2-a4a0-415f-badb-5998ebf59a9b req-bcbb7f46-f809-421c-9f53-82378f33cf07 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.449 2 DEBUG oslo_concurrency.lockutils [req-22f2f1a2-a4a0-415f-badb-5998ebf59a9b req-bcbb7f46-f809-421c-9f53-82378f33cf07 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.449 2 DEBUG oslo_concurrency.lockutils [req-22f2f1a2-a4a0-415f-badb-5998ebf59a9b req-bcbb7f46-f809-421c-9f53-82378f33cf07 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.450 2 DEBUG nova.compute.manager [req-22f2f1a2-a4a0-415f-badb-5998ebf59a9b req-bcbb7f46-f809-421c-9f53-82378f33cf07 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Processing event network-vif-plugged-6cd7cac1-06dc-4d61-8f7e-254639151526 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.451 2 DEBUG nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.454 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410899.4545662, 66f3a080-c034-4465-9d17-ee4b4afe4592 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.455 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.458 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.463 2 INFO nova.virt.libvirt.driver [-] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Instance spawned successfully.#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.463 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.561 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.568 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.568 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.569 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.569 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.570 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.570 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.576 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.626 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.681 2 INFO nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Took 7.83 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.682 2 DEBUG nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.749 2 INFO nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Took 9.11 seconds to build instance.#033[00m
Oct  2 09:14:59 np0005466030 nova_compute[230518]: 2025-10-02 13:14:59.768 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:00.405 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:15:00 np0005466030 nova_compute[230518]: 2025-10-02 13:15:00.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:00.408 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:15:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:15:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:00.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:15:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:00.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:01 np0005466030 nova_compute[230518]: 2025-10-02 13:15:01.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:01 np0005466030 nova_compute[230518]: 2025-10-02 13:15:01.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e390 e390: 3 total, 3 up, 3 in
Oct  2 09:15:01 np0005466030 nova_compute[230518]: 2025-10-02 13:15:01.575 2 DEBUG nova.compute.manager [req-d71df5c9-956f-40c7-b06d-bd3d269f7df1 req-9eddc718-ffe9-4d91-ae42-a0bd19a3dccd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received event network-vif-plugged-6cd7cac1-06dc-4d61-8f7e-254639151526 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:15:01 np0005466030 nova_compute[230518]: 2025-10-02 13:15:01.576 2 DEBUG oslo_concurrency.lockutils [req-d71df5c9-956f-40c7-b06d-bd3d269f7df1 req-9eddc718-ffe9-4d91-ae42-a0bd19a3dccd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:01 np0005466030 nova_compute[230518]: 2025-10-02 13:15:01.576 2 DEBUG oslo_concurrency.lockutils [req-d71df5c9-956f-40c7-b06d-bd3d269f7df1 req-9eddc718-ffe9-4d91-ae42-a0bd19a3dccd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:01 np0005466030 nova_compute[230518]: 2025-10-02 13:15:01.576 2 DEBUG oslo_concurrency.lockutils [req-d71df5c9-956f-40c7-b06d-bd3d269f7df1 req-9eddc718-ffe9-4d91-ae42-a0bd19a3dccd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:01 np0005466030 nova_compute[230518]: 2025-10-02 13:15:01.577 2 DEBUG nova.compute.manager [req-d71df5c9-956f-40c7-b06d-bd3d269f7df1 req-9eddc718-ffe9-4d91-ae42-a0bd19a3dccd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] No waiting events found dispatching network-vif-plugged-6cd7cac1-06dc-4d61-8f7e-254639151526 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:15:01 np0005466030 nova_compute[230518]: 2025-10-02 13:15:01.577 2 WARNING nova.compute.manager [req-d71df5c9-956f-40c7-b06d-bd3d269f7df1 req-9eddc718-ffe9-4d91-ae42-a0bd19a3dccd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received unexpected event network-vif-plugged-6cd7cac1-06dc-4d61-8f7e-254639151526 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:15:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:15:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:02.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:15:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:02.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:03 np0005466030 nova_compute[230518]: 2025-10-02 13:15:03.757 2 DEBUG nova.compute.manager [req-8839ee43-cd1d-4a33-9699-7d723dccef45 req-5004d0a1-9e50-4b1d-8d93-c84713744a19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received event network-changed-6cd7cac1-06dc-4d61-8f7e-254639151526 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:15:03 np0005466030 nova_compute[230518]: 2025-10-02 13:15:03.757 2 DEBUG nova.compute.manager [req-8839ee43-cd1d-4a33-9699-7d723dccef45 req-5004d0a1-9e50-4b1d-8d93-c84713744a19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Refreshing instance network info cache due to event network-changed-6cd7cac1-06dc-4d61-8f7e-254639151526. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:15:03 np0005466030 nova_compute[230518]: 2025-10-02 13:15:03.758 2 DEBUG oslo_concurrency.lockutils [req-8839ee43-cd1d-4a33-9699-7d723dccef45 req-5004d0a1-9e50-4b1d-8d93-c84713744a19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-66f3a080-c034-4465-9d17-ee4b4afe4592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:15:03 np0005466030 nova_compute[230518]: 2025-10-02 13:15:03.758 2 DEBUG oslo_concurrency.lockutils [req-8839ee43-cd1d-4a33-9699-7d723dccef45 req-5004d0a1-9e50-4b1d-8d93-c84713744a19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-66f3a080-c034-4465-9d17-ee4b4afe4592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:15:03 np0005466030 nova_compute[230518]: 2025-10-02 13:15:03.759 2 DEBUG nova.network.neutron [req-8839ee43-cd1d-4a33-9699-7d723dccef45 req-5004d0a1-9e50-4b1d-8d93-c84713744a19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Refreshing network info cache for port 6cd7cac1-06dc-4d61-8f7e-254639151526 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:15:03 np0005466030 podman[310607]: 2025-10-02 13:15:03.830758614 +0000 UTC m=+0.062159191 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 09:15:03 np0005466030 podman[310606]: 2025-10-02 13:15:03.880915877 +0000 UTC m=+0.121535883 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 09:15:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:04.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:15:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:04.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:15:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e391 e391: 3 total, 3 up, 3 in
Oct  2 09:15:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:15:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3838565508' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:15:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:15:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3838565508' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:15:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:05.412 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:05 np0005466030 nova_compute[230518]: 2025-10-02 13:15:05.669 2 DEBUG nova.network.neutron [req-8839ee43-cd1d-4a33-9699-7d723dccef45 req-5004d0a1-9e50-4b1d-8d93-c84713744a19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Updated VIF entry in instance network info cache for port 6cd7cac1-06dc-4d61-8f7e-254639151526. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:15:05 np0005466030 nova_compute[230518]: 2025-10-02 13:15:05.669 2 DEBUG nova.network.neutron [req-8839ee43-cd1d-4a33-9699-7d723dccef45 req-5004d0a1-9e50-4b1d-8d93-c84713744a19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Updating instance_info_cache with network_info: [{"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:15:05 np0005466030 nova_compute[230518]: 2025-10-02 13:15:05.702 2 DEBUG oslo_concurrency.lockutils [req-8839ee43-cd1d-4a33-9699-7d723dccef45 req-5004d0a1-9e50-4b1d-8d93-c84713744a19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-66f3a080-c034-4465-9d17-ee4b4afe4592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:15:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e392 e392: 3 total, 3 up, 3 in
Oct  2 09:15:06 np0005466030 nova_compute[230518]: 2025-10-02 13:15:06.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e393 e393: 3 total, 3 up, 3 in
Oct  2 09:15:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:06.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:06 np0005466030 nova_compute[230518]: 2025-10-02 13:15:06.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:06.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:15:06 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3166744725' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:15:06 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:15:06 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3166744725' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:15:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:15:07 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/271385777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:15:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:15:07 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/271385777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:15:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:08.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:08.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:10.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:10.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:10 np0005466030 ovn_controller[129257]: 2025-10-02T13:15:10Z|00856|binding|INFO|Releasing lport 05da4d4e-44a6-4aa1-b470-d9ad03ff2e45 from this chassis (sb_readonly=0)
Oct  2 09:15:10 np0005466030 nova_compute[230518]: 2025-10-02 13:15:10.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e394 e394: 3 total, 3 up, 3 in
Oct  2 09:15:11 np0005466030 nova_compute[230518]: 2025-10-02 13:15:11.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:15:11 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2540394779' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:15:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:15:11 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2540394779' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:15:11 np0005466030 nova_compute[230518]: 2025-10-02 13:15:11.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:12 np0005466030 ovn_controller[129257]: 2025-10-02T13:15:12Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b5:31:6a 10.100.0.10
Oct  2 09:15:12 np0005466030 ovn_controller[129257]: 2025-10-02T13:15:12Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:31:6a 10.100.0.10
Oct  2 09:15:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:12.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:12.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:12 np0005466030 podman[310651]: 2025-10-02 13:15:12.816777747 +0000 UTC m=+0.059289070 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct  2 09:15:12 np0005466030 podman[310650]: 2025-10-02 13:15:12.819400379 +0000 UTC m=+0.065884078 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001)
Oct  2 09:15:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:15:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:14.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:15:14 np0005466030 ovn_controller[129257]: 2025-10-02T13:15:14Z|00857|binding|INFO|Releasing lport 05da4d4e-44a6-4aa1-b470-d9ad03ff2e45 from this chassis (sb_readonly=0)
Oct  2 09:15:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:14.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:14 np0005466030 nova_compute[230518]: 2025-10-02 13:15:14.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:14 np0005466030 ovn_controller[129257]: 2025-10-02T13:15:14Z|00858|binding|INFO|Releasing lport 05da4d4e-44a6-4aa1-b470-d9ad03ff2e45 from this chassis (sb_readonly=0)
Oct  2 09:15:15 np0005466030 nova_compute[230518]: 2025-10-02 13:15:15.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:16 np0005466030 nova_compute[230518]: 2025-10-02 13:15:16.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 e395: 3 total, 3 up, 3 in
Oct  2 09:15:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:16.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:16 np0005466030 nova_compute[230518]: 2025-10-02 13:15:16.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:16.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:18.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:18.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:18 np0005466030 nova_compute[230518]: 2025-10-02 13:15:18.868 2 DEBUG oslo_concurrency.lockutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "66f3a080-c034-4465-9d17-ee4b4afe4592" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:18 np0005466030 nova_compute[230518]: 2025-10-02 13:15:18.869 2 DEBUG oslo_concurrency.lockutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:18 np0005466030 nova_compute[230518]: 2025-10-02 13:15:18.869 2 DEBUG oslo_concurrency.lockutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:18 np0005466030 nova_compute[230518]: 2025-10-02 13:15:18.870 2 DEBUG oslo_concurrency.lockutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:18 np0005466030 nova_compute[230518]: 2025-10-02 13:15:18.870 2 DEBUG oslo_concurrency.lockutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:18 np0005466030 nova_compute[230518]: 2025-10-02 13:15:18.871 2 INFO nova.compute.manager [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Terminating instance#033[00m
Oct  2 09:15:18 np0005466030 nova_compute[230518]: 2025-10-02 13:15:18.872 2 DEBUG nova.compute.manager [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:15:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:19 np0005466030 ovn_controller[129257]: 2025-10-02T13:15:19Z|00859|binding|INFO|Releasing lport 05da4d4e-44a6-4aa1-b470-d9ad03ff2e45 from this chassis (sb_readonly=0)
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:19 np0005466030 kernel: tap6cd7cac1-06 (unregistering): left promiscuous mode
Oct  2 09:15:19 np0005466030 NetworkManager[44960]: <info>  [1759410919.6168] device (tap6cd7cac1-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:15:19 np0005466030 ovn_controller[129257]: 2025-10-02T13:15:19Z|00860|binding|INFO|Releasing lport 6cd7cac1-06dc-4d61-8f7e-254639151526 from this chassis (sb_readonly=0)
Oct  2 09:15:19 np0005466030 ovn_controller[129257]: 2025-10-02T13:15:19Z|00861|binding|INFO|Setting lport 6cd7cac1-06dc-4d61-8f7e-254639151526 down in Southbound
Oct  2 09:15:19 np0005466030 ovn_controller[129257]: 2025-10-02T13:15:19Z|00862|binding|INFO|Removing iface tap6cd7cac1-06 ovn-installed in OVS
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.646 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:31:6a 10.100.0.10'], port_security=['fa:16:3e:b5:31:6a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '66f3a080-c034-4465-9d17-ee4b4afe4592', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f5376733aec4630998da8d11db76561', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3ce33b16-0c5a-4529-af8d-ea1438ef3f9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9548860-2222-48ea-9270-42ff9a0246f7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6cd7cac1-06dc-4d61-8f7e-254639151526) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:15:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.648 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6cd7cac1-06dc-4d61-8f7e-254639151526 in datapath bc02aa54-d19f-4274-8d92-cbabe7917dd9 unbound from our chassis#033[00m
Oct  2 09:15:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.649 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bc02aa54-d19f-4274-8d92-cbabe7917dd9#033[00m
Oct  2 09:15:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.670 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c10843f7-b31a-4cd1-8ebe-fcb6fca4e507]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:19 np0005466030 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000d2.scope: Deactivated successfully.
Oct  2 09:15:19 np0005466030 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000d2.scope: Consumed 14.112s CPU time.
Oct  2 09:15:19 np0005466030 systemd-machined[188247]: Machine qemu-97-instance-000000d2 terminated.
Oct  2 09:15:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.699 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[903556f9-9138-4653-ac05-071acf607ee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.702 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[153d303d-c074-41e8-8ff5-4586ae76149d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.732 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d37793-7bf7-49c7-8a1e-2c3e21a632f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.748 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[46f4e613-ed6b-465f-90f6-78faf06145d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc02aa54-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:fc:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870587, 'reachable_time': 32949, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310700, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.765 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee2976c-a201-49e4-9fa8-55f2aaaa8e1f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbc02aa54-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 870603, 'tstamp': 870603}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310701, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbc02aa54-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 870607, 'tstamp': 870607}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310701, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.767 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc02aa54-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.774 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc02aa54-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.774 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:15:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.775 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbc02aa54-d0, col_values=(('external_ids', {'iface-id': '05da4d4e-44a6-4aa1-b470-d9ad03ff2e45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.775 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.904 2 INFO nova.virt.libvirt.driver [-] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Instance destroyed successfully.#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.905 2 DEBUG nova.objects.instance [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'resources' on Instance uuid 66f3a080-c034-4465-9d17-ee4b4afe4592 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.917 2 DEBUG nova.virt.libvirt.vif [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:14:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-767527456',display_name='tempest-AttachVolumeNegativeTest-server-767527456',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-767527456',id=210,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5VaY77OAaLVdyQYKCuQXqh8pYOP/3dwBveO9XzioRmwadp/WDR+EGEZOgEjr24GEhY0irDpHujXdAm06z7JsMiv1FBIuW6/qTrYjhQtIvMDaDJJ7Ig/IjNGFQrGpEpKA==',key_name='tempest-keypair-1210391514',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:14:59Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7f5376733aec4630998da8d11db76561',ramdisk_id='',reservation_id='r-lrtxjlpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1084646737',owner_user_name='tempest-AttachVolumeNegativeTest-1084646737-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:14:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37083e5fd56c447cb409b86d6394dd43',uuid=66f3a080-c034-4465-9d17-ee4b4afe4592,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.917 2 DEBUG nova.network.os_vif_util [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converting VIF {"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.918 2 DEBUG nova.network.os_vif_util [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b5:31:6a,bridge_name='br-int',has_traffic_filtering=True,id=6cd7cac1-06dc-4d61-8f7e-254639151526,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd7cac1-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.918 2 DEBUG os_vif [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:31:6a,bridge_name='br-int',has_traffic_filtering=True,id=6cd7cac1-06dc-4d61-8f7e-254639151526,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd7cac1-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.921 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6cd7cac1-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.926 2 INFO os_vif [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:31:6a,bridge_name='br-int',has_traffic_filtering=True,id=6cd7cac1-06dc-4d61-8f7e-254639151526,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd7cac1-06')#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.950 2 DEBUG nova.compute.manager [req-f144d928-0c91-4f8b-9152-3509f7b11417 req-c6d2abb6-ceb6-48d5-b9f4-557d859849c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received event network-vif-unplugged-6cd7cac1-06dc-4d61-8f7e-254639151526 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.951 2 DEBUG oslo_concurrency.lockutils [req-f144d928-0c91-4f8b-9152-3509f7b11417 req-c6d2abb6-ceb6-48d5-b9f4-557d859849c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.951 2 DEBUG oslo_concurrency.lockutils [req-f144d928-0c91-4f8b-9152-3509f7b11417 req-c6d2abb6-ceb6-48d5-b9f4-557d859849c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.952 2 DEBUG oslo_concurrency.lockutils [req-f144d928-0c91-4f8b-9152-3509f7b11417 req-c6d2abb6-ceb6-48d5-b9f4-557d859849c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.952 2 DEBUG nova.compute.manager [req-f144d928-0c91-4f8b-9152-3509f7b11417 req-c6d2abb6-ceb6-48d5-b9f4-557d859849c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] No waiting events found dispatching network-vif-unplugged-6cd7cac1-06dc-4d61-8f7e-254639151526 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:15:19 np0005466030 nova_compute[230518]: 2025-10-02 13:15:19.952 2 DEBUG nova.compute.manager [req-f144d928-0c91-4f8b-9152-3509f7b11417 req-c6d2abb6-ceb6-48d5-b9f4-557d859849c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received event network-vif-unplugged-6cd7cac1-06dc-4d61-8f7e-254639151526 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:15:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:15:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:20.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:15:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:20.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:21 np0005466030 nova_compute[230518]: 2025-10-02 13:15:21.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:22 np0005466030 nova_compute[230518]: 2025-10-02 13:15:22.098 2 DEBUG nova.compute.manager [req-e487731c-140e-472e-a2ec-b081db4220ef req-909b441e-9e76-4ae7-aae6-28306a8c4f05 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received event network-vif-plugged-6cd7cac1-06dc-4d61-8f7e-254639151526 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:15:22 np0005466030 nova_compute[230518]: 2025-10-02 13:15:22.098 2 DEBUG oslo_concurrency.lockutils [req-e487731c-140e-472e-a2ec-b081db4220ef req-909b441e-9e76-4ae7-aae6-28306a8c4f05 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:22 np0005466030 nova_compute[230518]: 2025-10-02 13:15:22.099 2 DEBUG oslo_concurrency.lockutils [req-e487731c-140e-472e-a2ec-b081db4220ef req-909b441e-9e76-4ae7-aae6-28306a8c4f05 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:22 np0005466030 nova_compute[230518]: 2025-10-02 13:15:22.099 2 DEBUG oslo_concurrency.lockutils [req-e487731c-140e-472e-a2ec-b081db4220ef req-909b441e-9e76-4ae7-aae6-28306a8c4f05 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:22 np0005466030 nova_compute[230518]: 2025-10-02 13:15:22.099 2 DEBUG nova.compute.manager [req-e487731c-140e-472e-a2ec-b081db4220ef req-909b441e-9e76-4ae7-aae6-28306a8c4f05 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] No waiting events found dispatching network-vif-plugged-6cd7cac1-06dc-4d61-8f7e-254639151526 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:15:22 np0005466030 nova_compute[230518]: 2025-10-02 13:15:22.099 2 WARNING nova.compute.manager [req-e487731c-140e-472e-a2ec-b081db4220ef req-909b441e-9e76-4ae7-aae6-28306a8c4f05 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received unexpected event network-vif-plugged-6cd7cac1-06dc-4d61-8f7e-254639151526 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:15:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:22.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:22.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:23 np0005466030 nova_compute[230518]: 2025-10-02 13:15:23.985 2 INFO nova.virt.libvirt.driver [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Deleting instance files /var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592_del#033[00m
Oct  2 09:15:23 np0005466030 nova_compute[230518]: 2025-10-02 13:15:23.986 2 INFO nova.virt.libvirt.driver [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Deletion of /var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592_del complete#033[00m
Oct  2 09:15:24 np0005466030 nova_compute[230518]: 2025-10-02 13:15:24.049 2 INFO nova.compute.manager [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Took 5.18 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:15:24 np0005466030 nova_compute[230518]: 2025-10-02 13:15:24.049 2 DEBUG oslo.service.loopingcall [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:15:24 np0005466030 nova_compute[230518]: 2025-10-02 13:15:24.050 2 DEBUG nova.compute.manager [-] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:15:24 np0005466030 nova_compute[230518]: 2025-10-02 13:15:24.050 2 DEBUG nova.network.neutron [-] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:15:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:24.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:24.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:24 np0005466030 nova_compute[230518]: 2025-10-02 13:15:24.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:25 np0005466030 nova_compute[230518]: 2025-10-02 13:15:25.075 2 DEBUG nova.network.neutron [-] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:15:25 np0005466030 nova_compute[230518]: 2025-10-02 13:15:25.094 2 INFO nova.compute.manager [-] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Took 1.04 seconds to deallocate network for instance.#033[00m
Oct  2 09:15:25 np0005466030 nova_compute[230518]: 2025-10-02 13:15:25.154 2 DEBUG oslo_concurrency.lockutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:25 np0005466030 nova_compute[230518]: 2025-10-02 13:15:25.155 2 DEBUG oslo_concurrency.lockutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:25 np0005466030 nova_compute[230518]: 2025-10-02 13:15:25.182 2 DEBUG nova.compute.manager [req-caf76fde-1c8e-4b74-9313-d0ceffa8570a req-309eaf82-7583-4814-be59-71045bef6c9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received event network-vif-deleted-6cd7cac1-06dc-4d61-8f7e-254639151526 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:15:25 np0005466030 nova_compute[230518]: 2025-10-02 13:15:25.228 2 DEBUG oslo_concurrency.processutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:15:25 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/938968884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:15:25 np0005466030 nova_compute[230518]: 2025-10-02 13:15:25.689 2 DEBUG oslo_concurrency.processutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:25 np0005466030 nova_compute[230518]: 2025-10-02 13:15:25.694 2 DEBUG nova.compute.provider_tree [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:15:25 np0005466030 nova_compute[230518]: 2025-10-02 13:15:25.708 2 DEBUG nova.scheduler.client.report [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:15:25 np0005466030 nova_compute[230518]: 2025-10-02 13:15:25.729 2 DEBUG oslo_concurrency.lockutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:25 np0005466030 nova_compute[230518]: 2025-10-02 13:15:25.753 2 INFO nova.scheduler.client.report [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Deleted allocations for instance 66f3a080-c034-4465-9d17-ee4b4afe4592#033[00m
Oct  2 09:15:25 np0005466030 nova_compute[230518]: 2025-10-02 13:15:25.824 2 DEBUG oslo_concurrency.lockutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:25.970 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:25.970 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:25.971 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:26 np0005466030 nova_compute[230518]: 2025-10-02 13:15:26.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:26.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:26.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:27 np0005466030 nova_compute[230518]: 2025-10-02 13:15:27.460 2 DEBUG oslo_concurrency.lockutils [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:27 np0005466030 nova_compute[230518]: 2025-10-02 13:15:27.460 2 DEBUG oslo_concurrency.lockutils [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:27 np0005466030 nova_compute[230518]: 2025-10-02 13:15:27.473 2 INFO nova.compute.manager [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Detaching volume 205d78a9-2344-4c93-8e1b-54d92d0b0fa2#033[00m
Oct  2 09:15:27 np0005466030 nova_compute[230518]: 2025-10-02 13:15:27.650 2 INFO nova.virt.block_device [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Attempting to driver detach volume 205d78a9-2344-4c93-8e1b-54d92d0b0fa2 from mountpoint /dev/vdb#033[00m
Oct  2 09:15:27 np0005466030 nova_compute[230518]: 2025-10-02 13:15:27.661 2 DEBUG nova.virt.libvirt.driver [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Attempting to detach device vdb from instance 3dbb48be-2da9-48eb-814a-94eac9968d0f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 09:15:27 np0005466030 nova_compute[230518]: 2025-10-02 13:15:27.662 2 DEBUG nova.virt.libvirt.guest [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:15:27 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:15:27 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-205d78a9-2344-4c93-8e1b-54d92d0b0fa2">
Oct  2 09:15:27 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:15:27 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:15:27 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:15:27 np0005466030 nova_compute[230518]:  </source>
Oct  2 09:15:27 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:15:27 np0005466030 nova_compute[230518]:  <serial>205d78a9-2344-4c93-8e1b-54d92d0b0fa2</serial>
Oct  2 09:15:27 np0005466030 nova_compute[230518]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 09:15:27 np0005466030 nova_compute[230518]: </disk>
Oct  2 09:15:27 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:15:27 np0005466030 nova_compute[230518]: 2025-10-02 13:15:27.859 2 INFO nova.virt.libvirt.driver [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully detached device vdb from instance 3dbb48be-2da9-48eb-814a-94eac9968d0f from the persistent domain config.#033[00m
Oct  2 09:15:27 np0005466030 nova_compute[230518]: 2025-10-02 13:15:27.860 2 DEBUG nova.virt.libvirt.driver [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 3dbb48be-2da9-48eb-814a-94eac9968d0f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 09:15:27 np0005466030 nova_compute[230518]: 2025-10-02 13:15:27.860 2 DEBUG nova.virt.libvirt.guest [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:15:27 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:15:27 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-205d78a9-2344-4c93-8e1b-54d92d0b0fa2">
Oct  2 09:15:27 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:15:27 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:15:27 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:15:27 np0005466030 nova_compute[230518]:  </source>
Oct  2 09:15:27 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:15:27 np0005466030 nova_compute[230518]:  <serial>205d78a9-2344-4c93-8e1b-54d92d0b0fa2</serial>
Oct  2 09:15:27 np0005466030 nova_compute[230518]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 09:15:27 np0005466030 nova_compute[230518]: </disk>
Oct  2 09:15:27 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:15:27 np0005466030 nova_compute[230518]: 2025-10-02 13:15:27.967 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759410927.967262, 3dbb48be-2da9-48eb-814a-94eac9968d0f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 09:15:27 np0005466030 nova_compute[230518]: 2025-10-02 13:15:27.968 2 DEBUG nova.virt.libvirt.driver [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 3dbb48be-2da9-48eb-814a-94eac9968d0f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 09:15:27 np0005466030 nova_compute[230518]: 2025-10-02 13:15:27.970 2 INFO nova.virt.libvirt.driver [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully detached device vdb from instance 3dbb48be-2da9-48eb-814a-94eac9968d0f from the live domain config.#033[00m
Oct  2 09:15:28 np0005466030 nova_compute[230518]: 2025-10-02 13:15:28.160 2 DEBUG nova.objects.instance [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'flavor' on Instance uuid 3dbb48be-2da9-48eb-814a-94eac9968d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:15:28 np0005466030 nova_compute[230518]: 2025-10-02 13:15:28.195 2 DEBUG oslo_concurrency.lockutils [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:28.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:28.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:28 np0005466030 nova_compute[230518]: 2025-10-02 13:15:28.908 2 DEBUG oslo_concurrency.lockutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:28 np0005466030 nova_compute[230518]: 2025-10-02 13:15:28.909 2 DEBUG oslo_concurrency.lockutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:28 np0005466030 nova_compute[230518]: 2025-10-02 13:15:28.909 2 DEBUG oslo_concurrency.lockutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:28 np0005466030 nova_compute[230518]: 2025-10-02 13:15:28.910 2 DEBUG oslo_concurrency.lockutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:28 np0005466030 nova_compute[230518]: 2025-10-02 13:15:28.910 2 DEBUG oslo_concurrency.lockutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:28 np0005466030 nova_compute[230518]: 2025-10-02 13:15:28.911 2 INFO nova.compute.manager [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Terminating instance#033[00m
Oct  2 09:15:28 np0005466030 nova_compute[230518]: 2025-10-02 13:15:28.913 2 DEBUG nova.compute.manager [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.085 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.086 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.087 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.087 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.088 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:29 np0005466030 kernel: tap27cf437c-6f (unregistering): left promiscuous mode
Oct  2 09:15:29 np0005466030 NetworkManager[44960]: <info>  [1759410929.2100] device (tap27cf437c-6f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:29 np0005466030 ovn_controller[129257]: 2025-10-02T13:15:29Z|00863|binding|INFO|Releasing lport 27cf437c-6f1f-4511-8b2a-3d68dd116906 from this chassis (sb_readonly=0)
Oct  2 09:15:29 np0005466030 ovn_controller[129257]: 2025-10-02T13:15:29Z|00864|binding|INFO|Setting lport 27cf437c-6f1f-4511-8b2a-3d68dd116906 down in Southbound
Oct  2 09:15:29 np0005466030 ovn_controller[129257]: 2025-10-02T13:15:29Z|00865|binding|INFO|Removing iface tap27cf437c-6f ovn-installed in OVS
Oct  2 09:15:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:29.227 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:cb:e4 10.100.0.7'], port_security=['fa:16:3e:38:cb:e4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3dbb48be-2da9-48eb-814a-94eac9968d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f5376733aec4630998da8d11db76561', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5299c659-7804-482f-bd2a-becd049c9d51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9548860-2222-48ea-9270-42ff9a0246f7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=27cf437c-6f1f-4511-8b2a-3d68dd116906) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:15:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:29.229 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 27cf437c-6f1f-4511-8b2a-3d68dd116906 in datapath bc02aa54-d19f-4274-8d92-cbabe7917dd9 unbound from our chassis#033[00m
Oct  2 09:15:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:29.230 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bc02aa54-d19f-4274-8d92-cbabe7917dd9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:15:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:29.232 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[32ca9d42-3c55-478c-a7fa-c57c2580a7fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:29.232 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 namespace which is not needed anymore#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:29 np0005466030 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000d0.scope: Deactivated successfully.
Oct  2 09:15:29 np0005466030 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000d0.scope: Consumed 16.768s CPU time.
Oct  2 09:15:29 np0005466030 systemd-machined[188247]: Machine qemu-96-instance-000000d0 terminated.
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.351 2 INFO nova.virt.libvirt.driver [-] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Instance destroyed successfully.#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.352 2 DEBUG nova.objects.instance [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'resources' on Instance uuid 3dbb48be-2da9-48eb-814a-94eac9968d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.370 2 DEBUG nova.virt.libvirt.vif [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:13:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1054588037',display_name='tempest-AttachVolumeNegativeTest-server-1054588037',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1054588037',id=208,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQBraTImbOHfTH+zcfBRFJyePeIqcOFlGsPR6ZMRcMYMVZGuN9g/lIgLTbs1qdUo4qDQMWoBvweu9Ok7nksgXVqglFfrHDG04CgWRfT+7Tk6OyYqf+SJMw2cYyCygZmlA==',key_name='tempest-keypair-212526163',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:14:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7f5376733aec4630998da8d11db76561',ramdisk_id='',reservation_id='r-tfow0fvq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1084646737',owner_user_name='tempest-AttachVolumeNegativeTest-1084646737-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:14:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37083e5fd56c447cb409b86d6394dd43',uuid=3dbb48be-2da9-48eb-814a-94eac9968d0f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.370 2 DEBUG nova.network.os_vif_util [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converting VIF {"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.371 2 DEBUG nova.network.os_vif_util [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:38:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=27cf437c-6f1f-4511-8b2a-3d68dd116906,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27cf437c-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.372 2 DEBUG os_vif [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=27cf437c-6f1f-4511-8b2a-3d68dd116906,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27cf437c-6f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.375 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27cf437c-6f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.380 2 INFO os_vif [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=27cf437c-6f1f-4511-8b2a-3d68dd116906,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27cf437c-6f')#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.417 2 DEBUG nova.compute.manager [req-ff2c5bbe-a516-46db-9580-3aa3e5ee116e req-16d2f120-5737-4dbc-b477-1ddacef4e91d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received event network-vif-unplugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.418 2 DEBUG oslo_concurrency.lockutils [req-ff2c5bbe-a516-46db-9580-3aa3e5ee116e req-16d2f120-5737-4dbc-b477-1ddacef4e91d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.419 2 DEBUG oslo_concurrency.lockutils [req-ff2c5bbe-a516-46db-9580-3aa3e5ee116e req-16d2f120-5737-4dbc-b477-1ddacef4e91d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.419 2 DEBUG oslo_concurrency.lockutils [req-ff2c5bbe-a516-46db-9580-3aa3e5ee116e req-16d2f120-5737-4dbc-b477-1ddacef4e91d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.420 2 DEBUG nova.compute.manager [req-ff2c5bbe-a516-46db-9580-3aa3e5ee116e req-16d2f120-5737-4dbc-b477-1ddacef4e91d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] No waiting events found dispatching network-vif-unplugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.420 2 DEBUG nova.compute.manager [req-ff2c5bbe-a516-46db-9580-3aa3e5ee116e req-16d2f120-5737-4dbc-b477-1ddacef4e91d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received event network-vif-unplugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.606 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.686 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.686 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:15:29 np0005466030 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[309830]: [NOTICE]   (309834) : haproxy version is 2.8.14-c23fe91
Oct  2 09:15:29 np0005466030 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[309830]: [NOTICE]   (309834) : path to executable is /usr/sbin/haproxy
Oct  2 09:15:29 np0005466030 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[309830]: [WARNING]  (309834) : Exiting Master process...
Oct  2 09:15:29 np0005466030 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[309830]: [WARNING]  (309834) : Exiting Master process...
Oct  2 09:15:29 np0005466030 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[309830]: [ALERT]    (309834) : Current worker (309836) exited with code 143 (Terminated)
Oct  2 09:15:29 np0005466030 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[309830]: [WARNING]  (309834) : All workers exited. Exiting... (0)
Oct  2 09:15:29 np0005466030 systemd[1]: libpod-3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a.scope: Deactivated successfully.
Oct  2 09:15:29 np0005466030 podman[310805]: 2025-10-02 13:15:29.772815535 +0000 UTC m=+0.416044940 container died 3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.878 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.879 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4184MB free_disk=20.942684173583984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.880 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.880 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.974 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3dbb48be-2da9-48eb-814a-94eac9968d0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.975 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:15:29 np0005466030 nova_compute[230518]: 2025-10-02 13:15:29.975 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:15:30 np0005466030 nova_compute[230518]: 2025-10-02 13:15:30.005 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct  2 09:15:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:30.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct  2 09:15:30 np0005466030 systemd[1]: var-lib-containers-storage-overlay-500922d7592b87d0de794352ea2980ab9aa55fc51cd3a81b2cc7670108d2967e-merged.mount: Deactivated successfully.
Oct  2 09:15:30 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a-userdata-shm.mount: Deactivated successfully.
Oct  2 09:15:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:15:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:30.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:15:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:15:30 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1328104633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:15:30 np0005466030 nova_compute[230518]: 2025-10-02 13:15:30.944 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.939s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:30 np0005466030 nova_compute[230518]: 2025-10-02 13:15:30.950 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:15:31 np0005466030 nova_compute[230518]: 2025-10-02 13:15:31.051 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:15:31 np0005466030 nova_compute[230518]: 2025-10-02 13:15:31.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:31 np0005466030 nova_compute[230518]: 2025-10-02 13:15:31.082 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:15:31 np0005466030 nova_compute[230518]: 2025-10-02 13:15:31.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:31 np0005466030 podman[310805]: 2025-10-02 13:15:31.186108476 +0000 UTC m=+1.829337871 container cleanup 3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:15:31 np0005466030 systemd[1]: libpod-conmon-3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a.scope: Deactivated successfully.
Oct  2 09:15:31 np0005466030 nova_compute[230518]: 2025-10-02 13:15:31.503 2 DEBUG nova.compute.manager [req-bd01780a-df62-4201-be31-df73e895dc60 req-3dc06d01-0a8b-417c-b14e-5582d1e63c58 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received event network-vif-plugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:15:31 np0005466030 nova_compute[230518]: 2025-10-02 13:15:31.504 2 DEBUG oslo_concurrency.lockutils [req-bd01780a-df62-4201-be31-df73e895dc60 req-3dc06d01-0a8b-417c-b14e-5582d1e63c58 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:31 np0005466030 nova_compute[230518]: 2025-10-02 13:15:31.504 2 DEBUG oslo_concurrency.lockutils [req-bd01780a-df62-4201-be31-df73e895dc60 req-3dc06d01-0a8b-417c-b14e-5582d1e63c58 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:31 np0005466030 nova_compute[230518]: 2025-10-02 13:15:31.504 2 DEBUG oslo_concurrency.lockutils [req-bd01780a-df62-4201-be31-df73e895dc60 req-3dc06d01-0a8b-417c-b14e-5582d1e63c58 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:31 np0005466030 nova_compute[230518]: 2025-10-02 13:15:31.504 2 DEBUG nova.compute.manager [req-bd01780a-df62-4201-be31-df73e895dc60 req-3dc06d01-0a8b-417c-b14e-5582d1e63c58 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] No waiting events found dispatching network-vif-plugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:15:31 np0005466030 nova_compute[230518]: 2025-10-02 13:15:31.505 2 WARNING nova.compute.manager [req-bd01780a-df62-4201-be31-df73e895dc60 req-3dc06d01-0a8b-417c-b14e-5582d1e63c58 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received unexpected event network-vif-plugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:15:32 np0005466030 podman[310885]: 2025-10-02 13:15:32.029236702 +0000 UTC m=+0.808099289 container remove 3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 09:15:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:32.038 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e5ef5b6d-3b0d-4f75-8df1-516c6ed18e12]: (4, ('Thu Oct  2 01:15:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 (3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a)\n3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a\nThu Oct  2 01:15:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 (3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a)\n3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:32.041 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f7025817-89e3-48aa-8bdb-3e4a5258442b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:32.041 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc02aa54-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:15:32 np0005466030 nova_compute[230518]: 2025-10-02 13:15:32.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:32 np0005466030 kernel: tapbc02aa54-d0: left promiscuous mode
Oct  2 09:15:32 np0005466030 nova_compute[230518]: 2025-10-02 13:15:32.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:32.050 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f48abee2-55b2-4299-8f50-22e4deaa705c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:32 np0005466030 nova_compute[230518]: 2025-10-02 13:15:32.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:32.083 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bd703fdf-0399-45bf-8029-9504795a2fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:32.085 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[287a0621-3ae6-4825-a1ee-de81d93d584f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:32.102 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9d6f806f-dfd7-4b4e-a714-99cb02733500]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870575, 'reachable_time': 35123, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310901, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:32 np0005466030 systemd[1]: run-netns-ovnmeta\x2dbc02aa54\x2dd19f\x2d4274\x2d8d92\x2dcbabe7917dd9.mount: Deactivated successfully.
Oct  2 09:15:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:32.108 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:15:32 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:15:32.108 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[db4ba028-cab4-4b39-9738-e2cbb80f0075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:15:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:15:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:32.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:15:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:32.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:34 np0005466030 podman[310928]: 2025-10-02 13:15:34.161317547 +0000 UTC m=+0.103414835 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 09:15:34 np0005466030 podman[310927]: 2025-10-02 13:15:34.165776727 +0000 UTC m=+0.109236418 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true)
Oct  2 09:15:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:34 np0005466030 nova_compute[230518]: 2025-10-02 13:15:34.260 2 INFO nova.virt.libvirt.driver [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Deleting instance files /var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f_del#033[00m
Oct  2 09:15:34 np0005466030 nova_compute[230518]: 2025-10-02 13:15:34.262 2 INFO nova.virt.libvirt.driver [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Deletion of /var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f_del complete#033[00m
Oct  2 09:15:34 np0005466030 nova_compute[230518]: 2025-10-02 13:15:34.311 2 INFO nova.compute.manager [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Took 5.40 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:15:34 np0005466030 nova_compute[230518]: 2025-10-02 13:15:34.312 2 DEBUG oslo.service.loopingcall [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:15:34 np0005466030 nova_compute[230518]: 2025-10-02 13:15:34.313 2 DEBUG nova.compute.manager [-] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:15:34 np0005466030 nova_compute[230518]: 2025-10-02 13:15:34.314 2 DEBUG nova.network.neutron [-] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:15:34 np0005466030 nova_compute[230518]: 2025-10-02 13:15:34.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:15:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:34.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:15:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:34.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:34 np0005466030 nova_compute[230518]: 2025-10-02 13:15:34.903 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410919.9022255, 66f3a080-c034-4465-9d17-ee4b4afe4592 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:15:34 np0005466030 nova_compute[230518]: 2025-10-02 13:15:34.904 2 INFO nova.compute.manager [-] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:15:34 np0005466030 nova_compute[230518]: 2025-10-02 13:15:34.923 2 DEBUG nova.compute.manager [None req-e195015c-bd40-4802-915e-2dd744f7841a - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:15:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:15:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:15:35 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:15:35 np0005466030 nova_compute[230518]: 2025-10-02 13:15:35.902 2 DEBUG nova.network.neutron [-] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:15:35 np0005466030 nova_compute[230518]: 2025-10-02 13:15:35.920 2 INFO nova.compute.manager [-] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Took 1.61 seconds to deallocate network for instance.#033[00m
Oct  2 09:15:36 np0005466030 nova_compute[230518]: 2025-10-02 13:15:36.020 2 DEBUG oslo_concurrency.lockutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:15:36 np0005466030 nova_compute[230518]: 2025-10-02 13:15:36.020 2 DEBUG oslo_concurrency.lockutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:15:36 np0005466030 nova_compute[230518]: 2025-10-02 13:15:36.053 2 DEBUG nova.compute.manager [req-878b5ccd-60bf-4955-9280-f68e28e96f24 req-7a6eff81-b7a3-4c90-957e-3c635eab71e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received event network-vif-deleted-27cf437c-6f1f-4511-8b2a-3d68dd116906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:15:36 np0005466030 nova_compute[230518]: 2025-10-02 13:15:36.075 2 DEBUG oslo_concurrency.processutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:15:36 np0005466030 nova_compute[230518]: 2025-10-02 13:15:36.107 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:36 np0005466030 nova_compute[230518]: 2025-10-02 13:15:36.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:36 np0005466030 nova_compute[230518]: 2025-10-02 13:15:36.112 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:36 np0005466030 nova_compute[230518]: 2025-10-02 13:15:36.113 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:36 np0005466030 nova_compute[230518]: 2025-10-02 13:15:36.113 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:36 np0005466030 nova_compute[230518]: 2025-10-02 13:15:36.113 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:15:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:36.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:15:36 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/188827119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:15:36 np0005466030 nova_compute[230518]: 2025-10-02 13:15:36.524 2 DEBUG oslo_concurrency.processutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:15:36 np0005466030 nova_compute[230518]: 2025-10-02 13:15:36.532 2 DEBUG nova.compute.provider_tree [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:15:36 np0005466030 nova_compute[230518]: 2025-10-02 13:15:36.558 2 DEBUG nova.scheduler.client.report [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:15:36 np0005466030 nova_compute[230518]: 2025-10-02 13:15:36.581 2 DEBUG oslo_concurrency.lockutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:36 np0005466030 nova_compute[230518]: 2025-10-02 13:15:36.618 2 INFO nova.scheduler.client.report [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Deleted allocations for instance 3dbb48be-2da9-48eb-814a-94eac9968d0f#033[00m
Oct  2 09:15:36 np0005466030 nova_compute[230518]: 2025-10-02 13:15:36.688 2 DEBUG oslo_concurrency.lockutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:15:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:15:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:36.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:15:37 np0005466030 nova_compute[230518]: 2025-10-02 13:15:37.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:38 np0005466030 nova_compute[230518]: 2025-10-02 13:15:38.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:38 np0005466030 nova_compute[230518]: 2025-10-02 13:15:38.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:15:38 np0005466030 nova_compute[230518]: 2025-10-02 13:15:38.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:15:38 np0005466030 nova_compute[230518]: 2025-10-02 13:15:38.065 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:15:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:15:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:38.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:15:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:38.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:39 np0005466030 nova_compute[230518]: 2025-10-02 13:15:39.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:39 np0005466030 nova_compute[230518]: 2025-10-02 13:15:39.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:15:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 66K writes, 260K keys, 66K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.05 MB/s#012Cumulative WAL: 66K writes, 24K syncs, 2.72 writes per sync, written: 0.25 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8130 writes, 30K keys, 8130 commit groups, 1.0 writes per commit group, ingest: 33.16 MB, 0.06 MB/s#012Interval WAL: 8131 writes, 3070 syncs, 2.65 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:15:40 np0005466030 nova_compute[230518]: 2025-10-02 13:15:40.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:15:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:40.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:40.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:41 np0005466030 nova_compute[230518]: 2025-10-02 13:15:41.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:15:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:15:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:15:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:42.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:15:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:42.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:43 np0005466030 podman[311149]: 2025-10-02 13:15:43.810520384 +0000 UTC m=+0.061951294 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct  2 09:15:43 np0005466030 podman[311150]: 2025-10-02 13:15:43.817229765 +0000 UTC m=+0.062091059 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true)
Oct  2 09:15:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:44 np0005466030 nova_compute[230518]: 2025-10-02 13:15:44.349 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410929.3479283, 3dbb48be-2da9-48eb-814a-94eac9968d0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:15:44 np0005466030 nova_compute[230518]: 2025-10-02 13:15:44.350 2 INFO nova.compute.manager [-] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:15:44 np0005466030 nova_compute[230518]: 2025-10-02 13:15:44.371 2 DEBUG nova.compute.manager [None req-f82a4546-1917-4784-b924-b8497c45c3f7 - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:15:44 np0005466030 nova_compute[230518]: 2025-10-02 13:15:44.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:44.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:44.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:46 np0005466030 nova_compute[230518]: 2025-10-02 13:15:46.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:46.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:46.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:48.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:48.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:49 np0005466030 nova_compute[230518]: 2025-10-02 13:15:49.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:50.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:50.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:51 np0005466030 nova_compute[230518]: 2025-10-02 13:15:51.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:52.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:15:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:52.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:15:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:54 np0005466030 nova_compute[230518]: 2025-10-02 13:15:54.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:54.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:54.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:56 np0005466030 nova_compute[230518]: 2025-10-02 13:15:56.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:15:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:15:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:56.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:15:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:56.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:58.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:15:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:15:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:58.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:15:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:15:59 np0005466030 nova_compute[230518]: 2025-10-02 13:15:59.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:00.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:00.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:01 np0005466030 nova_compute[230518]: 2025-10-02 13:16:01.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:16:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:02.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:16:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:02.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:04 np0005466030 nova_compute[230518]: 2025-10-02 13:16:04.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:16:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:04.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:16:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:04.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:04 np0005466030 podman[311189]: 2025-10-02 13:16:04.814106331 +0000 UTC m=+0.056003667 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:16:04 np0005466030 podman[311188]: 2025-10-02 13:16:04.842189093 +0000 UTC m=+0.091019647 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:16:06 np0005466030 nova_compute[230518]: 2025-10-02 13:16:06.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:06.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:16:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:06.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:16:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:08.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:08.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:09 np0005466030 nova_compute[230518]: 2025-10-02 13:16:09.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:16:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 14K writes, 75K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s#012Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.15 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1590 writes, 7740 keys, 1590 commit groups, 1.0 writes per commit group, ingest: 16.10 MB, 0.03 MB/s#012Interval WAL: 1590 writes, 1590 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     55.3      1.66              0.28        47    0.035       0      0       0.0       0.0#012  L6      1/0   11.52 MB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   5.1    116.4     99.2      4.67              1.42        46    0.102    327K    24K       0.0       0.0#012 Sum      1/0   11.52 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.1     85.9     87.7      6.33              1.70        93    0.068    327K    24K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.4     85.0     86.8      0.73              0.24        10    0.073     48K   2580       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   0.0    116.4     99.2      4.67              1.42        46    0.102    327K    24K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     55.4      1.66              0.28        46    0.036       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.090, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.54 GB write, 0.10 MB/s write, 0.53 GB read, 0.10 MB/s read, 6.3 seconds#012Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 59.78 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000312 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3440,57.38 MB,18.8744%) FilterBlock(93,913.05 KB,0.293305%) IndexBlock(93,1.51 MB,0.495318%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 09:16:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:10.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:16:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:10.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:16:11 np0005466030 nova_compute[230518]: 2025-10-02 13:16:11.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #151. Immutable memtables: 0.
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.160452) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 151
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971160492, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1809, "num_deletes": 262, "total_data_size": 4073040, "memory_usage": 4128304, "flush_reason": "Manual Compaction"}
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #152: started
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971175346, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 152, "file_size": 2675865, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73722, "largest_seqno": 75526, "table_properties": {"data_size": 2668131, "index_size": 4611, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16622, "raw_average_key_size": 20, "raw_value_size": 2652457, "raw_average_value_size": 3282, "num_data_blocks": 202, "num_entries": 808, "num_filter_entries": 808, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410829, "oldest_key_time": 1759410829, "file_creation_time": 1759410971, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 14944 microseconds, and 5842 cpu microseconds.
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.175389) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #152: 2675865 bytes OK
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.175413) [db/memtable_list.cc:519] [default] Level-0 commit table #152 started
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.179877) [db/memtable_list.cc:722] [default] Level-0 commit table #152: memtable #1 done
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.179932) EVENT_LOG_v1 {"time_micros": 1759410971179920, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.179959) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 4064639, prev total WAL file size 4064639, number of live WAL files 2.
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000148.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.181086) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373637' seq:72057594037927935, type:22 .. '6C6F676D0033303139' seq:0, type:0; will stop at (end)
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [152(2613KB)], [150(11MB)]
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971181123, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [152], "files_L6": [150], "score": -1, "input_data_size": 14751264, "oldest_snapshot_seqno": -1}
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #153: 9809 keys, 14611687 bytes, temperature: kUnknown
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971300500, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 153, "file_size": 14611687, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14545538, "index_size": 40500, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24581, "raw_key_size": 258068, "raw_average_key_size": 26, "raw_value_size": 14371017, "raw_average_value_size": 1465, "num_data_blocks": 1556, "num_entries": 9809, "num_filter_entries": 9809, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410971, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 153, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.300764) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 14611687 bytes
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.302077) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.5 rd, 122.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 11.5 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(11.0) write-amplify(5.5) OK, records in: 10348, records dropped: 539 output_compression: NoCompression
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.302091) EVENT_LOG_v1 {"time_micros": 1759410971302084, "job": 96, "event": "compaction_finished", "compaction_time_micros": 119451, "compaction_time_cpu_micros": 50643, "output_level": 6, "num_output_files": 1, "total_output_size": 14611687, "num_input_records": 10348, "num_output_records": 9809, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971302622, "job": 96, "event": "table_file_deletion", "file_number": 152}
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000150.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971304922, "job": 96, "event": "table_file_deletion", "file_number": 150}
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.180963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.305020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.305028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.305031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.305034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:16:11 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.305037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:16:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:16:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:12.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:16:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:16:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:12.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:16:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:14 np0005466030 nova_compute[230518]: 2025-10-02 13:16:14.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:14.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:14.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:14 np0005466030 podman[311233]: 2025-10-02 13:16:14.79878211 +0000 UTC m=+0.051693933 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  2 09:16:14 np0005466030 podman[311234]: 2025-10-02 13:16:14.85202681 +0000 UTC m=+0.086393251 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 09:16:16 np0005466030 nova_compute[230518]: 2025-10-02 13:16:16.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:16.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:16.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:16:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:18.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:16:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:18.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:19 np0005466030 nova_compute[230518]: 2025-10-02 13:16:19.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:16:20.298 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:16:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:16:20.299 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:16:20 np0005466030 nova_compute[230518]: 2025-10-02 13:16:20.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:16:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:20.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:16:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:16:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:20.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:16:21 np0005466030 nova_compute[230518]: 2025-10-02 13:16:21.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:22.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:22.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:16:23.303 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:16:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:24 np0005466030 nova_compute[230518]: 2025-10-02 13:16:24.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:24.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:16:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:24.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:16:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:16:25.971 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:16:25.971 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:16:25.971 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:26 np0005466030 nova_compute[230518]: 2025-10-02 13:16:26.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:26.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:26.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:28.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:28.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:29 np0005466030 nova_compute[230518]: 2025-10-02 13:16:29.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:30 np0005466030 nova_compute[230518]: 2025-10-02 13:16:30.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:30 np0005466030 nova_compute[230518]: 2025-10-02 13:16:30.081 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:30 np0005466030 nova_compute[230518]: 2025-10-02 13:16:30.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:30 np0005466030 nova_compute[230518]: 2025-10-02 13:16:30.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:30 np0005466030 nova_compute[230518]: 2025-10-02 13:16:30.082 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:16:30 np0005466030 nova_compute[230518]: 2025-10-02 13:16:30.082 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:16:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:16:30 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3079176688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:16:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:30.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:30 np0005466030 nova_compute[230518]: 2025-10-02 13:16:30.573 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:16:30 np0005466030 nova_compute[230518]: 2025-10-02 13:16:30.760 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:16:30 np0005466030 nova_compute[230518]: 2025-10-02 13:16:30.762 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4231MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:16:30 np0005466030 nova_compute[230518]: 2025-10-02 13:16:30.762 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:16:30 np0005466030 nova_compute[230518]: 2025-10-02 13:16:30.762 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:16:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:30.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:30 np0005466030 nova_compute[230518]: 2025-10-02 13:16:30.874 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:16:30 np0005466030 nova_compute[230518]: 2025-10-02 13:16:30.875 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:16:30 np0005466030 nova_compute[230518]: 2025-10-02 13:16:30.895 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:16:30 np0005466030 nova_compute[230518]: 2025-10-02 13:16:30.941 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:16:30 np0005466030 nova_compute[230518]: 2025-10-02 13:16:30.941 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:16:31 np0005466030 nova_compute[230518]: 2025-10-02 13:16:31.016 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:16:31 np0005466030 nova_compute[230518]: 2025-10-02 13:16:31.046 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:16:31 np0005466030 nova_compute[230518]: 2025-10-02 13:16:31.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:31 np0005466030 nova_compute[230518]: 2025-10-02 13:16:31.130 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:16:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:16:31 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1391326950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:16:31 np0005466030 nova_compute[230518]: 2025-10-02 13:16:31.760 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:16:31 np0005466030 nova_compute[230518]: 2025-10-02 13:16:31.766 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:16:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:32.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:32.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:32 np0005466030 nova_compute[230518]: 2025-10-02 13:16:32.821 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:16:32 np0005466030 nova_compute[230518]: 2025-10-02 13:16:32.851 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:16:32 np0005466030 nova_compute[230518]: 2025-10-02 13:16:32.852 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:16:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:34 np0005466030 nova_compute[230518]: 2025-10-02 13:16:34.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:34.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:34.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:35 np0005466030 nova_compute[230518]: 2025-10-02 13:16:35.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:35 np0005466030 nova_compute[230518]: 2025-10-02 13:16:35.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:35 np0005466030 nova_compute[230518]: 2025-10-02 13:16:35.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:35 np0005466030 nova_compute[230518]: 2025-10-02 13:16:35.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:35 np0005466030 nova_compute[230518]: 2025-10-02 13:16:35.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:16:35 np0005466030 nova_compute[230518]: 2025-10-02 13:16:35.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:35 np0005466030 podman[311318]: 2025-10-02 13:16:35.816992718 +0000 UTC m=+0.049855105 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:16:35 np0005466030 podman[311317]: 2025-10-02 13:16:35.832344049 +0000 UTC m=+0.079808623 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:16:36 np0005466030 nova_compute[230518]: 2025-10-02 13:16:36.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:36.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:36.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:37 np0005466030 nova_compute[230518]: 2025-10-02 13:16:37.086 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:38 np0005466030 nova_compute[230518]: 2025-10-02 13:16:38.055 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:38 np0005466030 nova_compute[230518]: 2025-10-02 13:16:38.056 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:16:38 np0005466030 nova_compute[230518]: 2025-10-02 13:16:38.056 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:16:38 np0005466030 nova_compute[230518]: 2025-10-02 13:16:38.085 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:16:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:38.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:38.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:39 np0005466030 nova_compute[230518]: 2025-10-02 13:16:39.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:40 np0005466030 nova_compute[230518]: 2025-10-02 13:16:40.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:40.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:40.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:41 np0005466030 nova_compute[230518]: 2025-10-02 13:16:41.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:41 np0005466030 nova_compute[230518]: 2025-10-02 13:16:41.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:42.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:42.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:43 np0005466030 nova_compute[230518]: 2025-10-02 13:16:43.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:43 np0005466030 nova_compute[230518]: 2025-10-02 13:16:43.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:16:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:16:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:16:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:16:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:16:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:16:44 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:16:44 np0005466030 nova_compute[230518]: 2025-10-02 13:16:44.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:16:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:44.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:16:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:44.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:45 np0005466030 nova_compute[230518]: 2025-10-02 13:16:45.059 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:45 np0005466030 ceph-mgr[81282]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct  2 09:16:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:16:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:16:45 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:16:45 np0005466030 podman[311613]: 2025-10-02 13:16:45.805495535 +0000 UTC m=+0.053753667 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:16:45 np0005466030 podman[311614]: 2025-10-02 13:16:45.821211758 +0000 UTC m=+0.067613161 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 09:16:46 np0005466030 nova_compute[230518]: 2025-10-02 13:16:46.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:46.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:46 np0005466030 nova_compute[230518]: 2025-10-02 13:16:46.700 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:16:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:46.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:16:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:48.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:16:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:48.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:49 np0005466030 nova_compute[230518]: 2025-10-02 13:16:49.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:50.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:50.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:51 np0005466030 nova_compute[230518]: 2025-10-02 13:16:51.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:16:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:16:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:16:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:52.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:16:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:52.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:54 np0005466030 nova_compute[230518]: 2025-10-02 13:16:54.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:54.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:54.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:56 np0005466030 nova_compute[230518]: 2025-10-02 13:16:56.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:16:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:56.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:56.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:58.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:16:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:16:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:58.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:16:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:16:59 np0005466030 nova_compute[230518]: 2025-10-02 13:16:59.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:00.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:00.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:01 np0005466030 nova_compute[230518]: 2025-10-02 13:17:01.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:02.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:02.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:04 np0005466030 nova_compute[230518]: 2025-10-02 13:17:04.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:17:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:04.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:17:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:04.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:17:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/87994602' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:17:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:17:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/87994602' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:17:06 np0005466030 nova_compute[230518]: 2025-10-02 13:17:06.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:06.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:06 np0005466030 podman[311705]: 2025-10-02 13:17:06.807361306 +0000 UTC m=+0.053830279 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Oct  2 09:17:06 np0005466030 podman[311704]: 2025-10-02 13:17:06.832305749 +0000 UTC m=+0.085704239 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct  2 09:17:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:06.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:08.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:08.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:09 np0005466030 nova_compute[230518]: 2025-10-02 13:17:09.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:10.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:10.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:11 np0005466030 nova_compute[230518]: 2025-10-02 13:17:11.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:17:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:12.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:17:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:12.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #154. Immutable memtables: 0.
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.180036) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 154
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033180068, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 896, "num_deletes": 251, "total_data_size": 1844974, "memory_usage": 1878224, "flush_reason": "Manual Compaction"}
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #155: started
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033217422, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 155, "file_size": 1207876, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75531, "largest_seqno": 76422, "table_properties": {"data_size": 1203607, "index_size": 1984, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9581, "raw_average_key_size": 19, "raw_value_size": 1195082, "raw_average_value_size": 2484, "num_data_blocks": 85, "num_entries": 481, "num_filter_entries": 481, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410972, "oldest_key_time": 1759410972, "file_creation_time": 1759411033, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 37427 microseconds, and 3635 cpu microseconds.
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.217463) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #155: 1207876 bytes OK
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.217482) [db/memtable_list.cc:519] [default] Level-0 commit table #155 started
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.251902) [db/memtable_list.cc:722] [default] Level-0 commit table #155: memtable #1 done
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.251944) EVENT_LOG_v1 {"time_micros": 1759411033251935, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.251966) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 1840371, prev total WAL file size 1840371, number of live WAL files 2.
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000151.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.252696) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [155(1179KB)], [153(13MB)]
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033252753, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [155], "files_L6": [153], "score": -1, "input_data_size": 15819563, "oldest_snapshot_seqno": -1}
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #156: 9771 keys, 13946196 bytes, temperature: kUnknown
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033355543, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 156, "file_size": 13946196, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13880908, "index_size": 39767, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24453, "raw_key_size": 257971, "raw_average_key_size": 26, "raw_value_size": 13707628, "raw_average_value_size": 1402, "num_data_blocks": 1520, "num_entries": 9771, "num_filter_entries": 9771, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759411033, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 156, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.355766) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 13946196 bytes
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.389249) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.8 rd, 135.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 13.9 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(24.6) write-amplify(11.5) OK, records in: 10290, records dropped: 519 output_compression: NoCompression
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.389301) EVENT_LOG_v1 {"time_micros": 1759411033389271, "job": 98, "event": "compaction_finished", "compaction_time_micros": 102850, "compaction_time_cpu_micros": 31592, "output_level": 6, "num_output_files": 1, "total_output_size": 13946196, "num_input_records": 10290, "num_output_records": 9771, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033389627, "job": 98, "event": "table_file_deletion", "file_number": 155}
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000153.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033392207, "job": 98, "event": "table_file_deletion", "file_number": 153}
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.252576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.392330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.392335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.392337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.392338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.392340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:14 np0005466030 nova_compute[230518]: 2025-10-02 13:17:14.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:14.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:17:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:14.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:17:16 np0005466030 nova_compute[230518]: 2025-10-02 13:17:16.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #157. Immutable memtables: 0.
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.379742) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 157
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036379782, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 282, "num_deletes": 251, "total_data_size": 76208, "memory_usage": 81704, "flush_reason": "Manual Compaction"}
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #158: started
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036381889, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 158, "file_size": 49090, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76427, "largest_seqno": 76704, "table_properties": {"data_size": 47203, "index_size": 115, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5416, "raw_average_key_size": 20, "raw_value_size": 43494, "raw_average_value_size": 162, "num_data_blocks": 5, "num_entries": 268, "num_filter_entries": 268, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411034, "oldest_key_time": 1759411034, "file_creation_time": 1759411036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 2182 microseconds, and 849 cpu microseconds.
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.381925) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #158: 49090 bytes OK
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.381943) [db/memtable_list.cc:519] [default] Level-0 commit table #158 started
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.383916) [db/memtable_list.cc:722] [default] Level-0 commit table #158: memtable #1 done
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.383933) EVENT_LOG_v1 {"time_micros": 1759411036383928, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.383950) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 74079, prev total WAL file size 74079, number of live WAL files 2.
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000154.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.384358) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353037' seq:72057594037927935, type:22 .. '6D6772737461740032373539' seq:0, type:0; will stop at (end)
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [158(47KB)], [156(13MB)]
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036384383, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [158], "files_L6": [156], "score": -1, "input_data_size": 13995286, "oldest_snapshot_seqno": -1}
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #159: 9530 keys, 10147408 bytes, temperature: kUnknown
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036446585, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 159, "file_size": 10147408, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10088625, "index_size": 33838, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23877, "raw_key_size": 253162, "raw_average_key_size": 26, "raw_value_size": 9924406, "raw_average_value_size": 1041, "num_data_blocks": 1272, "num_entries": 9530, "num_filter_entries": 9530, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759411036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 159, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.446907) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10147408 bytes
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.448542) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 224.7 rd, 162.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 13.3 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(491.8) write-amplify(206.7) OK, records in: 10039, records dropped: 509 output_compression: NoCompression
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.448573) EVENT_LOG_v1 {"time_micros": 1759411036448559, "job": 100, "event": "compaction_finished", "compaction_time_micros": 62296, "compaction_time_cpu_micros": 35764, "output_level": 6, "num_output_files": 1, "total_output_size": 10147408, "num_input_records": 10039, "num_output_records": 9530, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036448757, "job": 100, "event": "table_file_deletion", "file_number": 158}
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000156.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036453696, "job": 100, "event": "table_file_deletion", "file_number": 156}
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.384301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.453815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.453821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.453823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.453824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:16 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.453826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:17:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:16.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:16 np0005466030 podman[311750]: 2025-10-02 13:17:16.809045809 +0000 UTC m=+0.056596386 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:17:16 np0005466030 podman[311749]: 2025-10-02 13:17:16.810468174 +0000 UTC m=+0.061390067 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 09:17:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:17:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:16.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:17:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:17:18 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1157735850' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:17:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:17:18 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1157735850' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:17:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:18.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:18.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:17:19 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3945750907' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:17:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:17:19 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3945750907' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:17:19 np0005466030 nova_compute[230518]: 2025-10-02 13:17:19.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:20 np0005466030 nova_compute[230518]: 2025-10-02 13:17:20.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:20 np0005466030 nova_compute[230518]: 2025-10-02 13:17:20.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:17:20 np0005466030 nova_compute[230518]: 2025-10-02 13:17:20.068 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:17:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:17:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:20.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:17:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:17:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:20.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:17:21 np0005466030 nova_compute[230518]: 2025-10-02 13:17:21.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:17:21 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3295771948' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:17:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:17:21 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3295771948' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:17:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.002999973s ======
Oct  2 09:17:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:22.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002999973s
Oct  2 09:17:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:22.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:17:23.465 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:17:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:17:23.466 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:17:23 np0005466030 nova_compute[230518]: 2025-10-02 13:17:23.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:24 np0005466030 nova_compute[230518]: 2025-10-02 13:17:24.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:24.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:17:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:24.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:17:25 np0005466030 nova_compute[230518]: 2025-10-02 13:17:25.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:25 np0005466030 nova_compute[230518]: 2025-10-02 13:17:25.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:17:25.973 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:17:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:17:25.974 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:17:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:17:25.975 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:17:26 np0005466030 nova_compute[230518]: 2025-10-02 13:17:26.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:26.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:26.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:17:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:28.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:17:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:17:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:28.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:17:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:29 np0005466030 nova_compute[230518]: 2025-10-02 13:17:29.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:30.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:30.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:31 np0005466030 nova_compute[230518]: 2025-10-02 13:17:31.067 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:31 np0005466030 nova_compute[230518]: 2025-10-02 13:17:31.088 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:17:31 np0005466030 nova_compute[230518]: 2025-10-02 13:17:31.088 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:17:31 np0005466030 nova_compute[230518]: 2025-10-02 13:17:31.088 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:17:31 np0005466030 nova_compute[230518]: 2025-10-02 13:17:31.089 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:17:31 np0005466030 nova_compute[230518]: 2025-10-02 13:17:31.089 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:17:31 np0005466030 nova_compute[230518]: 2025-10-02 13:17:31.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:17:31.468 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:17:31 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:17:31 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/300480324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:17:31 np0005466030 nova_compute[230518]: 2025-10-02 13:17:31.607 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:17:31 np0005466030 nova_compute[230518]: 2025-10-02 13:17:31.748 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:17:31 np0005466030 nova_compute[230518]: 2025-10-02 13:17:31.749 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4212MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:17:31 np0005466030 nova_compute[230518]: 2025-10-02 13:17:31.750 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:17:31 np0005466030 nova_compute[230518]: 2025-10-02 13:17:31.750 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:17:31 np0005466030 nova_compute[230518]: 2025-10-02 13:17:31.847 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:17:31 np0005466030 nova_compute[230518]: 2025-10-02 13:17:31.847 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:17:31 np0005466030 nova_compute[230518]: 2025-10-02 13:17:31.864 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:17:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:17:32 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1010626883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:17:32 np0005466030 nova_compute[230518]: 2025-10-02 13:17:32.313 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:17:32 np0005466030 nova_compute[230518]: 2025-10-02 13:17:32.320 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:17:32 np0005466030 nova_compute[230518]: 2025-10-02 13:17:32.335 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:17:32 np0005466030 nova_compute[230518]: 2025-10-02 13:17:32.336 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:17:32 np0005466030 nova_compute[230518]: 2025-10-02 13:17:32.337 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:17:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e396 e396: 3 total, 3 up, 3 in
Oct  2 09:17:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:32.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:32.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:34 np0005466030 nova_compute[230518]: 2025-10-02 13:17:34.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e397 e397: 3 total, 3 up, 3 in
Oct  2 09:17:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:34.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:34.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:17:36 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1633275689' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:17:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:17:36 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1633275689' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:17:36 np0005466030 nova_compute[230518]: 2025-10-02 13:17:36.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:36 np0005466030 nova_compute[230518]: 2025-10-02 13:17:36.322 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:36 np0005466030 nova_compute[230518]: 2025-10-02 13:17:36.323 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:36 np0005466030 nova_compute[230518]: 2025-10-02 13:17:36.323 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:36 np0005466030 nova_compute[230518]: 2025-10-02 13:17:36.323 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:17:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:36.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:36.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:37 np0005466030 nova_compute[230518]: 2025-10-02 13:17:37.049 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:37 np0005466030 podman[311834]: 2025-10-02 13:17:37.80043397 +0000 UTC m=+0.046334675 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent)
Oct  2 09:17:37 np0005466030 podman[311833]: 2025-10-02 13:17:37.82916695 +0000 UTC m=+0.079793343 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Oct  2 09:17:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:38.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:38.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:39 np0005466030 nova_compute[230518]: 2025-10-02 13:17:39.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:39 np0005466030 nova_compute[230518]: 2025-10-02 13:17:39.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:40 np0005466030 nova_compute[230518]: 2025-10-02 13:17:40.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:40 np0005466030 nova_compute[230518]: 2025-10-02 13:17:40.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:17:40 np0005466030 nova_compute[230518]: 2025-10-02 13:17:40.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:17:40 np0005466030 nova_compute[230518]: 2025-10-02 13:17:40.068 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:17:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:40.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:40.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:41 np0005466030 nova_compute[230518]: 2025-10-02 13:17:41.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e398 e398: 3 total, 3 up, 3 in
Oct  2 09:17:41 np0005466030 nova_compute[230518]: 2025-10-02 13:17:41.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:17:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:42.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:17:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:42.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:43 np0005466030 nova_compute[230518]: 2025-10-02 13:17:43.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:17:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:44 np0005466030 nova_compute[230518]: 2025-10-02 13:17:44.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:44.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:44.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:46 np0005466030 nova_compute[230518]: 2025-10-02 13:17:46.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:46.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:46.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:47 np0005466030 podman[311881]: 2025-10-02 13:17:47.799264633 +0000 UTC m=+0.057731212 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 09:17:47 np0005466030 podman[311882]: 2025-10-02 13:17:47.802020159 +0000 UTC m=+0.055188482 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 09:17:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:48.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:17:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:48.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:17:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:49 np0005466030 nova_compute[230518]: 2025-10-02 13:17:49.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:50.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:50.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:51 np0005466030 nova_compute[230518]: 2025-10-02 13:17:51.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:52 np0005466030 podman[312191]: 2025-10-02 13:17:52.557232366 +0000 UTC m=+0.029334982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 09:17:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:52.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:52 np0005466030 podman[312191]: 2025-10-02 13:17:52.866901308 +0000 UTC m=+0.339003904 container create 3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jepsen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  2 09:17:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:52.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:53 np0005466030 systemd[1]: Started libpod-conmon-3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c.scope.
Oct  2 09:17:53 np0005466030 systemd[1]: Started libcrun container.
Oct  2 09:17:53 np0005466030 podman[312191]: 2025-10-02 13:17:53.124608522 +0000 UTC m=+0.596711148 container init 3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  2 09:17:53 np0005466030 podman[312191]: 2025-10-02 13:17:53.132660675 +0000 UTC m=+0.604763271 container start 3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jepsen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  2 09:17:53 np0005466030 vigilant_jepsen[312207]: 167 167
Oct  2 09:17:53 np0005466030 systemd[1]: libpod-3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c.scope: Deactivated successfully.
Oct  2 09:17:53 np0005466030 conmon[312207]: conmon 3a7bb1b1f47b3f8d921d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c.scope/container/memory.events
Oct  2 09:17:53 np0005466030 podman[312191]: 2025-10-02 13:17:53.15420066 +0000 UTC m=+0.626303276 container attach 3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jepsen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct  2 09:17:53 np0005466030 podman[312191]: 2025-10-02 13:17:53.155686927 +0000 UTC m=+0.627789523 container died 3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jepsen, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  2 09:17:53 np0005466030 systemd[1]: var-lib-containers-storage-overlay-bbdf6ff5aee07841ef3644475a497a49320d1921a2cfc0c62f79dac3c7e37c50-merged.mount: Deactivated successfully.
Oct  2 09:17:53 np0005466030 podman[312191]: 2025-10-02 13:17:53.200012858 +0000 UTC m=+0.672115454 container remove 3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jepsen, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 09:17:53 np0005466030 systemd[1]: libpod-conmon-3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c.scope: Deactivated successfully.
Oct  2 09:17:53 np0005466030 podman[312231]: 2025-10-02 13:17:53.394894331 +0000 UTC m=+0.063495764 container create ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:17:53 np0005466030 systemd[1]: Started libpod-conmon-ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c.scope.
Oct  2 09:17:53 np0005466030 podman[312231]: 2025-10-02 13:17:53.356668481 +0000 UTC m=+0.025269914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  2 09:17:53 np0005466030 systemd[1]: Started libcrun container.
Oct  2 09:17:53 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a67d161a7437767e344b4c7731a97feb3235aab033214bfd49d87256f59f77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  2 09:17:53 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a67d161a7437767e344b4c7731a97feb3235aab033214bfd49d87256f59f77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  2 09:17:53 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a67d161a7437767e344b4c7731a97feb3235aab033214bfd49d87256f59f77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  2 09:17:53 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a67d161a7437767e344b4c7731a97feb3235aab033214bfd49d87256f59f77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  2 09:17:53 np0005466030 podman[312231]: 2025-10-02 13:17:53.490668215 +0000 UTC m=+0.159269668 container init ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  2 09:17:53 np0005466030 podman[312231]: 2025-10-02 13:17:53.497617132 +0000 UTC m=+0.166218545 container start ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:17:53 np0005466030 podman[312231]: 2025-10-02 13:17:53.501491894 +0000 UTC m=+0.170093347 container attach ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:17:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:17:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:17:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:54 np0005466030 nova_compute[230518]: 2025-10-02 13:17:54.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:54 np0005466030 great_ritchie[312247]: [
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:    {
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:        "available": false,
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:        "ceph_device": false,
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:        "lsm_data": {},
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:        "lvs": [],
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:        "path": "/dev/sr0",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:        "rejected_reasons": [
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "Has a FileSystem",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "Insufficient space (<5GB)"
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:        ],
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:        "sys_api": {
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "actuators": null,
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "device_nodes": "sr0",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "devname": "sr0",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "human_readable_size": "482.00 KB",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "id_bus": "ata",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "model": "QEMU DVD-ROM",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "nr_requests": "2",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "parent": "/dev/sr0",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "partitions": {},
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "path": "/dev/sr0",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "removable": "1",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "rev": "2.5+",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "ro": "0",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "rotational": "0",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "sas_address": "",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "sas_device_handle": "",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "scheduler_mode": "mq-deadline",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "sectors": 0,
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "sectorsize": "2048",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "size": 493568.0,
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "support_discard": "2048",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "type": "disk",
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:            "vendor": "QEMU"
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:        }
Oct  2 09:17:54 np0005466030 great_ritchie[312247]:    }
Oct  2 09:17:54 np0005466030 great_ritchie[312247]: ]
Oct  2 09:17:54 np0005466030 systemd[1]: libpod-ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c.scope: Deactivated successfully.
Oct  2 09:17:54 np0005466030 podman[312231]: 2025-10-02 13:17:54.594639363 +0000 UTC m=+1.263240776 container died ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  2 09:17:54 np0005466030 systemd[1]: libpod-ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c.scope: Consumed 1.078s CPU time.
Oct  2 09:17:54 np0005466030 systemd[1]: var-lib-containers-storage-overlay-37a67d161a7437767e344b4c7731a97feb3235aab033214bfd49d87256f59f77-merged.mount: Deactivated successfully.
Oct  2 09:17:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:54.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:54 np0005466030 podman[312231]: 2025-10-02 13:17:54.685892115 +0000 UTC m=+1.354493528 container remove ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  2 09:17:54 np0005466030 systemd[1]: libpod-conmon-ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c.scope: Deactivated successfully.
Oct  2 09:17:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:54.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:17:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:17:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:17:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:17:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:17:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:17:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:17:56 np0005466030 nova_compute[230518]: 2025-10-02 13:17:56.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:17:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:56.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:56.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:17:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:58.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:17:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:17:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:17:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:58.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:17:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:17:59 np0005466030 nova_compute[230518]: 2025-10-02 13:17:59.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:00.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e399 e399: 3 total, 3 up, 3 in
Oct  2 09:18:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:00.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:01 np0005466030 nova_compute[230518]: 2025-10-02 13:18:01.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:02 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:18:02 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:18:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:02.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:02.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:04 np0005466030 nova_compute[230518]: 2025-10-02 13:18:04.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:04.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:04.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:06 np0005466030 nova_compute[230518]: 2025-10-02 13:18:06.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:06.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:06.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:08.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:08 np0005466030 podman[313413]: 2025-10-02 13:18:08.804589906 +0000 UTC m=+0.051046732 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:18:08 np0005466030 podman[313412]: 2025-10-02 13:18:08.866301932 +0000 UTC m=+0.112529041 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:18:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:08.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:09 np0005466030 nova_compute[230518]: 2025-10-02 13:18:09.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:10.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:10.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:11 np0005466030 nova_compute[230518]: 2025-10-02 13:18:11.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:12.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:12.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:14 np0005466030 nova_compute[230518]: 2025-10-02 13:18:14.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:18:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:14.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:18:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:14.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:16 np0005466030 ovn_controller[129257]: 2025-10-02T13:18:16Z|00866|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct  2 09:18:16 np0005466030 nova_compute[230518]: 2025-10-02 13:18:16.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:16.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:16.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:18.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:18 np0005466030 podman[313458]: 2025-10-02 13:18:18.802125788 +0000 UTC m=+0.056808843 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Oct  2 09:18:18 np0005466030 podman[313457]: 2025-10-02 13:18:18.821232067 +0000 UTC m=+0.078130582 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:18:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:18.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:19 np0005466030 nova_compute[230518]: 2025-10-02 13:18:19.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:20.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:20.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:21 np0005466030 nova_compute[230518]: 2025-10-02 13:18:21.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:22.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:22.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:24 np0005466030 nova_compute[230518]: 2025-10-02 13:18:24.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:18:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:24.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:18:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:24.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e400 e400: 3 total, 3 up, 3 in
Oct  2 09:18:25 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #56. Immutable memtables: 12.
Oct  2 09:18:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:18:25.975 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:18:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:18:25.976 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:18:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:18:25.976 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:18:26 np0005466030 nova_compute[230518]: 2025-10-02 13:18:26.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:26.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:26.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:18:27.318 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:18:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:18:27.320 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:18:27 np0005466030 nova_compute[230518]: 2025-10-02 13:18:27.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:18:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:28.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:18:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:18:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:28.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:18:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:29 np0005466030 nova_compute[230518]: 2025-10-02 13:18:29.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:30 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:18:30.322 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:18:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:30.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:30.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:31 np0005466030 nova_compute[230518]: 2025-10-02 13:18:31.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:32 np0005466030 nova_compute[230518]: 2025-10-02 13:18:32.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:32 np0005466030 nova_compute[230518]: 2025-10-02 13:18:32.096 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:18:32 np0005466030 nova_compute[230518]: 2025-10-02 13:18:32.096 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:18:32 np0005466030 nova_compute[230518]: 2025-10-02 13:18:32.096 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:18:32 np0005466030 nova_compute[230518]: 2025-10-02 13:18:32.096 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:18:32 np0005466030 nova_compute[230518]: 2025-10-02 13:18:32.097 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:18:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:18:32 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/838882996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:18:32 np0005466030 nova_compute[230518]: 2025-10-02 13:18:32.514 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:18:32 np0005466030 nova_compute[230518]: 2025-10-02 13:18:32.690 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:18:32 np0005466030 nova_compute[230518]: 2025-10-02 13:18:32.691 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4212MB free_disk=20.942913055419922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:18:32 np0005466030 nova_compute[230518]: 2025-10-02 13:18:32.692 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:18:32 np0005466030 nova_compute[230518]: 2025-10-02 13:18:32.692 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:18:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:18:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:32.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:18:32 np0005466030 nova_compute[230518]: 2025-10-02 13:18:32.771 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:18:32 np0005466030 nova_compute[230518]: 2025-10-02 13:18:32.772 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:18:32 np0005466030 nova_compute[230518]: 2025-10-02 13:18:32.802 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:18:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:32.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:18:33 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/604091308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:18:33 np0005466030 nova_compute[230518]: 2025-10-02 13:18:33.283 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:18:33 np0005466030 nova_compute[230518]: 2025-10-02 13:18:33.291 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:18:33 np0005466030 nova_compute[230518]: 2025-10-02 13:18:33.307 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:18:33 np0005466030 nova_compute[230518]: 2025-10-02 13:18:33.309 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:18:33 np0005466030 nova_compute[230518]: 2025-10-02 13:18:33.309 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:18:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:34 np0005466030 nova_compute[230518]: 2025-10-02 13:18:34.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:34.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:34.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:36 np0005466030 nova_compute[230518]: 2025-10-02 13:18:36.309 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:36 np0005466030 nova_compute[230518]: 2025-10-02 13:18:36.310 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:36 np0005466030 nova_compute[230518]: 2025-10-02 13:18:36.310 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:18:36 np0005466030 nova_compute[230518]: 2025-10-02 13:18:36.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:18:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:36.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:18:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:36.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:37 np0005466030 nova_compute[230518]: 2025-10-02 13:18:37.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:38.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:38.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:39 np0005466030 nova_compute[230518]: 2025-10-02 13:18:39.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:39 np0005466030 nova_compute[230518]: 2025-10-02 13:18:39.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:39 np0005466030 podman[313543]: 2025-10-02 13:18:39.829730404 +0000 UTC m=+0.059508388 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Oct  2 09:18:39 np0005466030 podman[313542]: 2025-10-02 13:18:39.925303151 +0000 UTC m=+0.152974999 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  2 09:18:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:18:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:40.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:18:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:40.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:41 np0005466030 nova_compute[230518]: 2025-10-02 13:18:41.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:41 np0005466030 nova_compute[230518]: 2025-10-02 13:18:41.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:18:41 np0005466030 nova_compute[230518]: 2025-10-02 13:18:41.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:18:41 np0005466030 nova_compute[230518]: 2025-10-02 13:18:41.065 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:18:41 np0005466030 nova_compute[230518]: 2025-10-02 13:18:41.066 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:41 np0005466030 nova_compute[230518]: 2025-10-02 13:18:41.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:42.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:42.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:43 np0005466030 nova_compute[230518]: 2025-10-02 13:18:43.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e401 e401: 3 total, 3 up, 3 in
Oct  2 09:18:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:44 np0005466030 nova_compute[230518]: 2025-10-02 13:18:44.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:44.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:44.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:45 np0005466030 nova_compute[230518]: 2025-10-02 13:18:45.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:45 np0005466030 nova_compute[230518]: 2025-10-02 13:18:45.983 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:18:46 np0005466030 nova_compute[230518]: 2025-10-02 13:18:46.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:46.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:47.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:48.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:18:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:49.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:18:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:49 np0005466030 nova_compute[230518]: 2025-10-02 13:18:49.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:49 np0005466030 podman[313587]: 2025-10-02 13:18:49.789400792 +0000 UTC m=+0.047777140 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 09:18:49 np0005466030 podman[313588]: 2025-10-02 13:18:49.803991089 +0000 UTC m=+0.058767173 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct  2 09:18:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:50.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:51.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:51 np0005466030 nova_compute[230518]: 2025-10-02 13:18:51.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e402 e402: 3 total, 3 up, 3 in
Oct  2 09:18:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:52.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:18:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:53.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:18:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:54 np0005466030 nova_compute[230518]: 2025-10-02 13:18:54.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:54.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:55.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:56 np0005466030 nova_compute[230518]: 2025-10-02 13:18:56.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:18:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:56.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:57.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:58.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:18:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:18:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:59.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:18:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:18:59 np0005466030 nova_compute[230518]: 2025-10-02 13:18:59.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:00.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:19:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:01.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:19:01 np0005466030 nova_compute[230518]: 2025-10-02 13:19:01.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:02.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:19:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:03.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:19:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:04 np0005466030 nova_compute[230518]: 2025-10-02 13:19:04.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:04.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:19:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:05.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:19:06 np0005466030 nova_compute[230518]: 2025-10-02 13:19:06.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:19:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:19:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:19:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:19:06 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:19:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:06.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:07.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:19:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:08.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:19:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:19:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:09.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:19:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:09 np0005466030 nova_compute[230518]: 2025-10-02 13:19:09.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:10.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:10 np0005466030 podman[313756]: 2025-10-02 13:19:10.80907777 +0000 UTC m=+0.057076371 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:19:10 np0005466030 podman[313755]: 2025-10-02 13:19:10.864155918 +0000 UTC m=+0.112826820 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:19:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:11.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:11 np0005466030 nova_compute[230518]: 2025-10-02 13:19:11.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:12.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:19:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:13.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:19:13 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:19:13 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:19:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:14 np0005466030 nova_compute[230518]: 2025-10-02 13:19:14.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:14.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:15.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:16 np0005466030 nova_compute[230518]: 2025-10-02 13:19:16.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:16.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:19:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:17.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:19:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:19:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:18.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:19:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:19:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:19.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:19:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:19.344 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:19:19 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:19.345 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:19:19 np0005466030 nova_compute[230518]: 2025-10-02 13:19:19.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:19 np0005466030 nova_compute[230518]: 2025-10-02 13:19:19.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:20 np0005466030 podman[313848]: 2025-10-02 13:19:20.803395173 +0000 UTC m=+0.053565301 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct  2 09:19:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:20.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:20 np0005466030 podman[313849]: 2025-10-02 13:19:20.812432637 +0000 UTC m=+0.060942163 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:19:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:21.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:21 np0005466030 nova_compute[230518]: 2025-10-02 13:19:21.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:22.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:23.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:19:24 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/404047804' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:19:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:19:24 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/404047804' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:19:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:24 np0005466030 nova_compute[230518]: 2025-10-02 13:19:24.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:24.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:25.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:25.347 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:19:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:25.975 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:19:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:25.976 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:19:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:25.976 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:19:26 np0005466030 nova_compute[230518]: 2025-10-02 13:19:26.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:26.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:27.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:19:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:28.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:19:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:29.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:29 np0005466030 nova_compute[230518]: 2025-10-02 13:19:29.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:19:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:30.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:19:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:19:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:31.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:19:31 np0005466030 nova_compute[230518]: 2025-10-02 13:19:31.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e403 e403: 3 total, 3 up, 3 in
Oct  2 09:19:32 np0005466030 nova_compute[230518]: 2025-10-02 13:19:32.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:32 np0005466030 nova_compute[230518]: 2025-10-02 13:19:32.080 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:19:32 np0005466030 nova_compute[230518]: 2025-10-02 13:19:32.080 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:19:32 np0005466030 nova_compute[230518]: 2025-10-02 13:19:32.080 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:19:32 np0005466030 nova_compute[230518]: 2025-10-02 13:19:32.081 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:19:32 np0005466030 nova_compute[230518]: 2025-10-02 13:19:32.081 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:19:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:19:32 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3958918451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:19:32 np0005466030 nova_compute[230518]: 2025-10-02 13:19:32.609 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:19:32 np0005466030 nova_compute[230518]: 2025-10-02 13:19:32.798 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:19:32 np0005466030 nova_compute[230518]: 2025-10-02 13:19:32.800 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4230MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:19:32 np0005466030 nova_compute[230518]: 2025-10-02 13:19:32.800 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:19:32 np0005466030 nova_compute[230518]: 2025-10-02 13:19:32.801 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:19:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:32.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:32 np0005466030 nova_compute[230518]: 2025-10-02 13:19:32.936 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:19:32 np0005466030 nova_compute[230518]: 2025-10-02 13:19:32.936 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:19:33 np0005466030 nova_compute[230518]: 2025-10-02 13:19:33.017 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:19:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:19:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:33.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:19:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:19:33 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2642578138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:19:33 np0005466030 nova_compute[230518]: 2025-10-02 13:19:33.504 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:19:33 np0005466030 nova_compute[230518]: 2025-10-02 13:19:33.511 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:19:33 np0005466030 nova_compute[230518]: 2025-10-02 13:19:33.538 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:19:33 np0005466030 nova_compute[230518]: 2025-10-02 13:19:33.540 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:19:33 np0005466030 nova_compute[230518]: 2025-10-02 13:19:33.540 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:19:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e404 e404: 3 total, 3 up, 3 in
Oct  2 09:19:34 np0005466030 nova_compute[230518]: 2025-10-02 13:19:34.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:34.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:19:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:35.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:19:36 np0005466030 nova_compute[230518]: 2025-10-02 13:19:36.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:36 np0005466030 nova_compute[230518]: 2025-10-02 13:19:36.541 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:36 np0005466030 nova_compute[230518]: 2025-10-02 13:19:36.541 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:19:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:19:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:36.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:19:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:19:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:37.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:19:38 np0005466030 nova_compute[230518]: 2025-10-02 13:19:38.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:38 np0005466030 nova_compute[230518]: 2025-10-02 13:19:38.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:38.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:39.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:39 np0005466030 nova_compute[230518]: 2025-10-02 13:19:39.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:40 np0005466030 nova_compute[230518]: 2025-10-02 13:19:40.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:40 np0005466030 nova_compute[230518]: 2025-10-02 13:19:40.703 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:19:40 np0005466030 nova_compute[230518]: 2025-10-02 13:19:40.703 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:19:40 np0005466030 nova_compute[230518]: 2025-10-02 13:19:40.739 2 DEBUG nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:19:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:40.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:40 np0005466030 nova_compute[230518]: 2025-10-02 13:19:40.914 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:19:40 np0005466030 nova_compute[230518]: 2025-10-02 13:19:40.915 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:19:40 np0005466030 nova_compute[230518]: 2025-10-02 13:19:40.919 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:19:40 np0005466030 nova_compute[230518]: 2025-10-02 13:19:40.920 2 INFO nova.compute.claims [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:19:41 np0005466030 nova_compute[230518]: 2025-10-02 13:19:41.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:41.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:41 np0005466030 nova_compute[230518]: 2025-10-02 13:19:41.348 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:19:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 e405: 3 total, 3 up, 3 in
Oct  2 09:19:41 np0005466030 nova_compute[230518]: 2025-10-02 13:19:41.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:19:41 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1616618197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:19:41 np0005466030 nova_compute[230518]: 2025-10-02 13:19:41.791 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:19:41 np0005466030 nova_compute[230518]: 2025-10-02 13:19:41.800 2 DEBUG nova.compute.provider_tree [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:19:41 np0005466030 podman[313952]: 2025-10-02 13:19:41.817495625 +0000 UTC m=+0.059559970 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 09:19:41 np0005466030 podman[313951]: 2025-10-02 13:19:41.837903355 +0000 UTC m=+0.087031811 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  2 09:19:41 np0005466030 nova_compute[230518]: 2025-10-02 13:19:41.890 2 DEBUG nova.scheduler.client.report [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:19:41 np0005466030 nova_compute[230518]: 2025-10-02 13:19:41.931 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:19:41 np0005466030 nova_compute[230518]: 2025-10-02 13:19:41.932 2 DEBUG nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:19:42 np0005466030 nova_compute[230518]: 2025-10-02 13:19:42.105 2 DEBUG nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:19:42 np0005466030 nova_compute[230518]: 2025-10-02 13:19:42.106 2 DEBUG nova.network.neutron [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:19:42 np0005466030 nova_compute[230518]: 2025-10-02 13:19:42.134 2 INFO nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:19:42 np0005466030 nova_compute[230518]: 2025-10-02 13:19:42.166 2 DEBUG nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:19:42 np0005466030 nova_compute[230518]: 2025-10-02 13:19:42.347 2 DEBUG nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:19:42 np0005466030 nova_compute[230518]: 2025-10-02 13:19:42.349 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:19:42 np0005466030 nova_compute[230518]: 2025-10-02 13:19:42.349 2 INFO nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Creating image(s)#033[00m
Oct  2 09:19:42 np0005466030 nova_compute[230518]: 2025-10-02 13:19:42.386 2 DEBUG nova.storage.rbd_utils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:19:42 np0005466030 nova_compute[230518]: 2025-10-02 13:19:42.424 2 DEBUG nova.storage.rbd_utils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:19:42 np0005466030 nova_compute[230518]: 2025-10-02 13:19:42.451 2 DEBUG nova.storage.rbd_utils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:19:42 np0005466030 nova_compute[230518]: 2025-10-02 13:19:42.456 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "bb6d192aed85f84d0f22da0723b257d38ce90e47" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:19:42 np0005466030 nova_compute[230518]: 2025-10-02 13:19:42.457 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "bb6d192aed85f84d0f22da0723b257d38ce90e47" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:19:42 np0005466030 nova_compute[230518]: 2025-10-02 13:19:42.730 2 DEBUG nova.policy [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '74f5186fabfb4fea86d32c8ef1f2e354', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ced4d30c525c44cca617c3b9838d21b7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:19:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:42.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:43 np0005466030 nova_compute[230518]: 2025-10-02 13:19:43.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:43 np0005466030 nova_compute[230518]: 2025-10-02 13:19:43.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:19:43 np0005466030 nova_compute[230518]: 2025-10-02 13:19:43.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:19:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:43.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:43 np0005466030 nova_compute[230518]: 2025-10-02 13:19:43.251 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  2 09:19:43 np0005466030 nova_compute[230518]: 2025-10-02 13:19:43.252 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:19:43 np0005466030 nova_compute[230518]: 2025-10-02 13:19:43.523 2 DEBUG nova.virt.libvirt.imagebackend [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/f6be8018-0ea2-42f8-a1d7-8d704069aac9/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/f6be8018-0ea2-42f8-a1d7-8d704069aac9/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  2 09:19:44 np0005466030 nova_compute[230518]: 2025-10-02 13:19:44.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:44 np0005466030 nova_compute[230518]: 2025-10-02 13:19:44.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:44 np0005466030 nova_compute[230518]: 2025-10-02 13:19:44.629 2 DEBUG nova.network.neutron [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Successfully created port: 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:19:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:44.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:45.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:45 np0005466030 nova_compute[230518]: 2025-10-02 13:19:45.976 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:19:46 np0005466030 nova_compute[230518]: 2025-10-02 13:19:46.052 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47.part --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:19:46 np0005466030 nova_compute[230518]: 2025-10-02 13:19:46.053 2 DEBUG nova.virt.images [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] f6be8018-0ea2-42f8-a1d7-8d704069aac9 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Oct  2 09:19:46 np0005466030 nova_compute[230518]: 2025-10-02 13:19:46.055 2 DEBUG nova.privsep.utils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct  2 09:19:46 np0005466030 nova_compute[230518]: 2025-10-02 13:19:46.056 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47.part /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:19:46 np0005466030 nova_compute[230518]: 2025-10-02 13:19:46.189 2 DEBUG nova.network.neutron [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Successfully updated port: 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:19:46 np0005466030 nova_compute[230518]: 2025-10-02 13:19:46.264 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:19:46 np0005466030 nova_compute[230518]: 2025-10-02 13:19:46.264 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:19:46 np0005466030 nova_compute[230518]: 2025-10-02 13:19:46.264 2 DEBUG nova.network.neutron [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:19:46 np0005466030 nova_compute[230518]: 2025-10-02 13:19:46.417 2 DEBUG nova.compute.manager [req-a2bfc55d-2408-4b6c-9c38-fac2f9bd1993 req-f3e98f48-7513-479b-85d5-745ce98e2473 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:19:46 np0005466030 nova_compute[230518]: 2025-10-02 13:19:46.418 2 DEBUG nova.compute.manager [req-a2bfc55d-2408-4b6c-9c38-fac2f9bd1993 req-f3e98f48-7513-479b-85d5-745ce98e2473 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing instance network info cache due to event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:19:46 np0005466030 nova_compute[230518]: 2025-10-02 13:19:46.418 2 DEBUG oslo_concurrency.lockutils [req-a2bfc55d-2408-4b6c-9c38-fac2f9bd1993 req-f3e98f48-7513-479b-85d5-745ce98e2473 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:19:46 np0005466030 nova_compute[230518]: 2025-10-02 13:19:46.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:46 np0005466030 nova_compute[230518]: 2025-10-02 13:19:46.640 2 DEBUG nova.network.neutron [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:19:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:46.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:47 np0005466030 nova_compute[230518]: 2025-10-02 13:19:47.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:19:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:47.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:48 np0005466030 nova_compute[230518]: 2025-10-02 13:19:48.042 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47.part /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47.converted" returned: 0 in 1.987s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:19:48 np0005466030 nova_compute[230518]: 2025-10-02 13:19:48.047 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:19:48 np0005466030 nova_compute[230518]: 2025-10-02 13:19:48.120 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47.converted --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:19:48 np0005466030 nova_compute[230518]: 2025-10-02 13:19:48.121 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "bb6d192aed85f84d0f22da0723b257d38ce90e47" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 5.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:19:48 np0005466030 nova_compute[230518]: 2025-10-02 13:19:48.152 2 DEBUG nova.storage.rbd_utils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:19:48 np0005466030 nova_compute[230518]: 2025-10-02 13:19:48.156 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:19:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:48.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:19:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:49.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:19:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.350 2 DEBUG nova.network.neutron [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.375 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.376 2 DEBUG nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Instance network_info: |[{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.376 2 DEBUG oslo_concurrency.lockutils [req-a2bfc55d-2408-4b6c-9c38-fac2f9bd1993 req-f3e98f48-7513-479b-85d5-745ce98e2473 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.377 2 DEBUG nova.network.neutron [req-a2bfc55d-2408-4b6c-9c38-fac2f9bd1993 req-f3e98f48-7513-479b-85d5-745ce98e2473 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.629 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.752 2 DEBUG nova.storage.rbd_utils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] resizing rbd image 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.919 2 DEBUG nova.objects.instance [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'migration_context' on Instance uuid 92ef7ede-4ed2-4a81-9849-bbc39c0be573 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.935 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.936 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Ensure instance console log exists: /var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.936 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.937 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.937 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.939 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Start _get_guest_xml network_info=[{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T13:19:33Z,direct_url=<?>,disk_format='qcow2',id=f6be8018-0ea2-42f8-a1d7-8d704069aac9,min_disk=0,min_ram=0,name='tempest-scenario-img--1529097385',owner='ced4d30c525c44cca617c3b9838d21b7',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T13:19:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': 'f6be8018-0ea2-42f8-a1d7-8d704069aac9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.942 2 WARNING nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.947 2 DEBUG nova.virt.libvirt.host [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.948 2 DEBUG nova.virt.libvirt.host [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.950 2 DEBUG nova.virt.libvirt.host [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.950 2 DEBUG nova.virt.libvirt.host [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.951 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.951 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T13:19:33Z,direct_url=<?>,disk_format='qcow2',id=f6be8018-0ea2-42f8-a1d7-8d704069aac9,min_disk=0,min_ram=0,name='tempest-scenario-img--1529097385',owner='ced4d30c525c44cca617c3b9838d21b7',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T13:19:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.952 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.952 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.952 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.953 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.953 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.953 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.953 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.953 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.954 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.954 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:19:49 np0005466030 nova_compute[230518]: 2025-10-02 13:19:49.956 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:19:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:19:50 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1397213192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.444 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.477 2 DEBUG nova.storage.rbd_utils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.483 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:19:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:19:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:50.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:19:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:19:50 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2262272927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.937 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.938 2 DEBUG nova.virt.libvirt.vif [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:19:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-2037893726',display_name='tempest-TestMinimumBasicScenario-server-2037893726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-2037893726',id=214,image_ref='f6be8018-0ea2-42f8-a1d7-8d704069aac9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBwVl8G5ieo8D6LRNceGyD0RzVHiNmAhn+oNx9JwxYWBR403mrZfQlZXBcadX/gFJtwpWDcUYsJ9PDNLmwgBCuRs7yyL95+8n31Ih8AeyaGOYLATIt1ABWixcUbVaElI8A==',key_name='tempest-TestMinimumBasicScenario-635426937',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ced4d30c525c44cca617c3b9838d21b7',ramdisk_id='',reservation_id='r-joxmr5lv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f6be8018-0ea2-42f8-a1d7-8d704069aac9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1527105691',owner_user_name='tempest-TestMinimumBasicScenario-1527105691-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:19:42Z,user_data=None,user_id='74f5186fabfb4fea86d32c8ef1f2e354',uuid=92ef7ede-4ed2-4a81-9849-bbc39c0be573,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.939 2 DEBUG nova.network.os_vif_util [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converting VIF {"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.939 2 DEBUG nova.network.os_vif_util [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:8b:f7,bridge_name='br-int',has_traffic_filtering=True,id=585ce74b-9d9e-45eb-a324-9ce87a1fcec0,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap585ce74b-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.940 2 DEBUG nova.objects.instance [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 92ef7ede-4ed2-4a81-9849-bbc39c0be573 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.956 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:19:50 np0005466030 nova_compute[230518]:  <uuid>92ef7ede-4ed2-4a81-9849-bbc39c0be573</uuid>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:  <name>instance-000000d6</name>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <nova:name>tempest-TestMinimumBasicScenario-server-2037893726</nova:name>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:19:49</nova:creationTime>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:19:50 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:        <nova:user uuid="74f5186fabfb4fea86d32c8ef1f2e354">tempest-TestMinimumBasicScenario-1527105691-project-member</nova:user>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:        <nova:project uuid="ced4d30c525c44cca617c3b9838d21b7">tempest-TestMinimumBasicScenario-1527105691</nova:project>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="f6be8018-0ea2-42f8-a1d7-8d704069aac9"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:        <nova:port uuid="585ce74b-9d9e-45eb-a324-9ce87a1fcec0">
Oct  2 09:19:50 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <entry name="serial">92ef7ede-4ed2-4a81-9849-bbc39c0be573</entry>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <entry name="uuid">92ef7ede-4ed2-4a81-9849-bbc39c0be573</entry>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk">
Oct  2 09:19:50 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:19:50 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk.config">
Oct  2 09:19:50 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:19:50 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:04:8b:f7"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <target dev="tap585ce74b-9d"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573/console.log" append="off"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:19:50 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:19:50 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:19:50 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:19:50 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.958 2 DEBUG nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Preparing to wait for external event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.958 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.958 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.959 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.959 2 DEBUG nova.virt.libvirt.vif [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:19:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-2037893726',display_name='tempest-TestMinimumBasicScenario-server-2037893726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-2037893726',id=214,image_ref='f6be8018-0ea2-42f8-a1d7-8d704069aac9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBwVl8G5ieo8D6LRNceGyD0RzVHiNmAhn+oNx9JwxYWBR403mrZfQlZXBcadX/gFJtwpWDcUYsJ9PDNLmwgBCuRs7yyL95+8n31Ih8AeyaGOYLATIt1ABWixcUbVaElI8A==',key_name='tempest-TestMinimumBasicScenario-635426937',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ced4d30c525c44cca617c3b9838d21b7',ramdisk_id='',reservation_id='r-joxmr5lv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f6be8018-0ea2-42f8-a1d7-8d704069aac9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1527105691',owner_user_name='tempest-TestMinimumBasicScenario-1527105691-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:19:42Z,user_data=None,user_id='74f5186fabfb4fea86d32c8ef1f2e354',uuid=92ef7ede-4ed2-4a81-9849-bbc39c0be573,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.960 2 DEBUG nova.network.os_vif_util [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converting VIF {"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.960 2 DEBUG nova.network.os_vif_util [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:8b:f7,bridge_name='br-int',has_traffic_filtering=True,id=585ce74b-9d9e-45eb-a324-9ce87a1fcec0,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap585ce74b-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.961 2 DEBUG os_vif [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:8b:f7,bridge_name='br-int',has_traffic_filtering=True,id=585ce74b-9d9e-45eb-a324-9ce87a1fcec0,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap585ce74b-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.962 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.962 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.965 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap585ce74b-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.966 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap585ce74b-9d, col_values=(('external_ids', {'iface-id': '585ce74b-9d9e-45eb-a324-9ce87a1fcec0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:8b:f7', 'vm-uuid': '92ef7ede-4ed2-4a81-9849-bbc39c0be573'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:50 np0005466030 NetworkManager[44960]: <info>  [1759411190.9685] manager: (tap585ce74b-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:50 np0005466030 nova_compute[230518]: 2025-10-02 13:19:50.977 2 INFO os_vif [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:8b:f7,bridge_name='br-int',has_traffic_filtering=True,id=585ce74b-9d9e-45eb-a324-9ce87a1fcec0,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap585ce74b-9d')#033[00m
Oct  2 09:19:51 np0005466030 nova_compute[230518]: 2025-10-02 13:19:51.061 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:19:51 np0005466030 nova_compute[230518]: 2025-10-02 13:19:51.062 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:19:51 np0005466030 nova_compute[230518]: 2025-10-02 13:19:51.062 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No VIF found with MAC fa:16:3e:04:8b:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:19:51 np0005466030 nova_compute[230518]: 2025-10-02 13:19:51.063 2 INFO nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Using config drive#033[00m
Oct  2 09:19:51 np0005466030 nova_compute[230518]: 2025-10-02 13:19:51.090 2 DEBUG nova.storage.rbd_utils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:19:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:51.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:51 np0005466030 nova_compute[230518]: 2025-10-02 13:19:51.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:51 np0005466030 podman[314257]: 2025-10-02 13:19:51.801234904 +0000 UTC m=+0.052730555 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true)
Oct  2 09:19:51 np0005466030 podman[314256]: 2025-10-02 13:19:51.807132969 +0000 UTC m=+0.059092705 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:19:52 np0005466030 nova_compute[230518]: 2025-10-02 13:19:52.334 2 INFO nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Creating config drive at /var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573/disk.config#033[00m
Oct  2 09:19:52 np0005466030 nova_compute[230518]: 2025-10-02 13:19:52.338 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbnzz7awj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:19:52 np0005466030 nova_compute[230518]: 2025-10-02 13:19:52.370 2 DEBUG nova.network.neutron [req-a2bfc55d-2408-4b6c-9c38-fac2f9bd1993 req-f3e98f48-7513-479b-85d5-745ce98e2473 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updated VIF entry in instance network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:19:52 np0005466030 nova_compute[230518]: 2025-10-02 13:19:52.371 2 DEBUG nova.network.neutron [req-a2bfc55d-2408-4b6c-9c38-fac2f9bd1993 req-f3e98f48-7513-479b-85d5-745ce98e2473 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:19:52 np0005466030 nova_compute[230518]: 2025-10-02 13:19:52.401 2 DEBUG oslo_concurrency.lockutils [req-a2bfc55d-2408-4b6c-9c38-fac2f9bd1993 req-f3e98f48-7513-479b-85d5-745ce98e2473 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:19:52 np0005466030 nova_compute[230518]: 2025-10-02 13:19:52.476 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbnzz7awj" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:19:52 np0005466030 nova_compute[230518]: 2025-10-02 13:19:52.503 2 DEBUG nova.storage.rbd_utils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:19:52 np0005466030 nova_compute[230518]: 2025-10-02 13:19:52.506 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573/disk.config 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:19:52 np0005466030 nova_compute[230518]: 2025-10-02 13:19:52.677 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573/disk.config 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:19:52 np0005466030 nova_compute[230518]: 2025-10-02 13:19:52.678 2 INFO nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Deleting local config drive /var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573/disk.config because it was imported into RBD.#033[00m
Oct  2 09:19:52 np0005466030 kernel: tap585ce74b-9d: entered promiscuous mode
Oct  2 09:19:52 np0005466030 NetworkManager[44960]: <info>  [1759411192.7335] manager: (tap585ce74b-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/397)
Oct  2 09:19:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:19:52Z|00867|binding|INFO|Claiming lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 for this chassis.
Oct  2 09:19:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:19:52Z|00868|binding|INFO|585ce74b-9d9e-45eb-a324-9ce87a1fcec0: Claiming fa:16:3e:04:8b:f7 10.100.0.5
Oct  2 09:19:52 np0005466030 nova_compute[230518]: 2025-10-02 13:19:52.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:52 np0005466030 nova_compute[230518]: 2025-10-02 13:19:52.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.748 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:8b:f7 10.100.0.5'], port_security=['fa:16:3e:04:8b:f7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '92ef7ede-4ed2-4a81-9849-bbc39c0be573', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-540159ad-ffd2-462a-a8b9-e86914ed6249', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ced4d30c525c44cca617c3b9838d21b7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0c95f94e-20dd-45bd-9644-7e1d8998955e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d33f65d0-f5c3-43e4-a0b6-d26b238c6ffb, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=585ce74b-9d9e-45eb-a324-9ce87a1fcec0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:19:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.749 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 in datapath 540159ad-ffd2-462a-a8b9-e86914ed6249 bound to our chassis#033[00m
Oct  2 09:19:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.750 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 540159ad-ffd2-462a-a8b9-e86914ed6249#033[00m
Oct  2 09:19:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.761 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b87a3239-ea74-4579-ad29-dae532e47ac6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:19:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.761 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap540159ad-f1 in ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:19:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.763 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap540159ad-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:19:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.763 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d37a2876-ff52-4760-bf6e-2bdad9b9a1b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:19:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.764 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0c4c0689-ae6f-47ad-ace0-9b5a4c7457ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:19:52 np0005466030 systemd-udevd[314346]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:19:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.774 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[03237844-201f-43d4-89c3-5924c37497fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:19:52 np0005466030 systemd-machined[188247]: New machine qemu-98-instance-000000d6.
Oct  2 09:19:52 np0005466030 NetworkManager[44960]: <info>  [1759411192.7840] device (tap585ce74b-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:19:52 np0005466030 NetworkManager[44960]: <info>  [1759411192.7851] device (tap585ce74b-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:19:52 np0005466030 nova_compute[230518]: 2025-10-02 13:19:52.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.799 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f6c2854a-611a-41ab-be4b-e494617974c7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:19:52 np0005466030 systemd[1]: Started Virtual Machine qemu-98-instance-000000d6.
Oct  2 09:19:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:19:52Z|00869|binding|INFO|Setting lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 ovn-installed in OVS
Oct  2 09:19:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:19:52Z|00870|binding|INFO|Setting lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 up in Southbound
Oct  2 09:19:52 np0005466030 nova_compute[230518]: 2025-10-02 13:19:52.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.829 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce53dc4-873f-4d64-bb0e-b51ef57c6f32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:19:52 np0005466030 systemd-udevd[314351]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:19:52 np0005466030 NetworkManager[44960]: <info>  [1759411192.8366] manager: (tap540159ad-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/398)
Oct  2 09:19:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.835 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bba3d7c0-9554-41b0-a883-7a7d3b7ed28f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:19:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:52.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.882 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5adb24-4554-4d9c-9074-17c863f180b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:19:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.885 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[80086011-175d-42a3-a8d6-d6cb95882af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:19:52 np0005466030 NetworkManager[44960]: <info>  [1759411192.9138] device (tap540159ad-f0): carrier: link connected
Oct  2 09:19:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.920 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[98671fbf-cffe-49d8-ae94-8d282b79a4d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:19:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.938 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[40c2c25f-83ac-4f42-a84c-d9db2a3676b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap540159ad-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:9b:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 261], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 905647, 'reachable_time': 25374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314379, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:19:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.952 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d455842e-0aa5-46d6-a51b-3d622fc20717]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe39:9bb7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 905647, 'tstamp': 905647}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314380, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:19:52 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.968 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a401dac3-10d2-4ef6-9236-f5f599fc7aa2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap540159ad-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:9b:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 261], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 905647, 'reachable_time': 25374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314381, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.005 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[05201cd4-d495-4189-8589-dbb402ccd5d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:19:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:53.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.192 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[26c469f6-2523-4201-aec9-33409d124e17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.194 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap540159ad-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.194 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.194 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap540159ad-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:19:53 np0005466030 kernel: tap540159ad-f0: entered promiscuous mode
Oct  2 09:19:53 np0005466030 NetworkManager[44960]: <info>  [1759411193.1969] manager: (tap540159ad-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.199 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap540159ad-f0, col_values=(('external_ids', {'iface-id': 'b64b1a3a-1d89-4a71-b9b0-71e964509167'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:19:53 np0005466030 ovn_controller[129257]: 2025-10-02T13:19:53Z|00871|binding|INFO|Releasing lport b64b1a3a-1d89-4a71-b9b0-71e964509167 from this chassis (sb_readonly=0)
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.214 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/540159ad-ffd2-462a-a8b9-e86914ed6249.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/540159ad-ffd2-462a-a8b9-e86914ed6249.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.215 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dc42b539-5df8-415d-a926-7266e08a0253]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.216 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-540159ad-ffd2-462a-a8b9-e86914ed6249
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/540159ad-ffd2-462a-a8b9-e86914ed6249.pid.haproxy
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 540159ad-ffd2-462a-a8b9-e86914ed6249
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:19:53 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.217 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'env', 'PROCESS_TAG=haproxy-540159ad-ffd2-462a-a8b9-e86914ed6249', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/540159ad-ffd2-462a-a8b9-e86914ed6249.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:19:53 np0005466030 podman[314455]: 2025-10-02 13:19:53.581530066 +0000 UTC m=+0.020475143 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:19:53 np0005466030 podman[314455]: 2025-10-02 13:19:53.766122696 +0000 UTC m=+0.205067753 container create a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:19:53 np0005466030 systemd[1]: Started libpod-conmon-a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8.scope.
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.839 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411193.8379366, 92ef7ede-4ed2-4a81-9849-bbc39c0be573 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.839 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] VM Started (Lifecycle Event)#033[00m
Oct  2 09:19:53 np0005466030 systemd[1]: Started libcrun container.
Oct  2 09:19:53 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a49f52618069d1ad7902e6a20a5811651f7f83eb122ae0dd39c392fa844ae3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.897 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.901 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411193.8386834, 92ef7ede-4ed2-4a81-9849-bbc39c0be573 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.902 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:19:53 np0005466030 podman[314455]: 2025-10-02 13:19:53.903268668 +0000 UTC m=+0.342213745 container init a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 09:19:53 np0005466030 podman[314455]: 2025-10-02 13:19:53.90876535 +0000 UTC m=+0.347710407 container start a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.922 2 DEBUG nova.compute.manager [req-d27e7d56-0c0c-4ad7-b560-e478bc43775b req-245c4663-655e-4dda-b88d-df89df31ac86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.923 2 DEBUG oslo_concurrency.lockutils [req-d27e7d56-0c0c-4ad7-b560-e478bc43775b req-245c4663-655e-4dda-b88d-df89df31ac86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.923 2 DEBUG oslo_concurrency.lockutils [req-d27e7d56-0c0c-4ad7-b560-e478bc43775b req-245c4663-655e-4dda-b88d-df89df31ac86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.923 2 DEBUG oslo_concurrency.lockutils [req-d27e7d56-0c0c-4ad7-b560-e478bc43775b req-245c4663-655e-4dda-b88d-df89df31ac86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.923 2 DEBUG nova.compute.manager [req-d27e7d56-0c0c-4ad7-b560-e478bc43775b req-245c4663-655e-4dda-b88d-df89df31ac86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Processing event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.924 2 DEBUG nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.927 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:19:53 np0005466030 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[314470]: [NOTICE]   (314474) : New worker (314476) forked
Oct  2 09:19:53 np0005466030 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[314470]: [NOTICE]   (314474) : Loading success.
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.930 2 INFO nova.virt.libvirt.driver [-] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Instance spawned successfully.#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.930 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.933 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.936 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411193.9264627, 92ef7ede-4ed2-4a81-9849-bbc39c0be573 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.936 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.965 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.968 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.995 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.998 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.998 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.999 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:19:53 np0005466030 nova_compute[230518]: 2025-10-02 13:19:53.999 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:19:54 np0005466030 nova_compute[230518]: 2025-10-02 13:19:54.000 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:19:54 np0005466030 nova_compute[230518]: 2025-10-02 13:19:54.000 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:19:54 np0005466030 nova_compute[230518]: 2025-10-02 13:19:54.098 2 INFO nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Took 11.75 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:19:54 np0005466030 nova_compute[230518]: 2025-10-02 13:19:54.098 2 DEBUG nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:19:54 np0005466030 nova_compute[230518]: 2025-10-02 13:19:54.162 2 INFO nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Took 13.32 seconds to build instance.#033[00m
Oct  2 09:19:54 np0005466030 nova_compute[230518]: 2025-10-02 13:19:54.180 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:19:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:19:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:54.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:55.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:55 np0005466030 nova_compute[230518]: 2025-10-02 13:19:55.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:56 np0005466030 nova_compute[230518]: 2025-10-02 13:19:56.075 2 DEBUG nova.compute.manager [req-753e65d6-ea6c-4340-b3db-08e422406b23 req-8e56620a-71b6-445d-b30e-2ca9d8732e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:19:56 np0005466030 nova_compute[230518]: 2025-10-02 13:19:56.075 2 DEBUG oslo_concurrency.lockutils [req-753e65d6-ea6c-4340-b3db-08e422406b23 req-8e56620a-71b6-445d-b30e-2ca9d8732e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:19:56 np0005466030 nova_compute[230518]: 2025-10-02 13:19:56.075 2 DEBUG oslo_concurrency.lockutils [req-753e65d6-ea6c-4340-b3db-08e422406b23 req-8e56620a-71b6-445d-b30e-2ca9d8732e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:19:56 np0005466030 nova_compute[230518]: 2025-10-02 13:19:56.075 2 DEBUG oslo_concurrency.lockutils [req-753e65d6-ea6c-4340-b3db-08e422406b23 req-8e56620a-71b6-445d-b30e-2ca9d8732e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:19:56 np0005466030 nova_compute[230518]: 2025-10-02 13:19:56.075 2 DEBUG nova.compute.manager [req-753e65d6-ea6c-4340-b3db-08e422406b23 req-8e56620a-71b6-445d-b30e-2ca9d8732e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] No waiting events found dispatching network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:19:56 np0005466030 nova_compute[230518]: 2025-10-02 13:19:56.076 2 WARNING nova.compute.manager [req-753e65d6-ea6c-4340-b3db-08e422406b23 req-8e56620a-71b6-445d-b30e-2ca9d8732e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received unexpected event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:19:56 np0005466030 nova_compute[230518]: 2025-10-02 13:19:56.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:19:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:56.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:19:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:57.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:19:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:58.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:19:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:19:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:59.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:19:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:20:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:00.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:20:00 np0005466030 nova_compute[230518]: 2025-10-02 13:20:00.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:00 np0005466030 nova_compute[230518]: 2025-10-02 13:20:00.997 2 DEBUG oslo_concurrency.lockutils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:20:00 np0005466030 nova_compute[230518]: 2025-10-02 13:20:00.997 2 DEBUG oslo_concurrency.lockutils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.018 2 DEBUG nova.objects.instance [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'flavor' on Instance uuid 92ef7ede-4ed2-4a81-9849-bbc39c0be573 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.095 2 DEBUG oslo_concurrency.lockutils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:20:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:01.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.455 2 DEBUG oslo_concurrency.lockutils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.456 2 DEBUG oslo_concurrency.lockutils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.456 2 INFO nova.compute.manager [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Attaching volume 501f7163-061f-4829-9c05-ac69ebd0ace5 to /dev/vdb#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:01 np0005466030 ceph-mon[80926]: overall HEALTH_OK
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.693 2 DEBUG os_brick.utils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.695 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.708 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.708 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[bd8bf121-ffaf-40a2-8eba-c1840d46c596]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.710 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.719 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.719 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[3759425c-4463-49ae-a2a6-04db399e7852]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.721 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.730 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.731 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[7df5bfee-70e2-45c0-8b6d-88ef4b80cc7e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.734 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee90cdc-5816-47f6-bb6f-258031709bbc]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.735 2 DEBUG oslo_concurrency.processutils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.779 2 DEBUG oslo_concurrency.processutils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "nvme version" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.783 2 DEBUG os_brick.initiator.connectors.lightos [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.784 2 DEBUG os_brick.initiator.connectors.lightos [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.785 2 DEBUG os_brick.initiator.connectors.lightos [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.785 2 DEBUG os_brick.utils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] <== get_connector_properties: return (91ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
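The `<== get_connector_properties: return (91ms) {...}` line above is emitted by os-brick's `trace_logging_wrapper`, which prints the return value as a Python literal (single quotes, `True`/`False`), not JSON. A stdlib-only sketch of recovering that dict from such a line; the regex, function name, and the abbreviated sample string are my own assumptions, trimmed from the log line above:

```python
import ast
import re

# trace_logging_wrapper prints "<== <func>: return (<N>ms) <repr of value>".
# The value is a Python repr, so ast.literal_eval (not json.loads) parses it.
TRACE_RE = re.compile(r"<== (?P<func>\w+): return \((?P<ms>\d+)ms\) (?P<val>\{.*\})")

def parse_trace_return(line):
    """Return (function name, elapsed ms, parsed value) or None if no match."""
    m = TRACE_RE.search(line)
    if not m:
        return None
    return m.group("func"), int(m.group("ms")), ast.literal_eval(m.group("val"))

# Abbreviated from the log line above (full dict omitted for brevity).
sample = ("<== get_connector_properties: return (91ms) "
          "{'platform': 'x86_64', 'os_type': 'linux', 'multipath': True, "
          "'initiator': 'iqn.1994-05.com.redhat:d783e47ecf'} trace_logging_wrapper")

func, ms, props = parse_trace_return(sample)
```

Note the greedy `\{.*\}` relies on the repr being the last brace-delimited span on the line, which holds for these os-brick trace lines.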
Oct  2 09:20:01 np0005466030 nova_compute[230518]: 2025-10-02 13:20:01.785 2 DEBUG nova.virt.block_device [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating existing volume attachment record: 36d0d5ec-018e-4c04-a804-bceb43745029 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 09:20:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:02.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:03 np0005466030 nova_compute[230518]: 2025-10-02 13:20:03.035 2 DEBUG nova.objects.instance [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'flavor' on Instance uuid 92ef7ede-4ed2-4a81-9849-bbc39c0be573 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:20:03 np0005466030 nova_compute[230518]: 2025-10-02 13:20:03.068 2 DEBUG nova.virt.libvirt.driver [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Attempting to attach volume 501f7163-061f-4829-9c05-ac69ebd0ace5 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Oct  2 09:20:03 np0005466030 nova_compute[230518]: 2025-10-02 13:20:03.070 2 DEBUG nova.virt.libvirt.guest [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] attach device xml: <disk type="network" device="disk">
Oct  2 09:20:03 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:20:03 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-501f7163-061f-4829-9c05-ac69ebd0ace5">
Oct  2 09:20:03 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:20:03 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:20:03 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:20:03 np0005466030 nova_compute[230518]:  </source>
Oct  2 09:20:03 np0005466030 nova_compute[230518]:  <auth username="openstack">
Oct  2 09:20:03 np0005466030 nova_compute[230518]:    <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:20:03 np0005466030 nova_compute[230518]:  </auth>
Oct  2 09:20:03 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:20:03 np0005466030 nova_compute[230518]:  <serial>501f7163-061f-4829-9c05-ac69ebd0ace5</serial>
Oct  2 09:20:03 np0005466030 nova_compute[230518]: </disk>
Oct  2 09:20:03 np0005466030 nova_compute[230518]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
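The `attach device xml` block that `nova.virt.libvirt.guest` logs above is a well-formed libvirt `<disk>` element once the journal lines are reassembled. A minimal sketch of pulling the RBD image, monitor endpoints, and guest device out of it with stdlib `ElementTree`; the embedded XML is copied from the log above with the `<auth>` element trimmed:

```python
import xml.etree.ElementTree as ET

# Reassembled from the multi-line journal entry above (auth element omitted).
disk_xml = """\
<disk type="network" device="disk">
  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
  <source protocol="rbd" name="volumes/volume-501f7163-061f-4829-9c05-ac69ebd0ace5">
    <host name="192.168.122.100" port="6789"/>
    <host name="192.168.122.102" port="6789"/>
    <host name="192.168.122.101" port="6789"/>
  </source>
  <target dev="vdb" bus="virtio"/>
  <serial>501f7163-061f-4829-9c05-ac69ebd0ace5</serial>
</disk>"""

root = ET.fromstring(disk_xml)
source = root.find("source")
image = source.get("name")                                   # "pool/image" for rbd
monitors = [(h.get("name"), h.get("port")) for h in source.findall("host")]
target_dev = root.find("target").get("dev")                  # device name in the guest
```

This is only an inspection sketch; the actual attach is performed by Nova via libvirt's `attachDevice` call, as the `attach_device` trace shows.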
Oct  2 09:20:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:03.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:03 np0005466030 nova_compute[230518]: 2025-10-02 13:20:03.623 2 DEBUG nova.virt.libvirt.driver [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:20:03 np0005466030 nova_compute[230518]: 2025-10-02 13:20:03.624 2 DEBUG nova.virt.libvirt.driver [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:20:03 np0005466030 nova_compute[230518]: 2025-10-02 13:20:03.624 2 DEBUG nova.virt.libvirt.driver [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:20:03 np0005466030 nova_compute[230518]: 2025-10-02 13:20:03.624 2 DEBUG nova.virt.libvirt.driver [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No VIF found with MAC fa:16:3e:04:8b:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:20:03 np0005466030 nova_compute[230518]: 2025-10-02 13:20:03.993 2 DEBUG oslo_concurrency.lockutils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:20:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:04.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:05.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:20:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/468153223' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:20:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:20:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/468153223' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:20:05 np0005466030 nova_compute[230518]: 2025-10-02 13:20:05.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:06 np0005466030 nova_compute[230518]: 2025-10-02 13:20:06.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:06.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:07.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:08.020 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:20:08 np0005466030 nova_compute[230518]: 2025-10-02 13:20:08.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:08 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:08.024 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:20:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:08.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:09 np0005466030 nova_compute[230518]: 2025-10-02 13:20:09.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:09 np0005466030 NetworkManager[44960]: <info>  [1759411209.1192] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Oct  2 09:20:09 np0005466030 NetworkManager[44960]: <info>  [1759411209.1218] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/401)
Oct  2 09:20:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:20:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:09.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
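The recurring `beast:` lines from radosgw follow an access-log shape (client, user, timestamp, request, status, bytes, latency). A stdlib regex sketch for splitting one apart; the field names and the regex are my own assumptions fitted to the lines in this log, not a documented radosgw format:

```python
import re

# Fields observed in the beast lines above; trailing "- - -" varies by build,
# so it is skipped with a non-greedy gap before the latency field.
BEAST_RE = re.compile(
    r'beast: 0x[0-9a-f]+: (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s')

# Copied from the beast line above.
line = ('beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous '
        '[02/Oct/2025:13:20:09.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000999991s')

m = BEAST_RE.search(line)
```

The `HEAD /` probes every two seconds from 192.168.122.100/.102 look like load-balancer health checks rather than client traffic.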
Oct  2 09:20:09 np0005466030 nova_compute[230518]: 2025-10-02 13:20:09.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:09 np0005466030 ovn_controller[129257]: 2025-10-02T13:20:09Z|00872|binding|INFO|Releasing lport b64b1a3a-1d89-4a71-b9b0-71e964509167 from this chassis (sb_readonly=0)
Oct  2 09:20:09 np0005466030 nova_compute[230518]: 2025-10-02 13:20:09.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:09 np0005466030 nova_compute[230518]: 2025-10-02 13:20:09.441 2 DEBUG nova.compute.manager [req-4ebe019e-fb2f-4887-b30f-4069d5d2f686 req-89d7fbd7-4477-443d-925e-1fe6f2e1e2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:20:09 np0005466030 nova_compute[230518]: 2025-10-02 13:20:09.442 2 DEBUG nova.compute.manager [req-4ebe019e-fb2f-4887-b30f-4069d5d2f686 req-89d7fbd7-4477-443d-925e-1fe6f2e1e2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing instance network info cache due to event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:20:09 np0005466030 nova_compute[230518]: 2025-10-02 13:20:09.442 2 DEBUG oslo_concurrency.lockutils [req-4ebe019e-fb2f-4887-b30f-4069d5d2f686 req-89d7fbd7-4477-443d-925e-1fe6f2e1e2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:20:09 np0005466030 nova_compute[230518]: 2025-10-02 13:20:09.443 2 DEBUG oslo_concurrency.lockutils [req-4ebe019e-fb2f-4887-b30f-4069d5d2f686 req-89d7fbd7-4477-443d-925e-1fe6f2e1e2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:20:09 np0005466030 nova_compute[230518]: 2025-10-02 13:20:09.443 2 DEBUG nova.network.neutron [req-4ebe019e-fb2f-4887-b30f-4069d5d2f686 req-89d7fbd7-4477-443d-925e-1fe6f2e1e2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:20:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:10.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:10 np0005466030 nova_compute[230518]: 2025-10-02 13:20:10.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:11.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:11 np0005466030 nova_compute[230518]: 2025-10-02 13:20:11.338 2 DEBUG nova.network.neutron [req-4ebe019e-fb2f-4887-b30f-4069d5d2f686 req-89d7fbd7-4477-443d-925e-1fe6f2e1e2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updated VIF entry in instance network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:20:11 np0005466030 nova_compute[230518]: 2025-10-02 13:20:11.339 2 DEBUG nova.network.neutron [req-4ebe019e-fb2f-4887-b30f-4069d5d2f686 req-89d7fbd7-4477-443d-925e-1fe6f2e1e2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
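Unlike the os-brick trace above, the `network_info` payload that `update_instance_cache_with_nw_info` logs is JSON (double quotes, `true`/`null`), so `json.loads` handles it directly. A sketch of extracting fixed/floating address pairs; the sample document is abbreviated from the cache entry above and the helper name is my own:

```python
import json

# Abbreviated from the instance_info_cache entry logged above.
network_info = json.loads("""
[{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0",
  "address": "fa:16:3e:04:8b:f7",
  "network": {"subnets": [
    {"cidr": "10.100.0.0/28",
     "ips": [{"address": "10.100.0.5", "type": "fixed",
              "floating_ips": [{"address": "192.168.122.181",
                                "type": "floating"}]}]}]}}]
""")

def addresses(vif):
    """Yield (fixed address, [floating addresses]) for every IP on a VIF."""
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            yield ip["address"], [f["address"] for f in ip.get("floating_ips", [])]

pairs = [p for vif in network_info for p in addresses(vif)]
```

Here the single port carries fixed address 10.100.0.5 with floating IP 192.168.122.181, matching the DHCPOFFER/DHCPACK for fa:16:3e:04:8b:f7 a few lines below.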
Oct  2 09:20:11 np0005466030 nova_compute[230518]: 2025-10-02 13:20:11.366 2 DEBUG oslo_concurrency.lockutils [req-4ebe019e-fb2f-4887-b30f-4069d5d2f686 req-89d7fbd7-4477-443d-925e-1fe6f2e1e2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:20:11 np0005466030 ovn_controller[129257]: 2025-10-02T13:20:11Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:04:8b:f7 10.100.0.5
Oct  2 09:20:11 np0005466030 ovn_controller[129257]: 2025-10-02T13:20:11Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:04:8b:f7 10.100.0.5
Oct  2 09:20:11 np0005466030 nova_compute[230518]: 2025-10-02 13:20:11.570 2 DEBUG nova.compute.manager [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:20:11 np0005466030 nova_compute[230518]: 2025-10-02 13:20:11.571 2 DEBUG nova.compute.manager [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing instance network info cache due to event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:20:11 np0005466030 nova_compute[230518]: 2025-10-02 13:20:11.571 2 DEBUG oslo_concurrency.lockutils [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:20:11 np0005466030 nova_compute[230518]: 2025-10-02 13:20:11.572 2 DEBUG oslo_concurrency.lockutils [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:20:11 np0005466030 nova_compute[230518]: 2025-10-02 13:20:11.572 2 DEBUG nova.network.neutron [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:20:11 np0005466030 nova_compute[230518]: 2025-10-02 13:20:11.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:12 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:12.027 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:20:12 np0005466030 podman[314515]: 2025-10-02 13:20:12.799151987 +0000 UTC m=+0.053930883 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:20:12 np0005466030 podman[314514]: 2025-10-02 13:20:12.828061014 +0000 UTC m=+0.083276453 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:20:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:12.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:13.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:14.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:15.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:15 np0005466030 nova_compute[230518]: 2025-10-02 13:20:15.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:16 np0005466030 nova_compute[230518]: 2025-10-02 13:20:16.012 2 DEBUG nova.network.neutron [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updated VIF entry in instance network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:20:16 np0005466030 nova_compute[230518]: 2025-10-02 13:20:16.013 2 DEBUG nova.network.neutron [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:20:16 np0005466030 nova_compute[230518]: 2025-10-02 13:20:16.037 2 DEBUG oslo_concurrency.lockutils [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:20:16 np0005466030 nova_compute[230518]: 2025-10-02 13:20:16.038 2 DEBUG nova.compute.manager [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:20:16 np0005466030 nova_compute[230518]: 2025-10-02 13:20:16.038 2 DEBUG nova.compute.manager [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing instance network info cache due to event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:20:16 np0005466030 nova_compute[230518]: 2025-10-02 13:20:16.039 2 DEBUG oslo_concurrency.lockutils [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:20:16 np0005466030 nova_compute[230518]: 2025-10-02 13:20:16.039 2 DEBUG oslo_concurrency.lockutils [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:20:16 np0005466030 nova_compute[230518]: 2025-10-02 13:20:16.039 2 DEBUG nova.network.neutron [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:20:16 np0005466030 nova_compute[230518]: 2025-10-02 13:20:16.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:20:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:20:16 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:20:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:20:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:16.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:20:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:17.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:20:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:20:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:20:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:20:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:18.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:20:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:19.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:20:19 np0005466030 nova_compute[230518]: 2025-10-02 13:20:19.312 2 DEBUG nova.network.neutron [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updated VIF entry in instance network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:20:19 np0005466030 nova_compute[230518]: 2025-10-02 13:20:19.312 2 DEBUG nova.network.neutron [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:20:19 np0005466030 nova_compute[230518]: 2025-10-02 13:20:19.333 2 DEBUG oslo_concurrency.lockutils [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:20:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:20.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:20 np0005466030 nova_compute[230518]: 2025-10-02 13:20:20.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:21.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:21 np0005466030 nova_compute[230518]: 2025-10-02 13:20:21.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:21 np0005466030 nova_compute[230518]: 2025-10-02 13:20:21.778 2 DEBUG nova.compute.manager [req-ba57f243-b931-41b6-ae04-874579885786 req-2b720f21-151d-4c26-ae77-87868eeb254c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:20:21 np0005466030 nova_compute[230518]: 2025-10-02 13:20:21.778 2 DEBUG nova.compute.manager [req-ba57f243-b931-41b6-ae04-874579885786 req-2b720f21-151d-4c26-ae77-87868eeb254c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing instance network info cache due to event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:20:21 np0005466030 nova_compute[230518]: 2025-10-02 13:20:21.779 2 DEBUG oslo_concurrency.lockutils [req-ba57f243-b931-41b6-ae04-874579885786 req-2b720f21-151d-4c26-ae77-87868eeb254c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:20:21 np0005466030 nova_compute[230518]: 2025-10-02 13:20:21.779 2 DEBUG oslo_concurrency.lockutils [req-ba57f243-b931-41b6-ae04-874579885786 req-2b720f21-151d-4c26-ae77-87868eeb254c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:20:21 np0005466030 nova_compute[230518]: 2025-10-02 13:20:21.779 2 DEBUG nova.network.neutron [req-ba57f243-b931-41b6-ae04-874579885786 req-2b720f21-151d-4c26-ae77-87868eeb254c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:20:22 np0005466030 nova_compute[230518]: 2025-10-02 13:20:22.661 2 DEBUG oslo_concurrency.lockutils [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:20:22 np0005466030 nova_compute[230518]: 2025-10-02 13:20:22.661 2 DEBUG oslo_concurrency.lockutils [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:20:22 np0005466030 nova_compute[230518]: 2025-10-02 13:20:22.662 2 INFO nova.compute.manager [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Rebooting instance#033[00m
Oct  2 09:20:22 np0005466030 nova_compute[230518]: 2025-10-02 13:20:22.690 2 DEBUG oslo_concurrency.lockutils [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:20:22 np0005466030 podman[314807]: 2025-10-02 13:20:22.799535279 +0000 UTC m=+0.054377536 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 09:20:22 np0005466030 podman[314808]: 2025-10-02 13:20:22.805247059 +0000 UTC m=+0.060053536 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 09:20:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:22.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:23.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:23 np0005466030 nova_compute[230518]: 2025-10-02 13:20:23.323 2 DEBUG nova.network.neutron [req-ba57f243-b931-41b6-ae04-874579885786 req-2b720f21-151d-4c26-ae77-87868eeb254c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updated VIF entry in instance network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:20:23 np0005466030 nova_compute[230518]: 2025-10-02 13:20:23.323 2 DEBUG nova.network.neutron [req-ba57f243-b931-41b6-ae04-874579885786 req-2b720f21-151d-4c26-ae77-87868eeb254c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:20:23 np0005466030 nova_compute[230518]: 2025-10-02 13:20:23.364 2 DEBUG oslo_concurrency.lockutils [req-ba57f243-b931-41b6-ae04-874579885786 req-2b720f21-151d-4c26-ae77-87868eeb254c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:20:23 np0005466030 nova_compute[230518]: 2025-10-02 13:20:23.365 2 DEBUG oslo_concurrency.lockutils [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:20:23 np0005466030 nova_compute[230518]: 2025-10-02 13:20:23.365 2 DEBUG nova.network.neutron [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:20:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:24.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:25.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:25.977 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:20:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:25.977 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:20:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:25.978 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:20:25 np0005466030 nova_compute[230518]: 2025-10-02 13:20:25.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:26 np0005466030 nova_compute[230518]: 2025-10-02 13:20:26.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:26.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:27.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:27 np0005466030 nova_compute[230518]: 2025-10-02 13:20:27.691 2 DEBUG nova.network.neutron [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:20:27 np0005466030 nova_compute[230518]: 2025-10-02 13:20:27.744 2 DEBUG oslo_concurrency.lockutils [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:20:27 np0005466030 nova_compute[230518]: 2025-10-02 13:20:27.746 2 DEBUG nova.compute.manager [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:20:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:28.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:20:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:29.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:20:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:30 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:20:30 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:20:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:30.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:30 np0005466030 nova_compute[230518]: 2025-10-02 13:20:30.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:31.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:31 np0005466030 nova_compute[230518]: 2025-10-02 13:20:31.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:32.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:33 np0005466030 nova_compute[230518]: 2025-10-02 13:20:33.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:33 np0005466030 nova_compute[230518]: 2025-10-02 13:20:33.098 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:20:33 np0005466030 nova_compute[230518]: 2025-10-02 13:20:33.098 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:20:33 np0005466030 nova_compute[230518]: 2025-10-02 13:20:33.098 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:20:33 np0005466030 nova_compute[230518]: 2025-10-02 13:20:33.099 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:20:33 np0005466030 nova_compute[230518]: 2025-10-02 13:20:33.099 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:20:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:20:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:33.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:20:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:20:33 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3264101197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:20:33 np0005466030 nova_compute[230518]: 2025-10-02 13:20:33.514 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:20:33 np0005466030 nova_compute[230518]: 2025-10-02 13:20:33.585 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000d6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:20:33 np0005466030 nova_compute[230518]: 2025-10-02 13:20:33.585 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000d6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:20:33 np0005466030 nova_compute[230518]: 2025-10-02 13:20:33.585 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000d6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:20:33 np0005466030 nova_compute[230518]: 2025-10-02 13:20:33.733 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:20:33 np0005466030 nova_compute[230518]: 2025-10-02 13:20:33.735 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3948MB free_disk=20.942710876464844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:20:33 np0005466030 nova_compute[230518]: 2025-10-02 13:20:33.735 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:20:33 np0005466030 nova_compute[230518]: 2025-10-02 13:20:33.735 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:20:33 np0005466030 nova_compute[230518]: 2025-10-02 13:20:33.842 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 92ef7ede-4ed2-4a81-9849-bbc39c0be573 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:20:33 np0005466030 nova_compute[230518]: 2025-10-02 13:20:33.843 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:20:33 np0005466030 nova_compute[230518]: 2025-10-02 13:20:33.843 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:20:33 np0005466030 nova_compute[230518]: 2025-10-02 13:20:33.885 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:20:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:20:34 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3983555616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:20:34 np0005466030 nova_compute[230518]: 2025-10-02 13:20:34.344 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:20:34 np0005466030 nova_compute[230518]: 2025-10-02 13:20:34.352 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:20:34 np0005466030 nova_compute[230518]: 2025-10-02 13:20:34.371 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:20:34 np0005466030 nova_compute[230518]: 2025-10-02 13:20:34.402 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:20:34 np0005466030 nova_compute[230518]: 2025-10-02 13:20:34.402 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:20:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:34.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:35.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:35 np0005466030 nova_compute[230518]: 2025-10-02 13:20:35.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #160. Immutable memtables: 0.
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.354134) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 160
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236354194, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 2376, "num_deletes": 254, "total_data_size": 5870745, "memory_usage": 5943984, "flush_reason": "Manual Compaction"}
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #161: started
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236374211, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 161, "file_size": 3812293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76709, "largest_seqno": 79080, "table_properties": {"data_size": 3802420, "index_size": 6302, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20243, "raw_average_key_size": 20, "raw_value_size": 3782831, "raw_average_value_size": 3856, "num_data_blocks": 273, "num_entries": 981, "num_filter_entries": 981, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411037, "oldest_key_time": 1759411037, "file_creation_time": 1759411236, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 20106 microseconds, and 8357 cpu microseconds.
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.374249) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #161: 3812293 bytes OK
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.374267) [db/memtable_list.cc:519] [default] Level-0 commit table #161 started
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.377216) [db/memtable_list.cc:722] [default] Level-0 commit table #161: memtable #1 done
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.377228) EVENT_LOG_v1 {"time_micros": 1759411236377224, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.377243) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 5860147, prev total WAL file size 5860147, number of live WAL files 2.
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000157.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.378449) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [161(3722KB)], [159(9909KB)]
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236378502, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [161], "files_L6": [159], "score": -1, "input_data_size": 13959701, "oldest_snapshot_seqno": -1}
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #162: 9983 keys, 12022193 bytes, temperature: kUnknown
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236493473, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 162, "file_size": 12022193, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11958702, "index_size": 37458, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24965, "raw_key_size": 263384, "raw_average_key_size": 26, "raw_value_size": 11784959, "raw_average_value_size": 1180, "num_data_blocks": 1420, "num_entries": 9983, "num_filter_entries": 9983, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759411236, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 162, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.493924) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 12022193 bytes
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.496727) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.1 rd, 104.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.7 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(6.8) write-amplify(3.2) OK, records in: 10511, records dropped: 528 output_compression: NoCompression
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.496744) EVENT_LOG_v1 {"time_micros": 1759411236496735, "job": 102, "event": "compaction_finished", "compaction_time_micros": 115242, "compaction_time_cpu_micros": 49159, "output_level": 6, "num_output_files": 1, "total_output_size": 12022193, "num_input_records": 10511, "num_output_records": 9983, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236497778, "job": 102, "event": "table_file_deletion", "file_number": 161}
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000159.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236499630, "job": 102, "event": "table_file_deletion", "file_number": 159}
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.378353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.499733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.499737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.499739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.499740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:20:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.499742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:20:36 np0005466030 nova_compute[230518]: 2025-10-02 13:20:36.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:36.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:37.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:37 np0005466030 nova_compute[230518]: 2025-10-02 13:20:37.403 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:37 np0005466030 nova_compute[230518]: 2025-10-02 13:20:37.404 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:20:38 np0005466030 nova_compute[230518]: 2025-10-02 13:20:38.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:38 np0005466030 nova_compute[230518]: 2025-10-02 13:20:38.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:38.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:39 np0005466030 ovn_controller[129257]: 2025-10-02T13:20:39Z|00873|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Oct  2 09:20:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:39.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:40.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:41 np0005466030 nova_compute[230518]: 2025-10-02 13:20:41.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:41 np0005466030 nova_compute[230518]: 2025-10-02 13:20:41.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:41.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:41 np0005466030 nova_compute[230518]: 2025-10-02 13:20:41.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:42 np0005466030 nova_compute[230518]: 2025-10-02 13:20:42.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:42 np0005466030 kernel: tap585ce74b-9d (unregistering): left promiscuous mode
Oct  2 09:20:42 np0005466030 NetworkManager[44960]: <info>  [1759411242.6780] device (tap585ce74b-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:20:42 np0005466030 ovn_controller[129257]: 2025-10-02T13:20:42Z|00874|binding|INFO|Releasing lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 from this chassis (sb_readonly=0)
Oct  2 09:20:42 np0005466030 nova_compute[230518]: 2025-10-02 13:20:42.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:42 np0005466030 ovn_controller[129257]: 2025-10-02T13:20:42Z|00875|binding|INFO|Setting lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 down in Southbound
Oct  2 09:20:42 np0005466030 ovn_controller[129257]: 2025-10-02T13:20:42Z|00876|binding|INFO|Removing iface tap585ce74b-9d ovn-installed in OVS
Oct  2 09:20:42 np0005466030 nova_compute[230518]: 2025-10-02 13:20:42.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:42.710 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:8b:f7 10.100.0.5'], port_security=['fa:16:3e:04:8b:f7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '92ef7ede-4ed2-4a81-9849-bbc39c0be573', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-540159ad-ffd2-462a-a8b9-e86914ed6249', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ced4d30c525c44cca617c3b9838d21b7', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0c95f94e-20dd-45bd-9644-7e1d8998955e a3caf5f5-413d-4b29-98a6-4d3bfef93aba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d33f65d0-f5c3-43e4-a0b6-d26b238c6ffb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=585ce74b-9d9e-45eb-a324-9ce87a1fcec0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:20:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:42.712 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 in datapath 540159ad-ffd2-462a-a8b9-e86914ed6249 unbound from our chassis#033[00m
Oct  2 09:20:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:42.714 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 540159ad-ffd2-462a-a8b9-e86914ed6249, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:20:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:42.716 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[45847498-13f3-42be-bc89-952c89c1e0e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:42.717 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 namespace which is not needed anymore#033[00m
Oct  2 09:20:42 np0005466030 nova_compute[230518]: 2025-10-02 13:20:42.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:42 np0005466030 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000d6.scope: Deactivated successfully.
Oct  2 09:20:42 np0005466030 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000d6.scope: Consumed 15.298s CPU time.
Oct  2 09:20:42 np0005466030 systemd-machined[188247]: Machine qemu-98-instance-000000d6 terminated.
Oct  2 09:20:42 np0005466030 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[314470]: [NOTICE]   (314474) : haproxy version is 2.8.14-c23fe91
Oct  2 09:20:42 np0005466030 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[314470]: [NOTICE]   (314474) : path to executable is /usr/sbin/haproxy
Oct  2 09:20:42 np0005466030 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[314470]: [WARNING]  (314474) : Exiting Master process...
Oct  2 09:20:42 np0005466030 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[314470]: [WARNING]  (314474) : Exiting Master process...
Oct  2 09:20:42 np0005466030 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[314470]: [ALERT]    (314474) : Current worker (314476) exited with code 143 (Terminated)
Oct  2 09:20:42 np0005466030 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[314470]: [WARNING]  (314474) : All workers exited. Exiting... (0)
Oct  2 09:20:42 np0005466030 systemd[1]: libpod-a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8.scope: Deactivated successfully.
Oct  2 09:20:42 np0005466030 podman[314964]: 2025-10-02 13:20:42.860441115 +0000 UTC m=+0.044514987 container died a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 09:20:42 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8-userdata-shm.mount: Deactivated successfully.
Oct  2 09:20:42 np0005466030 systemd[1]: var-lib-containers-storage-overlay-10a49f52618069d1ad7902e6a20a5811651f7f83eb122ae0dd39c392fa844ae3-merged.mount: Deactivated successfully.
Oct  2 09:20:42 np0005466030 podman[314964]: 2025-10-02 13:20:42.907523752 +0000 UTC m=+0.091597624 container cleanup a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:20:42 np0005466030 nova_compute[230518]: 2025-10-02 13:20:42.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:42 np0005466030 nova_compute[230518]: 2025-10-02 13:20:42.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:42 np0005466030 systemd[1]: libpod-conmon-a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8.scope: Deactivated successfully.
Oct  2 09:20:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:20:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:42.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:20:42 np0005466030 podman[314986]: 2025-10-02 13:20:42.950291344 +0000 UTC m=+0.067907712 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:20:42 np0005466030 podman[315016]: 2025-10-02 13:20:42.971180819 +0000 UTC m=+0.040786190 container remove a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:20:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:42.976 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8801d96b-8057-470c-b29d-39e637a4bc5b]: (4, ('Thu Oct  2 01:20:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 (a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8)\na4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8\nThu Oct  2 01:20:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 (a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8)\na4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:42.977 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd067a2-e73e-4693-8799-327538f9da9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:42.978 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap540159ad-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:20:42 np0005466030 kernel: tap540159ad-f0: left promiscuous mode
Oct  2 09:20:42 np0005466030 podman[314978]: 2025-10-02 13:20:42.981208004 +0000 UTC m=+0.102246789 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct  2 09:20:42 np0005466030 nova_compute[230518]: 2025-10-02 13:20:42.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:42 np0005466030 nova_compute[230518]: 2025-10-02 13:20:42.996 2 DEBUG nova.compute.manager [req-109f8c58-3285-4a8b-8629-ce089c68da8f req-859f819d-a561-40ef-a48b-ae7c73e4122f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-unplugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:20:42 np0005466030 nova_compute[230518]: 2025-10-02 13:20:42.997 2 DEBUG oslo_concurrency.lockutils [req-109f8c58-3285-4a8b-8629-ce089c68da8f req-859f819d-a561-40ef-a48b-ae7c73e4122f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:20:42 np0005466030 nova_compute[230518]: 2025-10-02 13:20:42.997 2 DEBUG oslo_concurrency.lockutils [req-109f8c58-3285-4a8b-8629-ce089c68da8f req-859f819d-a561-40ef-a48b-ae7c73e4122f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:20:42 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:42.997 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[be50dc9e-6413-4217-bc2a-3a2559e575e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:42 np0005466030 nova_compute[230518]: 2025-10-02 13:20:42.997 2 DEBUG oslo_concurrency.lockutils [req-109f8c58-3285-4a8b-8629-ce089c68da8f req-859f819d-a561-40ef-a48b-ae7c73e4122f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:20:42 np0005466030 nova_compute[230518]: 2025-10-02 13:20:42.998 2 DEBUG nova.compute.manager [req-109f8c58-3285-4a8b-8629-ce089c68da8f req-859f819d-a561-40ef-a48b-ae7c73e4122f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] No waiting events found dispatching network-vif-unplugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:20:42 np0005466030 nova_compute[230518]: 2025-10-02 13:20:42.998 2 WARNING nova.compute.manager [req-109f8c58-3285-4a8b-8629-ce089c68da8f req-859f819d-a561-40ef-a48b-ae7c73e4122f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received unexpected event network-vif-unplugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 for instance with vm_state active and task_state reboot_started.#033[00m
Oct  2 09:20:42 np0005466030 nova_compute[230518]: 2025-10-02 13:20:42.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.021 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9f6b38-54b4-49b8-91ae-744f0ab9f25e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.022 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bccd195b-872a-42e1-a7e1-ddc0891e8ce1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.037 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[810325cb-43af-468f-bce3-916a53a3f74e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 905638, 'reachable_time': 40903, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315059, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 systemd[1]: run-netns-ovnmeta\x2d540159ad\x2dffd2\x2d462a\x2da8b9\x2de86914ed6249.mount: Deactivated successfully.
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.041 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.041 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[8536d839-8564-4e04-a2c6-90673ed0431d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 nova_compute[230518]: 2025-10-02 13:20:43.047 2 INFO nova.virt.libvirt.driver [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Instance shutdown successfully.#033[00m
Oct  2 09:20:43 np0005466030 kernel: tap585ce74b-9d: entered promiscuous mode
Oct  2 09:20:43 np0005466030 systemd-udevd[314942]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:20:43 np0005466030 NetworkManager[44960]: <info>  [1759411243.0995] manager: (tap585ce74b-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/402)
Oct  2 09:20:43 np0005466030 ovn_controller[129257]: 2025-10-02T13:20:43Z|00877|binding|INFO|Claiming lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 for this chassis.
Oct  2 09:20:43 np0005466030 ovn_controller[129257]: 2025-10-02T13:20:43Z|00878|binding|INFO|585ce74b-9d9e-45eb-a324-9ce87a1fcec0: Claiming fa:16:3e:04:8b:f7 10.100.0.5
Oct  2 09:20:43 np0005466030 nova_compute[230518]: 2025-10-02 13:20:43.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.107 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:8b:f7 10.100.0.5'], port_security=['fa:16:3e:04:8b:f7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '92ef7ede-4ed2-4a81-9849-bbc39c0be573', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-540159ad-ffd2-462a-a8b9-e86914ed6249', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ced4d30c525c44cca617c3b9838d21b7', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0c95f94e-20dd-45bd-9644-7e1d8998955e a3caf5f5-413d-4b29-98a6-4d3bfef93aba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d33f65d0-f5c3-43e4-a0b6-d26b238c6ffb, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=585ce74b-9d9e-45eb-a324-9ce87a1fcec0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.108 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 in datapath 540159ad-ffd2-462a-a8b9-e86914ed6249 bound to our chassis#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.109 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 540159ad-ffd2-462a-a8b9-e86914ed6249#033[00m
Oct  2 09:20:43 np0005466030 NetworkManager[44960]: <info>  [1759411243.1107] device (tap585ce74b-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:20:43 np0005466030 NetworkManager[44960]: <info>  [1759411243.1113] device (tap585ce74b-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:20:43 np0005466030 ovn_controller[129257]: 2025-10-02T13:20:43Z|00879|binding|INFO|Setting lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 ovn-installed in OVS
Oct  2 09:20:43 np0005466030 ovn_controller[129257]: 2025-10-02T13:20:43Z|00880|binding|INFO|Setting lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 up in Southbound
Oct  2 09:20:43 np0005466030 nova_compute[230518]: 2025-10-02 13:20:43.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:43 np0005466030 nova_compute[230518]: 2025-10-02 13:20:43.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.127 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[285167b5-10c9-469a-a527-d2d0fb06248c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.128 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap540159ad-f1 in ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.130 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap540159ad-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.130 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d8709f-f059-41a6-8632-30f78c5dfaa8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.131 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4a0098-e8f1-4762-9957-0e4568851343]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.144 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2f505c-6d70-4b6f-8e6a-25afb6a76d91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 systemd-machined[188247]: New machine qemu-99-instance-000000d6.
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.158 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[01cebbf4-087f-4b64-9bf9-c15be447fa7b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 systemd[1]: Started Virtual Machine qemu-99-instance-000000d6.
Oct  2 09:20:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:43.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.190 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c30615-9cc0-4ad0-bc95-f9cebf49e7b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 NetworkManager[44960]: <info>  [1759411243.1962] manager: (tap540159ad-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/403)
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.197 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ee46ffed-e5bc-481b-8803-654b2b180464]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.233 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c556c03d-2810-4167-b1ab-97cb18d81b21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.236 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e2e0b361-8736-4ba0-b623-1ef9acd1cb6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 NetworkManager[44960]: <info>  [1759411243.2650] device (tap540159ad-f0): carrier: link connected
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.270 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c3529c6e-f425-4636-9754-511fadf5ec5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.287 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e00367-35f8-4928-95a2-c8febadb5a0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap540159ad-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:9b:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 910682, 'reachable_time': 40880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315105, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.302 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[71e3f802-5826-49b2-a80c-7715f496f2bf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe39:9bb7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 910682, 'tstamp': 910682}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315106, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.317 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3e555137-b369-400d-9992-3fc4dfa96ab4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap540159ad-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:9b:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 910682, 'reachable_time': 40880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315107, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.348 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a609b9fb-a41f-4e59-a4f3-5ae0679fbffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.402 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f06b36-78b9-4529-b45e-65454c5f1169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.404 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap540159ad-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.404 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.404 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap540159ad-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:20:43 np0005466030 nova_compute[230518]: 2025-10-02 13:20:43.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:43 np0005466030 NetworkManager[44960]: <info>  [1759411243.4066] manager: (tap540159ad-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/404)
Oct  2 09:20:43 np0005466030 kernel: tap540159ad-f0: entered promiscuous mode
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.410 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap540159ad-f0, col_values=(('external_ids', {'iface-id': 'b64b1a3a-1d89-4a71-b9b0-71e964509167'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:20:43 np0005466030 nova_compute[230518]: 2025-10-02 13:20:43.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:43 np0005466030 ovn_controller[129257]: 2025-10-02T13:20:43Z|00881|binding|INFO|Releasing lport b64b1a3a-1d89-4a71-b9b0-71e964509167 from this chassis (sb_readonly=0)
Oct  2 09:20:43 np0005466030 nova_compute[230518]: 2025-10-02 13:20:43.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.423 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/540159ad-ffd2-462a-a8b9-e86914ed6249.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/540159ad-ffd2-462a-a8b9-e86914ed6249.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.424 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1f136446-d436-44e5-9e90-ff2daf873a28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.425 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-540159ad-ffd2-462a-a8b9-e86914ed6249
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/540159ad-ffd2-462a-a8b9-e86914ed6249.pid.haproxy
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 540159ad-ffd2-462a-a8b9-e86914ed6249
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:20:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.425 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'env', 'PROCESS_TAG=haproxy-540159ad-ffd2-462a-a8b9-e86914ed6249', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/540159ad-ffd2-462a-a8b9-e86914ed6249.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:20:43 np0005466030 podman[315175]: 2025-10-02 13:20:43.773859747 +0000 UTC m=+0.051203267 container create 0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:20:43 np0005466030 systemd[1]: Started libpod-conmon-0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c.scope.
Oct  2 09:20:43 np0005466030 systemd[1]: Started libcrun container.
Oct  2 09:20:43 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec57861729784fbfedd5a40971eeacc0f72b749ec4f97def4c07dfc1fd8ea29/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:20:43 np0005466030 podman[315175]: 2025-10-02 13:20:43.747459839 +0000 UTC m=+0.024803379 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:20:43 np0005466030 podman[315175]: 2025-10-02 13:20:43.850323115 +0000 UTC m=+0.127666635 container init 0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 09:20:43 np0005466030 podman[315175]: 2025-10-02 13:20:43.85558913 +0000 UTC m=+0.132932670 container start 0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct  2 09:20:43 np0005466030 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[315210]: [NOTICE]   (315217) : New worker (315219) forked
Oct  2 09:20:43 np0005466030 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[315210]: [NOTICE]   (315217) : Loading success.
Oct  2 09:20:44 np0005466030 nova_compute[230518]: 2025-10-02 13:20:44.300 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 92ef7ede-4ed2-4a81-9849-bbc39c0be573 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  2 09:20:44 np0005466030 nova_compute[230518]: 2025-10-02 13:20:44.300 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411244.2979326, 92ef7ede-4ed2-4a81-9849-bbc39c0be573 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:20:44 np0005466030 nova_compute[230518]: 2025-10-02 13:20:44.301 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:20:44 np0005466030 nova_compute[230518]: 2025-10-02 13:20:44.306 2 INFO nova.virt.libvirt.driver [-] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Instance running successfully.#033[00m
Oct  2 09:20:44 np0005466030 nova_compute[230518]: 2025-10-02 13:20:44.307 2 INFO nova.virt.libvirt.driver [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Instance soft rebooted successfully.#033[00m
Oct  2 09:20:44 np0005466030 nova_compute[230518]: 2025-10-02 13:20:44.307 2 DEBUG nova.compute.manager [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:20:44 np0005466030 nova_compute[230518]: 2025-10-02 13:20:44.330 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:20:44 np0005466030 nova_compute[230518]: 2025-10-02 13:20:44.333 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:20:44 np0005466030 nova_compute[230518]: 2025-10-02 13:20:44.361 2 DEBUG oslo_concurrency.lockutils [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 21.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:20:44 np0005466030 nova_compute[230518]: 2025-10-02 13:20:44.363 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] During sync_power_state the instance has a pending task (reboot_started). Skip.#033[00m
Oct  2 09:20:44 np0005466030 nova_compute[230518]: 2025-10-02 13:20:44.363 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411244.2996092, 92ef7ede-4ed2-4a81-9849-bbc39c0be573 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:20:44 np0005466030 nova_compute[230518]: 2025-10-02 13:20:44.363 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] VM Started (Lifecycle Event)#033[00m
Oct  2 09:20:44 np0005466030 nova_compute[230518]: 2025-10-02 13:20:44.382 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:20:44 np0005466030 nova_compute[230518]: 2025-10-02 13:20:44.386 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:20:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:20:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:44.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.138 2 DEBUG nova.compute.manager [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.138 2 DEBUG oslo_concurrency.lockutils [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.138 2 DEBUG oslo_concurrency.lockutils [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.139 2 DEBUG oslo_concurrency.lockutils [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.139 2 DEBUG nova.compute.manager [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] No waiting events found dispatching network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.139 2 WARNING nova.compute.manager [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received unexpected event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.139 2 DEBUG nova.compute.manager [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.139 2 DEBUG oslo_concurrency.lockutils [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.140 2 DEBUG oslo_concurrency.lockutils [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.140 2 DEBUG oslo_concurrency.lockutils [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.140 2 DEBUG nova.compute.manager [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] No waiting events found dispatching network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.140 2 WARNING nova.compute.manager [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received unexpected event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.140 2 DEBUG nova.compute.manager [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.141 2 DEBUG oslo_concurrency.lockutils [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.141 2 DEBUG oslo_concurrency.lockutils [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.141 2 DEBUG oslo_concurrency.lockutils [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.141 2 DEBUG nova.compute.manager [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] No waiting events found dispatching network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.141 2 WARNING nova.compute.manager [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received unexpected event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:20:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:45.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.250 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.251 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.251 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:20:45 np0005466030 nova_compute[230518]: 2025-10-02 13:20:45.251 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 92ef7ede-4ed2-4a81-9849-bbc39c0be573 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:20:46 np0005466030 nova_compute[230518]: 2025-10-02 13:20:46.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:46 np0005466030 nova_compute[230518]: 2025-10-02 13:20:46.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:46.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:47 np0005466030 nova_compute[230518]: 2025-10-02 13:20:47.060 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:20:47 np0005466030 nova_compute[230518]: 2025-10-02 13:20:47.076 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:20:47 np0005466030 nova_compute[230518]: 2025-10-02 13:20:47.076 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:20:47 np0005466030 nova_compute[230518]: 2025-10-02 13:20:47.076 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:47.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:48.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:49 np0005466030 nova_compute[230518]: 2025-10-02 13:20:49.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:49 np0005466030 nova_compute[230518]: 2025-10-02 13:20:49.076 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:20:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:20:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:49.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:20:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:50.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:51 np0005466030 nova_compute[230518]: 2025-10-02 13:20:51.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:20:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:51.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:20:51 np0005466030 nova_compute[230518]: 2025-10-02 13:20:51.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:52.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:20:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:53.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:20:53 np0005466030 podman[315230]: 2025-10-02 13:20:53.819445437 +0000 UTC m=+0.064827555 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:20:53 np0005466030 podman[315229]: 2025-10-02 13:20:53.849317074 +0000 UTC m=+0.091345367 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3)
Oct  2 09:20:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:20:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:54.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:55.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:56 np0005466030 nova_compute[230518]: 2025-10-02 13:20:56.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:56 np0005466030 ovn_controller[129257]: 2025-10-02T13:20:56Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:04:8b:f7 10.100.0.5
Oct  2 09:20:56 np0005466030 nova_compute[230518]: 2025-10-02 13:20:56.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:20:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:56.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:57.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:58.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:20:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:20:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:59.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:20:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:00.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:01 np0005466030 nova_compute[230518]: 2025-10-02 13:21:01.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:01.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:01 np0005466030 nova_compute[230518]: 2025-10-02 13:21:01.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:02 np0005466030 nova_compute[230518]: 2025-10-02 13:21:02.850 2 DEBUG nova.compute.manager [req-baa45991-37e7-4299-9651-928e738c8e3b req-bf63cf9e-90a4-4f66-afaf-9ffbd90f4292 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:21:02 np0005466030 nova_compute[230518]: 2025-10-02 13:21:02.850 2 DEBUG nova.compute.manager [req-baa45991-37e7-4299-9651-928e738c8e3b req-bf63cf9e-90a4-4f66-afaf-9ffbd90f4292 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing instance network info cache due to event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:21:02 np0005466030 nova_compute[230518]: 2025-10-02 13:21:02.850 2 DEBUG oslo_concurrency.lockutils [req-baa45991-37e7-4299-9651-928e738c8e3b req-bf63cf9e-90a4-4f66-afaf-9ffbd90f4292 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:21:02 np0005466030 nova_compute[230518]: 2025-10-02 13:21:02.850 2 DEBUG oslo_concurrency.lockutils [req-baa45991-37e7-4299-9651-928e738c8e3b req-bf63cf9e-90a4-4f66-afaf-9ffbd90f4292 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:21:02 np0005466030 nova_compute[230518]: 2025-10-02 13:21:02.851 2 DEBUG nova.network.neutron [req-baa45991-37e7-4299-9651-928e738c8e3b req-bf63cf9e-90a4-4f66-afaf-9ffbd90f4292 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:21:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:02.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:03.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:04.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:21:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:05.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:21:06 np0005466030 nova_compute[230518]: 2025-10-02 13:21:06.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:06 np0005466030 nova_compute[230518]: 2025-10-02 13:21:06.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:06 np0005466030 nova_compute[230518]: 2025-10-02 13:21:06.851 2 DEBUG nova.network.neutron [req-baa45991-37e7-4299-9651-928e738c8e3b req-bf63cf9e-90a4-4f66-afaf-9ffbd90f4292 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updated VIF entry in instance network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:21:06 np0005466030 nova_compute[230518]: 2025-10-02 13:21:06.851 2 DEBUG nova.network.neutron [req-baa45991-37e7-4299-9651-928e738c8e3b req-bf63cf9e-90a4-4f66-afaf-9ffbd90f4292 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:21:06 np0005466030 nova_compute[230518]: 2025-10-02 13:21:06.892 2 DEBUG oslo_concurrency.lockutils [req-baa45991-37e7-4299-9651-928e738c8e3b req-bf63cf9e-90a4-4f66-afaf-9ffbd90f4292 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:21:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:06.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:07.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:08.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:21:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:09.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:21:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:09.375 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:21:09 np0005466030 nova_compute[230518]: 2025-10-02 13:21:09.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:09.377 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:21:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:09 np0005466030 nova_compute[230518]: 2025-10-02 13:21:09.733 2 DEBUG nova.compute.manager [req-3f6c29bc-7450-49f6-acb5-7c373289f212 req-3bf8c527-a8ee-4a61-b287-7f3051564d62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:21:09 np0005466030 nova_compute[230518]: 2025-10-02 13:21:09.734 2 DEBUG nova.compute.manager [req-3f6c29bc-7450-49f6-acb5-7c373289f212 req-3bf8c527-a8ee-4a61-b287-7f3051564d62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing instance network info cache due to event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:21:09 np0005466030 nova_compute[230518]: 2025-10-02 13:21:09.734 2 DEBUG oslo_concurrency.lockutils [req-3f6c29bc-7450-49f6-acb5-7c373289f212 req-3bf8c527-a8ee-4a61-b287-7f3051564d62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:21:09 np0005466030 nova_compute[230518]: 2025-10-02 13:21:09.734 2 DEBUG oslo_concurrency.lockutils [req-3f6c29bc-7450-49f6-acb5-7c373289f212 req-3bf8c527-a8ee-4a61-b287-7f3051564d62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:21:09 np0005466030 nova_compute[230518]: 2025-10-02 13:21:09.734 2 DEBUG nova.network.neutron [req-3f6c29bc-7450-49f6-acb5-7c373289f212 req-3bf8c527-a8ee-4a61-b287-7f3051564d62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:21:10 np0005466030 nova_compute[230518]: 2025-10-02 13:21:10.608 2 DEBUG oslo_concurrency.lockutils [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:21:10 np0005466030 nova_compute[230518]: 2025-10-02 13:21:10.609 2 DEBUG oslo_concurrency.lockutils [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:21:10 np0005466030 nova_compute[230518]: 2025-10-02 13:21:10.622 2 INFO nova.compute.manager [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Detaching volume 501f7163-061f-4829-9c05-ac69ebd0ace5#033[00m
Oct  2 09:21:10 np0005466030 nova_compute[230518]: 2025-10-02 13:21:10.735 2 INFO nova.virt.block_device [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Attempting to driver detach volume 501f7163-061f-4829-9c05-ac69ebd0ace5 from mountpoint /dev/vdb#033[00m
Oct  2 09:21:10 np0005466030 nova_compute[230518]: 2025-10-02 13:21:10.749 2 DEBUG nova.virt.libvirt.driver [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Attempting to detach device vdb from instance 92ef7ede-4ed2-4a81-9849-bbc39c0be573 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  2 09:21:10 np0005466030 nova_compute[230518]: 2025-10-02 13:21:10.750 2 DEBUG nova.virt.libvirt.guest [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:21:10 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:21:10 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-501f7163-061f-4829-9c05-ac69ebd0ace5">
Oct  2 09:21:10 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:21:10 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:21:10 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:21:10 np0005466030 nova_compute[230518]:  </source>
Oct  2 09:21:10 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:21:10 np0005466030 nova_compute[230518]:  <serial>501f7163-061f-4829-9c05-ac69ebd0ace5</serial>
Oct  2 09:21:10 np0005466030 nova_compute[230518]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 09:21:10 np0005466030 nova_compute[230518]: </disk>
Oct  2 09:21:10 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:21:10 np0005466030 nova_compute[230518]: 2025-10-02 13:21:10.760 2 INFO nova.virt.libvirt.driver [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Successfully detached device vdb from instance 92ef7ede-4ed2-4a81-9849-bbc39c0be573 from the persistent domain config.#033[00m
Oct  2 09:21:10 np0005466030 nova_compute[230518]: 2025-10-02 13:21:10.761 2 DEBUG nova.virt.libvirt.driver [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 92ef7ede-4ed2-4a81-9849-bbc39c0be573 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  2 09:21:10 np0005466030 nova_compute[230518]: 2025-10-02 13:21:10.762 2 DEBUG nova.virt.libvirt.guest [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] detach device xml: <disk type="network" device="disk">
Oct  2 09:21:10 np0005466030 nova_compute[230518]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:21:10 np0005466030 nova_compute[230518]:  <source protocol="rbd" name="volumes/volume-501f7163-061f-4829-9c05-ac69ebd0ace5">
Oct  2 09:21:10 np0005466030 nova_compute[230518]:    <host name="192.168.122.100" port="6789"/>
Oct  2 09:21:10 np0005466030 nova_compute[230518]:    <host name="192.168.122.102" port="6789"/>
Oct  2 09:21:10 np0005466030 nova_compute[230518]:    <host name="192.168.122.101" port="6789"/>
Oct  2 09:21:10 np0005466030 nova_compute[230518]:  </source>
Oct  2 09:21:10 np0005466030 nova_compute[230518]:  <target dev="vdb" bus="virtio"/>
Oct  2 09:21:10 np0005466030 nova_compute[230518]:  <serial>501f7163-061f-4829-9c05-ac69ebd0ace5</serial>
Oct  2 09:21:10 np0005466030 nova_compute[230518]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct  2 09:21:10 np0005466030 nova_compute[230518]: </disk>
Oct  2 09:21:10 np0005466030 nova_compute[230518]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  2 09:21:10 np0005466030 nova_compute[230518]: 2025-10-02 13:21:10.816 2 DEBUG nova.network.neutron [req-3f6c29bc-7450-49f6-acb5-7c373289f212 req-3bf8c527-a8ee-4a61-b287-7f3051564d62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updated VIF entry in instance network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:21:10 np0005466030 nova_compute[230518]: 2025-10-02 13:21:10.827 2 DEBUG nova.network.neutron [req-3f6c29bc-7450-49f6-acb5-7c373289f212 req-3bf8c527-a8ee-4a61-b287-7f3051564d62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:21:10 np0005466030 nova_compute[230518]: 2025-10-02 13:21:10.842 2 DEBUG oslo_concurrency.lockutils [req-3f6c29bc-7450-49f6-acb5-7c373289f212 req-3bf8c527-a8ee-4a61-b287-7f3051564d62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:21:10 np0005466030 nova_compute[230518]: 2025-10-02 13:21:10.888 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759411270.887751, 92ef7ede-4ed2-4a81-9849-bbc39c0be573 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  2 09:21:10 np0005466030 nova_compute[230518]: 2025-10-02 13:21:10.890 2 DEBUG nova.virt.libvirt.driver [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 92ef7ede-4ed2-4a81-9849-bbc39c0be573 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  2 09:21:10 np0005466030 nova_compute[230518]: 2025-10-02 13:21:10.894 2 INFO nova.virt.libvirt.driver [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Successfully detached device vdb from instance 92ef7ede-4ed2-4a81-9849-bbc39c0be573 from the live domain config.#033[00m
Oct  2 09:21:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:10.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:11 np0005466030 nova_compute[230518]: 2025-10-02 13:21:11.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:11 np0005466030 nova_compute[230518]: 2025-10-02 13:21:11.045 2 DEBUG nova.objects.instance [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'flavor' on Instance uuid 92ef7ede-4ed2-4a81-9849-bbc39c0be573 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:21:11 np0005466030 nova_compute[230518]: 2025-10-02 13:21:11.080 2 DEBUG oslo_concurrency.lockutils [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.471s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:21:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:21:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:11.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:21:11 np0005466030 nova_compute[230518]: 2025-10-02 13:21:11.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:21:12 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3526503879' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:21:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:21:12 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3526503879' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:21:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:12.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:13.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.521 2 DEBUG oslo_concurrency.lockutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.521 2 DEBUG oslo_concurrency.lockutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.522 2 DEBUG oslo_concurrency.lockutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.522 2 DEBUG oslo_concurrency.lockutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.523 2 DEBUG oslo_concurrency.lockutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.525 2 INFO nova.compute.manager [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Terminating instance#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.527 2 DEBUG nova.compute.manager [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:21:13 np0005466030 kernel: tap585ce74b-9d (unregistering): left promiscuous mode
Oct  2 09:21:13 np0005466030 NetworkManager[44960]: <info>  [1759411273.5824] device (tap585ce74b-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:13 np0005466030 ovn_controller[129257]: 2025-10-02T13:21:13Z|00882|binding|INFO|Releasing lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 from this chassis (sb_readonly=0)
Oct  2 09:21:13 np0005466030 ovn_controller[129257]: 2025-10-02T13:21:13Z|00883|binding|INFO|Setting lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 down in Southbound
Oct  2 09:21:13 np0005466030 ovn_controller[129257]: 2025-10-02T13:21:13Z|00884|binding|INFO|Removing iface tap585ce74b-9d ovn-installed in OVS
Oct  2 09:21:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.602 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:8b:f7 10.100.0.5'], port_security=['fa:16:3e:04:8b:f7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '92ef7ede-4ed2-4a81-9849-bbc39c0be573', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-540159ad-ffd2-462a-a8b9-e86914ed6249', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ced4d30c525c44cca617c3b9838d21b7', 'neutron:revision_number': '8', 'neutron:security_group_ids': '0c95f94e-20dd-45bd-9644-7e1d8998955e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d33f65d0-f5c3-43e4-a0b6-d26b238c6ffb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=585ce74b-9d9e-45eb-a324-9ce87a1fcec0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:21:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.605 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 in datapath 540159ad-ffd2-462a-a8b9-e86914ed6249 unbound from our chassis#033[00m
Oct  2 09:21:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.608 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 540159ad-ffd2-462a-a8b9-e86914ed6249, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:21:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.610 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9cdbbbea-b84d-4790-96be-bad14ccfce5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:21:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.611 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 namespace which is not needed anymore#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:13 np0005466030 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000d6.scope: Deactivated successfully.
Oct  2 09:21:13 np0005466030 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000d6.scope: Consumed 13.678s CPU time.
Oct  2 09:21:13 np0005466030 systemd-machined[188247]: Machine qemu-99-instance-000000d6 terminated.
Oct  2 09:21:13 np0005466030 podman[315277]: 2025-10-02 13:21:13.669687934 +0000 UTC m=+0.057517806 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:21:13 np0005466030 podman[315274]: 2025-10-02 13:21:13.706034974 +0000 UTC m=+0.094350270 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:21:13 np0005466030 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[315210]: [NOTICE]   (315217) : haproxy version is 2.8.14-c23fe91
Oct  2 09:21:13 np0005466030 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[315210]: [NOTICE]   (315217) : path to executable is /usr/sbin/haproxy
Oct  2 09:21:13 np0005466030 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[315210]: [WARNING]  (315217) : Exiting Master process...
Oct  2 09:21:13 np0005466030 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[315210]: [WARNING]  (315217) : Exiting Master process...
Oct  2 09:21:13 np0005466030 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[315210]: [ALERT]    (315217) : Current worker (315219) exited with code 143 (Terminated)
Oct  2 09:21:13 np0005466030 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[315210]: [WARNING]  (315217) : All workers exited. Exiting... (0)
Oct  2 09:21:13 np0005466030 systemd[1]: libpod-0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c.scope: Deactivated successfully.
Oct  2 09:21:13 np0005466030 podman[315340]: 2025-10-02 13:21:13.760617756 +0000 UTC m=+0.052584870 container died 0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.761 2 INFO nova.virt.libvirt.driver [-] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Instance destroyed successfully.#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.762 2 DEBUG nova.objects.instance [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'resources' on Instance uuid 92ef7ede-4ed2-4a81-9849-bbc39c0be573 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.779 2 DEBUG nova.virt.libvirt.vif [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:19:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-2037893726',display_name='tempest-TestMinimumBasicScenario-server-2037893726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-2037893726',id=214,image_ref='f6be8018-0ea2-42f8-a1d7-8d704069aac9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBwVl8G5ieo8D6LRNceGyD0RzVHiNmAhn+oNx9JwxYWBR403mrZfQlZXBcadX/gFJtwpWDcUYsJ9PDNLmwgBCuRs7yyL95+8n31Ih8AeyaGOYLATIt1ABWixcUbVaElI8A==',key_name='tempest-TestMinimumBasicScenario-635426937',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:19:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ced4d30c525c44cca617c3b9838d21b7',ramdisk_id='',reservation_id='r-joxmr5lv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f6be8018-0ea2-42f8-a1d7-8d704069aac9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1527105691',owner_user_name='tempest-TestMinimumBasicScenario-1527105691-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:20:44Z,user_data=None,user_id='74f5186fabfb4fea86d32c8ef1f2e354',uuid=92ef7ede-4ed2-4a81-9849-bbc39c0be573,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.779 2 DEBUG nova.network.os_vif_util [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converting VIF {"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.780 2 DEBUG nova.network.os_vif_util [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:04:8b:f7,bridge_name='br-int',has_traffic_filtering=True,id=585ce74b-9d9e-45eb-a324-9ce87a1fcec0,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap585ce74b-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.781 2 DEBUG os_vif [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:8b:f7,bridge_name='br-int',has_traffic_filtering=True,id=585ce74b-9d9e-45eb-a324-9ce87a1fcec0,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap585ce74b-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.783 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap585ce74b-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:21:13 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c-userdata-shm.mount: Deactivated successfully.
Oct  2 09:21:13 np0005466030 systemd[1]: var-lib-containers-storage-overlay-7ec57861729784fbfedd5a40971eeacc0f72b749ec4f97def4c07dfc1fd8ea29-merged.mount: Deactivated successfully.
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.803 2 INFO os_vif [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:8b:f7,bridge_name='br-int',has_traffic_filtering=True,id=585ce74b-9d9e-45eb-a324-9ce87a1fcec0,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap585ce74b-9d')#033[00m
Oct  2 09:21:13 np0005466030 podman[315340]: 2025-10-02 13:21:13.809400506 +0000 UTC m=+0.101367640 container cleanup 0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  2 09:21:13 np0005466030 systemd[1]: libpod-conmon-0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c.scope: Deactivated successfully.
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.867 2 DEBUG nova.compute.manager [req-4e21ce79-eecc-4ab5-aa80-e934f0a1639a req-cf053785-998b-4e18-89d5-19cd1da777ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-unplugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.867 2 DEBUG oslo_concurrency.lockutils [req-4e21ce79-eecc-4ab5-aa80-e934f0a1639a req-cf053785-998b-4e18-89d5-19cd1da777ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.867 2 DEBUG oslo_concurrency.lockutils [req-4e21ce79-eecc-4ab5-aa80-e934f0a1639a req-cf053785-998b-4e18-89d5-19cd1da777ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.868 2 DEBUG oslo_concurrency.lockutils [req-4e21ce79-eecc-4ab5-aa80-e934f0a1639a req-cf053785-998b-4e18-89d5-19cd1da777ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.868 2 DEBUG nova.compute.manager [req-4e21ce79-eecc-4ab5-aa80-e934f0a1639a req-cf053785-998b-4e18-89d5-19cd1da777ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] No waiting events found dispatching network-vif-unplugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.868 2 DEBUG nova.compute.manager [req-4e21ce79-eecc-4ab5-aa80-e934f0a1639a req-cf053785-998b-4e18-89d5-19cd1da777ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-unplugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:21:13 np0005466030 podman[315396]: 2025-10-02 13:21:13.885564545 +0000 UTC m=+0.049572866 container remove 0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  2 09:21:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.893 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2c97a3f4-a9be-4208-88fe-574d069a5c6a]: (4, ('Thu Oct  2 01:21:13 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 (0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c)\n0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c\nThu Oct  2 01:21:13 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 (0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c)\n0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:21:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.894 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[855755dd-4eb7-46a5-9cb1-5740632a49b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:21:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.895 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap540159ad-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:13 np0005466030 kernel: tap540159ad-f0: left promiscuous mode
Oct  2 09:21:13 np0005466030 nova_compute[230518]: 2025-10-02 13:21:13.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.911 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6b71bd91-8cbc-4003-88ac-6d0da21c08fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:21:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.949 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[37c933a2-8726-48d1-891d-f799daf63040]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:21:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.951 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c75d1095-41e2-4f35-91f2-2d46912bc282]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:21:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.967 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[679a3a29-41fe-4794-8cf3-83627857d075]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 910674, 'reachable_time': 29141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315414, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:21:13 np0005466030 systemd[1]: run-netns-ovnmeta\x2d540159ad\x2dffd2\x2d462a\x2da8b9\x2de86914ed6249.mount: Deactivated successfully.
Oct  2 09:21:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.974 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:21:13 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.974 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[78b8b3d8-0684-4373-b3d4-df3b1f1b8762]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:21:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:14 np0005466030 nova_compute[230518]: 2025-10-02 13:21:14.789 2 INFO nova.virt.libvirt.driver [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Deleting instance files /var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573_del#033[00m
Oct  2 09:21:14 np0005466030 nova_compute[230518]: 2025-10-02 13:21:14.790 2 INFO nova.virt.libvirt.driver [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Deletion of /var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573_del complete#033[00m
Oct  2 09:21:14 np0005466030 nova_compute[230518]: 2025-10-02 13:21:14.848 2 INFO nova.compute.manager [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Took 1.32 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:21:14 np0005466030 nova_compute[230518]: 2025-10-02 13:21:14.849 2 DEBUG oslo.service.loopingcall [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:21:14 np0005466030 nova_compute[230518]: 2025-10-02 13:21:14.849 2 DEBUG nova.compute.manager [-] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:21:14 np0005466030 nova_compute[230518]: 2025-10-02 13:21:14.850 2 DEBUG nova.network.neutron [-] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:21:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:14.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:21:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:15.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:21:15 np0005466030 nova_compute[230518]: 2025-10-02 13:21:15.948 2 DEBUG nova.compute.manager [req-90e88292-beea-4ef0-9cea-a7ace00d221c req-4a3db1ce-b26f-4594-ad57-ac5b23da4d61 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:21:15 np0005466030 nova_compute[230518]: 2025-10-02 13:21:15.948 2 DEBUG oslo_concurrency.lockutils [req-90e88292-beea-4ef0-9cea-a7ace00d221c req-4a3db1ce-b26f-4594-ad57-ac5b23da4d61 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:21:15 np0005466030 nova_compute[230518]: 2025-10-02 13:21:15.949 2 DEBUG oslo_concurrency.lockutils [req-90e88292-beea-4ef0-9cea-a7ace00d221c req-4a3db1ce-b26f-4594-ad57-ac5b23da4d61 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:21:15 np0005466030 nova_compute[230518]: 2025-10-02 13:21:15.949 2 DEBUG oslo_concurrency.lockutils [req-90e88292-beea-4ef0-9cea-a7ace00d221c req-4a3db1ce-b26f-4594-ad57-ac5b23da4d61 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:21:15 np0005466030 nova_compute[230518]: 2025-10-02 13:21:15.949 2 DEBUG nova.compute.manager [req-90e88292-beea-4ef0-9cea-a7ace00d221c req-4a3db1ce-b26f-4594-ad57-ac5b23da4d61 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] No waiting events found dispatching network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:21:15 np0005466030 nova_compute[230518]: 2025-10-02 13:21:15.950 2 WARNING nova.compute.manager [req-90e88292-beea-4ef0-9cea-a7ace00d221c req-4a3db1ce-b26f-4594-ad57-ac5b23da4d61 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received unexpected event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:21:16 np0005466030 nova_compute[230518]: 2025-10-02 13:21:16.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:16 np0005466030 nova_compute[230518]: 2025-10-02 13:21:16.906 2 DEBUG nova.network.neutron [-] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:21:16 np0005466030 nova_compute[230518]: 2025-10-02 13:21:16.927 2 INFO nova.compute.manager [-] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Took 2.08 seconds to deallocate network for instance.#033[00m
Oct  2 09:21:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:16 np0005466030 nova_compute[230518]: 2025-10-02 13:21:16.982 2 DEBUG oslo_concurrency.lockutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:21:16 np0005466030 nova_compute[230518]: 2025-10-02 13:21:16.982 2 DEBUG oslo_concurrency.lockutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:21:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:16.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:17 np0005466030 nova_compute[230518]: 2025-10-02 13:21:17.020 2 DEBUG nova.compute.manager [req-a2a922d8-6ea6-457b-b10a-f4e14910fb11 req-601cd08a-2389-4c05-be39-be18125786ea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-deleted-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:21:17 np0005466030 nova_compute[230518]: 2025-10-02 13:21:17.045 2 DEBUG oslo_concurrency.processutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:21:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:17.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:17 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:17.379 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:21:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:21:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2487810029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:21:17 np0005466030 nova_compute[230518]: 2025-10-02 13:21:17.467 2 DEBUG oslo_concurrency.processutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:21:17 np0005466030 nova_compute[230518]: 2025-10-02 13:21:17.475 2 DEBUG nova.compute.provider_tree [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:21:17 np0005466030 nova_compute[230518]: 2025-10-02 13:21:17.494 2 DEBUG nova.scheduler.client.report [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:21:17 np0005466030 nova_compute[230518]: 2025-10-02 13:21:17.528 2 DEBUG oslo_concurrency.lockutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:21:17 np0005466030 nova_compute[230518]: 2025-10-02 13:21:17.558 2 INFO nova.scheduler.client.report [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Deleted allocations for instance 92ef7ede-4ed2-4a81-9849-bbc39c0be573#033[00m
Oct  2 09:21:17 np0005466030 nova_compute[230518]: 2025-10-02 13:21:17.624 2 DEBUG oslo_concurrency.lockutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:21:18 np0005466030 nova_compute[230518]: 2025-10-02 13:21:18.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:18.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:21:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:19.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:21:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e406 e406: 3 total, 3 up, 3 in
Oct  2 09:21:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:21:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:20.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:21:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:21.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:21 np0005466030 nova_compute[230518]: 2025-10-02 13:21:21.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:22.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:23.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:23 np0005466030 nova_compute[230518]: 2025-10-02 13:21:23.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:23 np0005466030 nova_compute[230518]: 2025-10-02 13:21:23.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:23 np0005466030 nova_compute[230518]: 2025-10-02 13:21:23.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:24 np0005466030 podman[315440]: 2025-10-02 13:21:24.793898826 +0000 UTC m=+0.047879263 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:21:24 np0005466030 podman[315439]: 2025-10-02 13:21:24.832133145 +0000 UTC m=+0.085413930 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 09:21:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:24.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:25.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:25.979 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:21:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:25.979 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:21:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:25.980 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:21:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 e407: 3 total, 3 up, 3 in
Oct  2 09:21:26 np0005466030 nova_compute[230518]: 2025-10-02 13:21:26.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:26.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:27.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:28 np0005466030 nova_compute[230518]: 2025-10-02 13:21:28.761 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759411273.759324, 92ef7ede-4ed2-4a81-9849-bbc39c0be573 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:21:28 np0005466030 nova_compute[230518]: 2025-10-02 13:21:28.761 2 INFO nova.compute.manager [-] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:21:28 np0005466030 nova_compute[230518]: 2025-10-02 13:21:28.782 2 DEBUG nova.compute.manager [None req-805e5f6c-0c16-4340-9fcc-d205d209c69c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:21:28 np0005466030 nova_compute[230518]: 2025-10-02 13:21:28.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:28.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:29.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:31.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:31.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:21:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:21:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 09:21:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 09:21:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 09:21:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:21:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:21:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:21:31 np0005466030 nova_compute[230518]: 2025-10-02 13:21:31.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:33.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.074 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.074 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.074 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.075 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.075 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:21:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:33.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:21:33 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1768222646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.512 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.684 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.685 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4209MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.685 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.685 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.775 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.775 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.814 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.830 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.830 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.848 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.869 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:21:33 np0005466030 nova_compute[230518]: 2025-10-02 13:21:33.892 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:21:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:21:34 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/479049458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:21:34 np0005466030 nova_compute[230518]: 2025-10-02 13:21:34.320 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:21:34 np0005466030 nova_compute[230518]: 2025-10-02 13:21:34.326 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:21:34 np0005466030 nova_compute[230518]: 2025-10-02 13:21:34.341 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:21:34 np0005466030 nova_compute[230518]: 2025-10-02 13:21:34.360 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:21:34 np0005466030 nova_compute[230518]: 2025-10-02 13:21:34.360 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:21:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:21:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:35.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:21:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:21:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:35.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:21:36 np0005466030 nova_compute[230518]: 2025-10-02 13:21:36.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:37.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:21:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:37.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:21:38 np0005466030 nova_compute[230518]: 2025-10-02 13:21:38.361 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:38 np0005466030 nova_compute[230518]: 2025-10-02 13:21:38.361 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:21:38 np0005466030 nova_compute[230518]: 2025-10-02 13:21:38.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:39.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:39.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:21:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:21:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:40 np0005466030 nova_compute[230518]: 2025-10-02 13:21:40.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:40 np0005466030 nova_compute[230518]: 2025-10-02 13:21:40.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:41.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:41.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:41 np0005466030 nova_compute[230518]: 2025-10-02 13:21:41.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:43.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:43 np0005466030 nova_compute[230518]: 2025-10-02 13:21:43.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:43 np0005466030 nova_compute[230518]: 2025-10-02 13:21:43.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:21:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:43.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:21:43 np0005466030 podman[315706]: 2025-10-02 13:21:43.797897166 +0000 UTC m=+0.044791376 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 09:21:43 np0005466030 nova_compute[230518]: 2025-10-02 13:21:43.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:43 np0005466030 podman[315705]: 2025-10-02 13:21:43.852670444 +0000 UTC m=+0.100770062 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 09:21:44 np0005466030 nova_compute[230518]: 2025-10-02 13:21:44.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:45.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:45.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:46 np0005466030 nova_compute[230518]: 2025-10-02 13:21:46.062 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:46 np0005466030 nova_compute[230518]: 2025-10-02 13:21:46.063 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:21:46 np0005466030 nova_compute[230518]: 2025-10-02 13:21:46.063 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:21:46 np0005466030 nova_compute[230518]: 2025-10-02 13:21:46.076 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:21:46 np0005466030 nova_compute[230518]: 2025-10-02 13:21:46.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:47.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:47 np0005466030 nova_compute[230518]: 2025-10-02 13:21:47.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:47.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:47.670 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:21:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:47.671 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:21:47 np0005466030 nova_compute[230518]: 2025-10-02 13:21:47.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:48 np0005466030 nova_compute[230518]: 2025-10-02 13:21:48.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:21:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:49.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:21:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:21:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:49.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:21:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:50 np0005466030 nova_compute[230518]: 2025-10-02 13:21:50.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:50 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:21:50.673 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:21:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:51.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:21:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:51.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:21:51 np0005466030 nova_compute[230518]: 2025-10-02 13:21:51.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:53.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:53.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:53 np0005466030 nova_compute[230518]: 2025-10-02 13:21:53.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:21:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:55.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:55.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:55 np0005466030 podman[315747]: 2025-10-02 13:21:55.810255328 +0000 UTC m=+0.065497115 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:21:55 np0005466030 podman[315748]: 2025-10-02 13:21:55.850051306 +0000 UTC m=+0.091904963 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:21:56 np0005466030 nova_compute[230518]: 2025-10-02 13:21:56.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:57.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:57 np0005466030 nova_compute[230518]: 2025-10-02 13:21:57.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:21:57 np0005466030 nova_compute[230518]: 2025-10-02 13:21:57.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:21:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:57.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:58 np0005466030 nova_compute[230518]: 2025-10-02 13:21:58.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:21:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:21:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:59.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:21:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:21:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:21:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:59.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:21:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:22:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:01.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:22:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:01.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:01 np0005466030 nova_compute[230518]: 2025-10-02 13:22:01.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:03.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:22:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:03.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:22:03 np0005466030 nova_compute[230518]: 2025-10-02 13:22:03.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:22:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:05.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:22:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:22:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3295503546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:22:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:22:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3295503546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:22:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:05.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:06 np0005466030 nova_compute[230518]: 2025-10-02 13:22:06.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:07.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:07.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:08 np0005466030 nova_compute[230518]: 2025-10-02 13:22:08.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:09.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:22:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:09.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:22:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:11.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:11.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:11 np0005466030 nova_compute[230518]: 2025-10-02 13:22:11.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:13.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:13.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:13 np0005466030 nova_compute[230518]: 2025-10-02 13:22:13.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:14 np0005466030 ovn_controller[129257]: 2025-10-02T13:22:14Z|00885|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct  2 09:22:14 np0005466030 podman[315789]: 2025-10-02 13:22:14.807520227 +0000 UTC m=+0.054349015 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent)
Oct  2 09:22:14 np0005466030 podman[315788]: 2025-10-02 13:22:14.877125821 +0000 UTC m=+0.119054116 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 09:22:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:15.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:15.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:16 np0005466030 nova_compute[230518]: 2025-10-02 13:22:16.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999981s ======
Oct  2 09:22:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:17.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999981s
Oct  2 09:22:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:17.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:18 np0005466030 nova_compute[230518]: 2025-10-02 13:22:18.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:19.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:19.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:21.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:21.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:21 np0005466030 nova_compute[230518]: 2025-10-02 13:22:21.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:23.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:23.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:23 np0005466030 nova_compute[230518]: 2025-10-02 13:22:23.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:25.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:25.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:22:25.980 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:22:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:22:25.980 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:22:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:22:25.981 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:22:26 np0005466030 podman[315831]: 2025-10-02 13:22:26.813360655 +0000 UTC m=+0.065489765 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:22:26 np0005466030 podman[315830]: 2025-10-02 13:22:26.82083452 +0000 UTC m=+0.078168004 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:22:26 np0005466030 nova_compute[230518]: 2025-10-02 13:22:26.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:27 np0005466030 nova_compute[230518]: 2025-10-02 13:22:27.071 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:27 np0005466030 nova_compute[230518]: 2025-10-02 13:22:27.072 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:22:27 np0005466030 nova_compute[230518]: 2025-10-02 13:22:27.086 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:22:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:27.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:27.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:28 np0005466030 nova_compute[230518]: 2025-10-02 13:22:28.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:29.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:29.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:22:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:31.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:22:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:22:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:31.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:22:31 np0005466030 nova_compute[230518]: 2025-10-02 13:22:31.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:33.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:33.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:33 np0005466030 nova_compute[230518]: 2025-10-02 13:22:33.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:34 np0005466030 nova_compute[230518]: 2025-10-02 13:22:34.067 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:34 np0005466030 nova_compute[230518]: 2025-10-02 13:22:34.098 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:22:34 np0005466030 nova_compute[230518]: 2025-10-02 13:22:34.098 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:22:34 np0005466030 nova_compute[230518]: 2025-10-02 13:22:34.098 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:22:34 np0005466030 nova_compute[230518]: 2025-10-02 13:22:34.099 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:22:34 np0005466030 nova_compute[230518]: 2025-10-02 13:22:34.099 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:22:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:22:34 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/520594306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:22:34 np0005466030 nova_compute[230518]: 2025-10-02 13:22:34.550 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:22:34 np0005466030 nova_compute[230518]: 2025-10-02 13:22:34.711 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:22:34 np0005466030 nova_compute[230518]: 2025-10-02 13:22:34.712 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4193MB free_disk=20.95783233642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:22:34 np0005466030 nova_compute[230518]: 2025-10-02 13:22:34.712 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:22:34 np0005466030 nova_compute[230518]: 2025-10-02 13:22:34.713 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:22:34 np0005466030 nova_compute[230518]: 2025-10-02 13:22:34.773 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:22:34 np0005466030 nova_compute[230518]: 2025-10-02 13:22:34.774 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:22:34 np0005466030 nova_compute[230518]: 2025-10-02 13:22:34.788 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:22:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:22:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:35.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:22:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:22:35 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1752492837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:22:35 np0005466030 nova_compute[230518]: 2025-10-02 13:22:35.250 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:22:35 np0005466030 nova_compute[230518]: 2025-10-02 13:22:35.255 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:22:35 np0005466030 nova_compute[230518]: 2025-10-02 13:22:35.269 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:22:35 np0005466030 nova_compute[230518]: 2025-10-02 13:22:35.270 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:22:35 np0005466030 nova_compute[230518]: 2025-10-02 13:22:35.271 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:22:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:35.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:36 np0005466030 nova_compute[230518]: 2025-10-02 13:22:36.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:37.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:37.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:38 np0005466030 nova_compute[230518]: 2025-10-02 13:22:38.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:22:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:39.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:22:39 np0005466030 nova_compute[230518]: 2025-10-02 13:22:39.256 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:39 np0005466030 nova_compute[230518]: 2025-10-02 13:22:39.257 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:22:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:22:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:39.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:22:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:40 np0005466030 nova_compute[230518]: 2025-10-02 13:22:40.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:41.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:22:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:22:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:22:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:41.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:41 np0005466030 nova_compute[230518]: 2025-10-02 13:22:41.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:42 np0005466030 nova_compute[230518]: 2025-10-02 13:22:42.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:43.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:43.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:43 np0005466030 nova_compute[230518]: 2025-10-02 13:22:43.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:45 np0005466030 nova_compute[230518]: 2025-10-02 13:22:45.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:45 np0005466030 nova_compute[230518]: 2025-10-02 13:22:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:45.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:45.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:45 np0005466030 podman[316047]: 2025-10-02 13:22:45.859610927 +0000 UTC m=+0.096180597 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:22:45 np0005466030 podman[316046]: 2025-10-02 13:22:45.86797469 +0000 UTC m=+0.107141202 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:22:46 np0005466030 nova_compute[230518]: 2025-10-02 13:22:46.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:47 np0005466030 nova_compute[230518]: 2025-10-02 13:22:47.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:47.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:47 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:22:47 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:22:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:47.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:48 np0005466030 nova_compute[230518]: 2025-10-02 13:22:48.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:48 np0005466030 nova_compute[230518]: 2025-10-02 13:22:48.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:22:48 np0005466030 nova_compute[230518]: 2025-10-02 13:22:48.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:22:48 np0005466030 nova_compute[230518]: 2025-10-02 13:22:48.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:49.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:49.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:49 np0005466030 nova_compute[230518]: 2025-10-02 13:22:49.504 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:22:50 np0005466030 nova_compute[230518]: 2025-10-02 13:22:50.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:50 np0005466030 nova_compute[230518]: 2025-10-02 13:22:50.072 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:22:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:51.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:22:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:51.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:22:51.904 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:22:51 np0005466030 nova_compute[230518]: 2025-10-02 13:22:51.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:22:51.905 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:22:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:22:51.906 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:22:51 np0005466030 nova_compute[230518]: 2025-10-02 13:22:51.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:53.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:53.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:53 np0005466030 nova_compute[230518]: 2025-10-02 13:22:53.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:22:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:55.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:22:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:55.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:22:56 np0005466030 nova_compute[230518]: 2025-10-02 13:22:56.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:22:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:57.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:22:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:57.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:57 np0005466030 podman[316141]: 2025-10-02 13:22:57.80700527 +0000 UTC m=+0.059449965 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3)
Oct  2 09:22:57 np0005466030 podman[316142]: 2025-10-02 13:22:57.835904827 +0000 UTC m=+0.082522320 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  2 09:22:58 np0005466030 nova_compute[230518]: 2025-10-02 13:22:58.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:22:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:59.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:22:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:22:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:59.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:22:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:01.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:01.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:01 np0005466030 nova_compute[230518]: 2025-10-02 13:23:01.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:03.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:03.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:03 np0005466030 nova_compute[230518]: 2025-10-02 13:23:03.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:05.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:05.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:23:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4069522708' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:23:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:23:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4069522708' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:23:06 np0005466030 nova_compute[230518]: 2025-10-02 13:23:06.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:07.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:07.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:08 np0005466030 nova_compute[230518]: 2025-10-02 13:23:08.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:09.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:09.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:11.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:11.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:11 np0005466030 nova_compute[230518]: 2025-10-02 13:23:11.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:13.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:23:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:13.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:23:14 np0005466030 nova_compute[230518]: 2025-10-02 13:23:14.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:23:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:15.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:23:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:23:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:15.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:23:16 np0005466030 podman[316181]: 2025-10-02 13:23:16.835108434 +0000 UTC m=+0.064011118 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:23:16 np0005466030 podman[316180]: 2025-10-02 13:23:16.851983263 +0000 UTC m=+0.089114886 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:23:16 np0005466030 nova_compute[230518]: 2025-10-02 13:23:16.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:17.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:17.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:19 np0005466030 nova_compute[230518]: 2025-10-02 13:23:19.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:19.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:23:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:19.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:23:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:21.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:21.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #163. Immutable memtables: 0.
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.722727) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 163
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401722782, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1838, "num_deletes": 258, "total_data_size": 4363954, "memory_usage": 4432784, "flush_reason": "Manual Compaction"}
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #164: started
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401781083, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 164, "file_size": 2870936, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79085, "largest_seqno": 80918, "table_properties": {"data_size": 2863262, "index_size": 4552, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15894, "raw_average_key_size": 19, "raw_value_size": 2847854, "raw_average_value_size": 3582, "num_data_blocks": 200, "num_entries": 795, "num_filter_entries": 795, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411236, "oldest_key_time": 1759411236, "file_creation_time": 1759411401, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 58404 microseconds, and 6326 cpu microseconds.
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.781136) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #164: 2870936 bytes OK
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.781159) [db/memtable_list.cc:519] [default] Level-0 commit table #164 started
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.802978) [db/memtable_list.cc:722] [default] Level-0 commit table #164: memtable #1 done
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.803038) EVENT_LOG_v1 {"time_micros": 1759411401803022, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.803071) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 4355594, prev total WAL file size 4355594, number of live WAL files 2.
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000160.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.805120) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303138' seq:72057594037927935, type:22 .. '6C6F676D0033323731' seq:0, type:0; will stop at (end)
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [164(2803KB)], [162(11MB)]
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401805172, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [164], "files_L6": [162], "score": -1, "input_data_size": 14893129, "oldest_snapshot_seqno": -1}
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #165: 10243 keys, 14750365 bytes, temperature: kUnknown
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401945000, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 165, "file_size": 14750365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14682186, "index_size": 41442, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25669, "raw_key_size": 269755, "raw_average_key_size": 26, "raw_value_size": 14501102, "raw_average_value_size": 1415, "num_data_blocks": 1589, "num_entries": 10243, "num_filter_entries": 10243, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759411401, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 165, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:23:21 np0005466030 nova_compute[230518]: 2025-10-02 13:23:21.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.945431) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 14750365 bytes
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.947296) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 106.3 rd, 105.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 11.5 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(10.3) write-amplify(5.1) OK, records in: 10778, records dropped: 535 output_compression: NoCompression
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.947315) EVENT_LOG_v1 {"time_micros": 1759411401947306, "job": 104, "event": "compaction_finished", "compaction_time_micros": 140046, "compaction_time_cpu_micros": 48258, "output_level": 6, "num_output_files": 1, "total_output_size": 14750365, "num_input_records": 10778, "num_output_records": 10243, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401948448, "job": 104, "event": "table_file_deletion", "file_number": 164}
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000162.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401950510, "job": 104, "event": "table_file_deletion", "file_number": 162}
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.805027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.950690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.950696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.950699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.950701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:23:21 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.950703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:23:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:23.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:23.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:24 np0005466030 nova_compute[230518]: 2025-10-02 13:23:24.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:25.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:25.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:23:25.981 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:23:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:23:25.982 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:23:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:23:25.982 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:23:26 np0005466030 nova_compute[230518]: 2025-10-02 13:23:26.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:27.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:27.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:28 np0005466030 podman[316225]: 2025-10-02 13:23:28.806057997 +0000 UTC m=+0.051823777 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct  2 09:23:28 np0005466030 podman[316224]: 2025-10-02 13:23:28.852455452 +0000 UTC m=+0.093000458 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:23:29 np0005466030 nova_compute[230518]: 2025-10-02 13:23:29.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:23:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:29.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:23:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:29.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:23:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:31.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:23:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:23:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:31.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:23:31 np0005466030 nova_compute[230518]: 2025-10-02 13:23:31.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:33.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:33.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:34 np0005466030 nova_compute[230518]: 2025-10-02 13:23:34.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:23:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:35 np0005466030 nova_compute[230518]: 2025-10-02 13:23:35.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:23:35 np0005466030 nova_compute[230518]: 2025-10-02 13:23:35.075 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:23:35 np0005466030 nova_compute[230518]: 2025-10-02 13:23:35.075 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:23:35 np0005466030 nova_compute[230518]: 2025-10-02 13:23:35.075 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:23:35 np0005466030 nova_compute[230518]: 2025-10-02 13:23:35.075 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:23:35 np0005466030 nova_compute[230518]: 2025-10-02 13:23:35.076 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:23:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:23:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:35.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:23:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:35.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:23:35 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3013848652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:23:35 np0005466030 nova_compute[230518]: 2025-10-02 13:23:35.587 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:23:35 np0005466030 nova_compute[230518]: 2025-10-02 13:23:35.728 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:23:35 np0005466030 nova_compute[230518]: 2025-10-02 13:23:35.729 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4196MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:23:35 np0005466030 nova_compute[230518]: 2025-10-02 13:23:35.730 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:23:35 np0005466030 nova_compute[230518]: 2025-10-02 13:23:35.730 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:23:35 np0005466030 nova_compute[230518]: 2025-10-02 13:23:35.810 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:23:35 np0005466030 nova_compute[230518]: 2025-10-02 13:23:35.811 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:23:35 np0005466030 nova_compute[230518]: 2025-10-02 13:23:35.826 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:23:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:23:36 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3858995251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:23:36 np0005466030 nova_compute[230518]: 2025-10-02 13:23:36.237 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  2 09:23:36 np0005466030 nova_compute[230518]: 2025-10-02 13:23:36.243 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  2 09:23:36 np0005466030 nova_compute[230518]: 2025-10-02 13:23:36.257 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  2 09:23:36 np0005466030 nova_compute[230518]: 2025-10-02 13:23:36.258 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  2 09:23:36 np0005466030 nova_compute[230518]: 2025-10-02 13:23:36.259 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:23:36 np0005466030 nova_compute[230518]: 2025-10-02 13:23:36.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:23:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:37.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:37.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:39 np0005466030 nova_compute[230518]: 2025-10-02 13:23:39.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:23:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:39.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:39 np0005466030 nova_compute[230518]: 2025-10-02 13:23:39.259 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:23:39 np0005466030 nova_compute[230518]: 2025-10-02 13:23:39.260 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  2 09:23:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:39.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:41 np0005466030 nova_compute[230518]: 2025-10-02 13:23:41.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:23:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:41.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:23:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:41.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:23:41 np0005466030 nova_compute[230518]: 2025-10-02 13:23:41.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:23:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:43.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:43.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:44 np0005466030 nova_compute[230518]: 2025-10-02 13:23:44.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:23:44 np0005466030 nova_compute[230518]: 2025-10-02 13:23:44.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:23:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:45 np0005466030 nova_compute[230518]: 2025-10-02 13:23:45.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:23:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:45.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:45.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:46 np0005466030 nova_compute[230518]: 2025-10-02 13:23:46.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:23:46 np0005466030 podman[316332]: 2025-10-02 13:23:46.991339706 +0000 UTC m=+0.078870845 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  2 09:23:46 np0005466030 podman[316331]: 2025-10-02 13:23:46.991505921 +0000 UTC m=+0.082010983 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 09:23:47 np0005466030 nova_compute[230518]: 2025-10-02 13:23:47.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:23:47 np0005466030 nova_compute[230518]: 2025-10-02 13:23:47.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:23:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:47.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:23:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:47.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:23:47 np0005466030 podman[316521]: 2025-10-02 13:23:47.604479579 +0000 UTC m=+0.063221065 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  2 09:23:47 np0005466030 podman[316521]: 2025-10-02 13:23:47.699376446 +0000 UTC m=+0.158117912 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  2 09:23:49 np0005466030 nova_compute[230518]: 2025-10-02 13:23:49.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:23:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:23:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:49.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:23:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:23:49 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:23:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:49.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:50 np0005466030 nova_compute[230518]: 2025-10-02 13:23:50.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:23:50 np0005466030 nova_compute[230518]: 2025-10-02 13:23:50.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  2 09:23:50 np0005466030 nova_compute[230518]: 2025-10-02 13:23:50.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  2 09:23:50 np0005466030 nova_compute[230518]: 2025-10-02 13:23:50.069 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  2 09:23:50 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:23:50 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:23:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:51.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:23:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:23:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:23:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:51.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:51 np0005466030 nova_compute[230518]: 2025-10-02 13:23:51.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:23:52 np0005466030 nova_compute[230518]: 2025-10-02 13:23:52.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:23:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:23:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:53.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:23:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:23:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:53.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:23:54 np0005466030 nova_compute[230518]: 2025-10-02 13:23:54.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:23:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:55.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:55.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:57 np0005466030 nova_compute[230518]: 2025-10-02 13:23:57.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:23:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:57.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:57.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:59 np0005466030 nova_compute[230518]: 2025-10-02 13:23:59.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:23:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:59.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:23:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:23:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:23:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:59.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:23:59 np0005466030 podman[316777]: 2025-10-02 13:23:59.804838625 +0000 UTC m=+0.050050781 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:23:59 np0005466030 podman[316776]: 2025-10-02 13:23:59.828675563 +0000 UTC m=+0.077167901 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct  2 09:24:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:24:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:24:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:01.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:01.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:02 np0005466030 nova_compute[230518]: 2025-10-02 13:24:02.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:24:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:03.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:03.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:04 np0005466030 nova_compute[230518]: 2025-10-02 13:24:04.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:24:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:05.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:24:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3979355912' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:24:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:24:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3979355912' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:24:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:05.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:07 np0005466030 nova_compute[230518]: 2025-10-02 13:24:07.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:07.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:07.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:09 np0005466030 nova_compute[230518]: 2025-10-02 13:24:09.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:09.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:09.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #166. Immutable memtables: 0.
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.057503) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 166
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450057526, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 769, "num_deletes": 251, "total_data_size": 1410515, "memory_usage": 1426800, "flush_reason": "Manual Compaction"}
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #167: started
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450063368, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 167, "file_size": 920236, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80923, "largest_seqno": 81687, "table_properties": {"data_size": 916494, "index_size": 1521, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8595, "raw_average_key_size": 19, "raw_value_size": 909023, "raw_average_value_size": 2080, "num_data_blocks": 66, "num_entries": 437, "num_filter_entries": 437, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411402, "oldest_key_time": 1759411402, "file_creation_time": 1759411450, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 5896 microseconds, and 2904 cpu microseconds.
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.063397) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #167: 920236 bytes OK
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.063414) [db/memtable_list.cc:519] [default] Level-0 commit table #167 started
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.064353) [db/memtable_list.cc:722] [default] Level-0 commit table #167: memtable #1 done
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.064364) EVENT_LOG_v1 {"time_micros": 1759411450064361, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.064378) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 1406429, prev total WAL file size 1406429, number of live WAL files 2.
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000163.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.064915) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [167(898KB)], [165(14MB)]
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450064940, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [167], "files_L6": [165], "score": -1, "input_data_size": 15670601, "oldest_snapshot_seqno": -1}
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #168: 10163 keys, 13669009 bytes, temperature: kUnknown
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450151245, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 168, "file_size": 13669009, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13602511, "index_size": 40017, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25413, "raw_key_size": 268780, "raw_average_key_size": 26, "raw_value_size": 13423792, "raw_average_value_size": 1320, "num_data_blocks": 1523, "num_entries": 10163, "num_filter_entries": 10163, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759411450, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 168, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.151609) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 13669009 bytes
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.153210) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.3 rd, 158.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 14.1 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(31.9) write-amplify(14.9) OK, records in: 10680, records dropped: 517 output_compression: NoCompression
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.153257) EVENT_LOG_v1 {"time_micros": 1759411450153221, "job": 106, "event": "compaction_finished", "compaction_time_micros": 86425, "compaction_time_cpu_micros": 29734, "output_level": 6, "num_output_files": 1, "total_output_size": 13669009, "num_input_records": 10680, "num_output_records": 10163, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450153629, "job": 106, "event": "table_file_deletion", "file_number": 167}
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000165.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450157396, "job": 106, "event": "table_file_deletion", "file_number": 165}
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.064842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.157442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.157448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.157450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.157452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:24:10 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.157453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:24:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:11.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:11.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:12 np0005466030 nova_compute[230518]: 2025-10-02 13:24:12.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:13.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:13.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:14 np0005466030 nova_compute[230518]: 2025-10-02 13:24:14.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:15.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:15.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e408 e408: 3 total, 3 up, 3 in
Oct  2 09:24:17 np0005466030 nova_compute[230518]: 2025-10-02 13:24:17.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:17.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:24:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:17.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:24:17 np0005466030 podman[316869]: 2025-10-02 13:24:17.808146145 +0000 UTC m=+0.054728058 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:24:17 np0005466030 podman[316868]: 2025-10-02 13:24:17.826259523 +0000 UTC m=+0.077415869 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 09:24:19 np0005466030 nova_compute[230518]: 2025-10-02 13:24:19.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e409 e409: 3 total, 3 up, 3 in
Oct  2 09:24:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:19.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:24:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:19.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:24:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e410 e410: 3 total, 3 up, 3 in
Oct  2 09:24:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:21.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:21.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:22 np0005466030 nova_compute[230518]: 2025-10-02 13:24:22.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:23.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:23.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:24 np0005466030 nova_compute[230518]: 2025-10-02 13:24:24.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:25.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:25.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:24:25.982 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:24:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:24:25.983 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:24:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:24:25.983 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:24:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 e411: 3 total, 3 up, 3 in
Oct  2 09:24:27 np0005466030 nova_compute[230518]: 2025-10-02 13:24:27.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:27.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:27.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:24:29.068 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=79, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=78) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:24:29 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:24:29.069 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:24:29 np0005466030 nova_compute[230518]: 2025-10-02 13:24:29.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:29 np0005466030 nova_compute[230518]: 2025-10-02 13:24:29.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:29.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:29.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:30 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:24:30.071 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '79'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:24:30 np0005466030 podman[316913]: 2025-10-02 13:24:30.794159928 +0000 UTC m=+0.052051533 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0)
Oct  2 09:24:30 np0005466030 podman[316914]: 2025-10-02 13:24:30.7999447 +0000 UTC m=+0.053759248 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:24:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:31.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:31.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:32 np0005466030 nova_compute[230518]: 2025-10-02 13:24:32.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:33.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:24:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:33.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:24:34 np0005466030 nova_compute[230518]: 2025-10-02 13:24:34.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:35.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:24:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:35.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:24:37 np0005466030 nova_compute[230518]: 2025-10-02 13:24:37.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:37 np0005466030 nova_compute[230518]: 2025-10-02 13:24:37.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:37 np0005466030 nova_compute[230518]: 2025-10-02 13:24:37.099 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:24:37 np0005466030 nova_compute[230518]: 2025-10-02 13:24:37.100 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:24:37 np0005466030 nova_compute[230518]: 2025-10-02 13:24:37.100 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:24:37 np0005466030 nova_compute[230518]: 2025-10-02 13:24:37.100 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:24:37 np0005466030 nova_compute[230518]: 2025-10-02 13:24:37.101 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:24:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:37.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:24:37 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3425334544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:24:37 np0005466030 nova_compute[230518]: 2025-10-02 13:24:37.543 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:24:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:37.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:37 np0005466030 nova_compute[230518]: 2025-10-02 13:24:37.711 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:24:37 np0005466030 nova_compute[230518]: 2025-10-02 13:24:37.713 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4172MB free_disk=20.94255828857422GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:24:37 np0005466030 nova_compute[230518]: 2025-10-02 13:24:37.713 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:24:37 np0005466030 nova_compute[230518]: 2025-10-02 13:24:37.714 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:24:37 np0005466030 nova_compute[230518]: 2025-10-02 13:24:37.857 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:24:37 np0005466030 nova_compute[230518]: 2025-10-02 13:24:37.858 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:24:37 np0005466030 nova_compute[230518]: 2025-10-02 13:24:37.873 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:24:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:24:38 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/25163837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:24:38 np0005466030 nova_compute[230518]: 2025-10-02 13:24:38.336 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:24:38 np0005466030 nova_compute[230518]: 2025-10-02 13:24:38.343 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:24:38 np0005466030 nova_compute[230518]: 2025-10-02 13:24:38.359 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:24:38 np0005466030 nova_compute[230518]: 2025-10-02 13:24:38.361 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:24:38 np0005466030 nova_compute[230518]: 2025-10-02 13:24:38.361 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:24:39 np0005466030 nova_compute[230518]: 2025-10-02 13:24:39.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:39.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:39.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:40 np0005466030 nova_compute[230518]: 2025-10-02 13:24:40.363 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:40 np0005466030 nova_compute[230518]: 2025-10-02 13:24:40.363 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:24:41 np0005466030 nova_compute[230518]: 2025-10-02 13:24:41.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:24:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:41.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:24:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:41.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:42 np0005466030 nova_compute[230518]: 2025-10-02 13:24:42.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:43.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:24:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:43.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:24:44 np0005466030 nova_compute[230518]: 2025-10-02 13:24:44.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:44 np0005466030 nova_compute[230518]: 2025-10-02 13:24:44.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:45.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:45.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:46 np0005466030 nova_compute[230518]: 2025-10-02 13:24:46.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:47 np0005466030 nova_compute[230518]: 2025-10-02 13:24:47.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:47.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:24:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:47.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:24:48 np0005466030 nova_compute[230518]: 2025-10-02 13:24:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:48 np0005466030 podman[316999]: 2025-10-02 13:24:48.789951822 +0000 UTC m=+0.043968030 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  2 09:24:48 np0005466030 podman[316998]: 2025-10-02 13:24:48.812926723 +0000 UTC m=+0.071236486 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct  2 09:24:49 np0005466030 nova_compute[230518]: 2025-10-02 13:24:49.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:49 np0005466030 nova_compute[230518]: 2025-10-02 13:24:49.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:49.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:49.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:51.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:51.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:52 np0005466030 nova_compute[230518]: 2025-10-02 13:24:52.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:52 np0005466030 nova_compute[230518]: 2025-10-02 13:24:52.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:52 np0005466030 nova_compute[230518]: 2025-10-02 13:24:52.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:24:52 np0005466030 nova_compute[230518]: 2025-10-02 13:24:52.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:24:52 np0005466030 nova_compute[230518]: 2025-10-02 13:24:52.066 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:24:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:24:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:53.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:24:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:24:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:53.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:24:54 np0005466030 nova_compute[230518]: 2025-10-02 13:24:54.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:54 np0005466030 nova_compute[230518]: 2025-10-02 13:24:54.068 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:24:54 np0005466030 nova_compute[230518]: 2025-10-02 13:24:54.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:55.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:24:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:55.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:24:57 np0005466030 nova_compute[230518]: 2025-10-02 13:24:57.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:57.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:57.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:59 np0005466030 nova_compute[230518]: 2025-10-02 13:24:59.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:24:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:59.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:24:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:24:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:24:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:24:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:59.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:01 np0005466030 podman[317174]: 2025-10-02 13:25:01.3001861 +0000 UTC m=+0.057397531 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:25:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:01.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:01 np0005466030 podman[317173]: 2025-10-02 13:25:01.333422702 +0000 UTC m=+0.089628432 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:25:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:25:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:01.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:25:02 np0005466030 nova_compute[230518]: 2025-10-02 13:25:02.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:02 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:25:02 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:25:02 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:25:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:25:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:03.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:25:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:25:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:03.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:25:04 np0005466030 nova_compute[230518]: 2025-10-02 13:25:04.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:25:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/674427580' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:25:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:25:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/674427580' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:25:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:25:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:05.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:25:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:25:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:05.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:25:07 np0005466030 nova_compute[230518]: 2025-10-02 13:25:07.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:07.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:07.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:09 np0005466030 nova_compute[230518]: 2025-10-02 13:25:09.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:09.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:09.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:25:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:25:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:11.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:11.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:12 np0005466030 nova_compute[230518]: 2025-10-02 13:25:12.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e412 e412: 3 total, 3 up, 3 in
Oct  2 09:25:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:13.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:13.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:14 np0005466030 nova_compute[230518]: 2025-10-02 13:25:14.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e413 e413: 3 total, 3 up, 3 in
Oct  2 09:25:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e414 e414: 3 total, 3 up, 3 in
Oct  2 09:25:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:15.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:15.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:17 np0005466030 nova_compute[230518]: 2025-10-02 13:25:17.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:17.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:17.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e415 e415: 3 total, 3 up, 3 in
Oct  2 09:25:19 np0005466030 nova_compute[230518]: 2025-10-02 13:25:19.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:19.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:25:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:19.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:25:19 np0005466030 podman[317264]: 2025-10-02 13:25:19.799337034 +0000 UTC m=+0.051769965 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct  2 09:25:19 np0005466030 podman[317263]: 2025-10-02 13:25:19.826383183 +0000 UTC m=+0.083261443 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Oct  2 09:25:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:25:20.077 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=80, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=79) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:25:20 np0005466030 nova_compute[230518]: 2025-10-02 13:25:20.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:20 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:25:20.078 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:25:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:21.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:25:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:21.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:25:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e416 e416: 3 total, 3 up, 3 in
Oct  2 09:25:22 np0005466030 nova_compute[230518]: 2025-10-02 13:25:22.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:23.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:23.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:24 np0005466030 nova_compute[230518]: 2025-10-02 13:25:24.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:25:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:25.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:25:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:25.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:25:25.984 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:25:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:25:25.985 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:25:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:25:25.985 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:25:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e417 e417: 3 total, 3 up, 3 in
Oct  2 09:25:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e418 e418: 3 total, 3 up, 3 in
Oct  2 09:25:27 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:25:27.079 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '80'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:25:27 np0005466030 nova_compute[230518]: 2025-10-02 13:25:27.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:27.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:27.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:29 np0005466030 nova_compute[230518]: 2025-10-02 13:25:29.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:29.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:29.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:31.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:31.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:31 np0005466030 podman[317309]: 2025-10-02 13:25:31.803311163 +0000 UTC m=+0.056726590 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:25:31 np0005466030 podman[317310]: 2025-10-02 13:25:31.812231983 +0000 UTC m=+0.062778250 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:25:32 np0005466030 nova_compute[230518]: 2025-10-02 13:25:32.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:25:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:33.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:25:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:33.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:34 np0005466030 nova_compute[230518]: 2025-10-02 13:25:34.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:35.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:35.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 e419: 3 total, 3 up, 3 in
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #169. Immutable memtables: 0.
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:36.933218) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 169
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411536933264, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1232, "num_deletes": 254, "total_data_size": 2504437, "memory_usage": 2548280, "flush_reason": "Manual Compaction"}
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #170: started
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411536941418, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 170, "file_size": 1085387, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81692, "largest_seqno": 82919, "table_properties": {"data_size": 1080869, "index_size": 2041, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11767, "raw_average_key_size": 21, "raw_value_size": 1071177, "raw_average_value_size": 1933, "num_data_blocks": 90, "num_entries": 554, "num_filter_entries": 554, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411451, "oldest_key_time": 1759411451, "file_creation_time": 1759411536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 8253 microseconds, and 4995 cpu microseconds.
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:36.941468) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #170: 1085387 bytes OK
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:36.941486) [db/memtable_list.cc:519] [default] Level-0 commit table #170 started
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:36.942926) [db/memtable_list.cc:722] [default] Level-0 commit table #170: memtable #1 done
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:36.942942) EVENT_LOG_v1 {"time_micros": 1759411536942938, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:36.942960) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 2498487, prev total WAL file size 2498487, number of live WAL files 2.
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000166.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:36.943801) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373538' seq:72057594037927935, type:22 .. '6D6772737461740033303039' seq:0, type:0; will stop at (end)
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [170(1059KB)], [168(13MB)]
Oct  2 09:25:36 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411536943848, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [170], "files_L6": [168], "score": -1, "input_data_size": 14754396, "oldest_snapshot_seqno": -1}
Oct  2 09:25:37 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #171: 10231 keys, 11465759 bytes, temperature: kUnknown
Oct  2 09:25:37 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411537016579, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 171, "file_size": 11465759, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11402093, "index_size": 36993, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25605, "raw_key_size": 270469, "raw_average_key_size": 26, "raw_value_size": 11225547, "raw_average_value_size": 1097, "num_data_blocks": 1398, "num_entries": 10231, "num_filter_entries": 10231, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759411536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 171, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:25:37 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:25:37 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:37.016865) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 11465759 bytes
Oct  2 09:25:37 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:37.019162) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.6 rd, 157.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.0 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(24.2) write-amplify(10.6) OK, records in: 10717, records dropped: 486 output_compression: NoCompression
Oct  2 09:25:37 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:37.019178) EVENT_LOG_v1 {"time_micros": 1759411537019171, "job": 108, "event": "compaction_finished", "compaction_time_micros": 72817, "compaction_time_cpu_micros": 27631, "output_level": 6, "num_output_files": 1, "total_output_size": 11465759, "num_input_records": 10717, "num_output_records": 10231, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:25:37 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:25:37 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411537019472, "job": 108, "event": "table_file_deletion", "file_number": 170}
Oct  2 09:25:37 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000168.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:25:37 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411537021827, "job": 108, "event": "table_file_deletion", "file_number": 168}
Oct  2 09:25:37 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:36.943708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:25:37 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:37.021923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:25:37 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:37.021929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:25:37 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:37.021931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:25:37 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:37.021933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:25:37 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:37.021934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:25:37 np0005466030 nova_compute[230518]: 2025-10-02 13:25:37.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:37.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:37.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:39 np0005466030 nova_compute[230518]: 2025-10-02 13:25:39.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:39 np0005466030 nova_compute[230518]: 2025-10-02 13:25:39.076 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:25:39 np0005466030 nova_compute[230518]: 2025-10-02 13:25:39.076 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:25:39 np0005466030 nova_compute[230518]: 2025-10-02 13:25:39.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:25:39 np0005466030 nova_compute[230518]: 2025-10-02 13:25:39.077 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:25:39 np0005466030 nova_compute[230518]: 2025-10-02 13:25:39.077 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:25:39 np0005466030 nova_compute[230518]: 2025-10-02 13:25:39.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:25:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:39.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:25:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:25:39 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/881232608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:25:39 np0005466030 nova_compute[230518]: 2025-10-02 13:25:39.551 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:25:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:39.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:25:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 69K writes, 270K keys, 69K commit groups, 1.0 writes per commit group, ingest: 0.26 GB, 0.04 MB/s#012Cumulative WAL: 69K writes, 25K syncs, 2.70 writes per sync, written: 0.26 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2949 writes, 9921 keys, 2949 commit groups, 1.0 writes per commit group, ingest: 9.42 MB, 0.02 MB/s#012Interval WAL: 2949 writes, 1228 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Oct  2 09:25:39 np0005466030 nova_compute[230518]: 2025-10-02 13:25:39.749 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:25:39 np0005466030 nova_compute[230518]: 2025-10-02 13:25:39.751 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4216MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:25:39 np0005466030 nova_compute[230518]: 2025-10-02 13:25:39.751 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:25:39 np0005466030 nova_compute[230518]: 2025-10-02 13:25:39.751 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:25:39 np0005466030 nova_compute[230518]: 2025-10-02 13:25:39.850 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:25:39 np0005466030 nova_compute[230518]: 2025-10-02 13:25:39.851 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:25:39 np0005466030 nova_compute[230518]: 2025-10-02 13:25:39.868 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:25:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:25:40 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2823375801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:25:40 np0005466030 nova_compute[230518]: 2025-10-02 13:25:40.315 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:25:40 np0005466030 nova_compute[230518]: 2025-10-02 13:25:40.322 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:25:40 np0005466030 nova_compute[230518]: 2025-10-02 13:25:40.340 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:25:40 np0005466030 nova_compute[230518]: 2025-10-02 13:25:40.342 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:25:40 np0005466030 nova_compute[230518]: 2025-10-02 13:25:40.342 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:25:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:41.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:25:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:41.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:25:42 np0005466030 nova_compute[230518]: 2025-10-02 13:25:42.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:42 np0005466030 nova_compute[230518]: 2025-10-02 13:25:42.343 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:42 np0005466030 nova_compute[230518]: 2025-10-02 13:25:42.344 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:25:43 np0005466030 nova_compute[230518]: 2025-10-02 13:25:43.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:43.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:25:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:43.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:25:44 np0005466030 nova_compute[230518]: 2025-10-02 13:25:44.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:45 np0005466030 nova_compute[230518]: 2025-10-02 13:25:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:45.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:45.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:47 np0005466030 nova_compute[230518]: 2025-10-02 13:25:47.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:47.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:25:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:47.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:25:48 np0005466030 nova_compute[230518]: 2025-10-02 13:25:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:49 np0005466030 nova_compute[230518]: 2025-10-02 13:25:49.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:49.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:49.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:50 np0005466030 nova_compute[230518]: 2025-10-02 13:25:50.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:50 np0005466030 podman[317394]: 2025-10-02 13:25:50.797071861 +0000 UTC m=+0.046450618 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:25:50 np0005466030 podman[317393]: 2025-10-02 13:25:50.825114331 +0000 UTC m=+0.077558154 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct  2 09:25:51 np0005466030 nova_compute[230518]: 2025-10-02 13:25:51.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:51.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:51.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:52 np0005466030 nova_compute[230518]: 2025-10-02 13:25:52.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:53 np0005466030 nova_compute[230518]: 2025-10-02 13:25:53.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:53 np0005466030 nova_compute[230518]: 2025-10-02 13:25:53.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:25:53 np0005466030 nova_compute[230518]: 2025-10-02 13:25:53.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:25:53 np0005466030 nova_compute[230518]: 2025-10-02 13:25:53.066 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:25:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:53.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:53.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:54 np0005466030 nova_compute[230518]: 2025-10-02 13:25:54.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:55.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:55.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:56 np0005466030 nova_compute[230518]: 2025-10-02 13:25:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:25:57 np0005466030 nova_compute[230518]: 2025-10-02 13:25:57.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:57.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:25:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:57.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:25:59 np0005466030 nova_compute[230518]: 2025-10-02 13:25:59.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:25:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:25:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:59.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:25:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:25:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:25:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:25:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:59.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:26:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:26:00.368 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=81, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=80) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:26:00 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:26:00.369 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:26:00 np0005466030 nova_compute[230518]: 2025-10-02 13:26:00.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:01 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:26:01.372 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '81'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:26:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:01.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:01.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:02 np0005466030 nova_compute[230518]: 2025-10-02 13:26:02.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:02 np0005466030 podman[317437]: 2025-10-02 13:26:02.797312534 +0000 UTC m=+0.054501590 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:26:02 np0005466030 podman[317438]: 2025-10-02 13:26:02.803972613 +0000 UTC m=+0.052600521 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 09:26:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:03.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:03.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.053 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.053 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.055 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.055 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.055 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.055 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.072 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.079 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.080 2 WARNING nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.080 2 WARNING nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.080 2 WARNING nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.080 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Removable base files: /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.080 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.080 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.080 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.081 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.081 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.081 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.081 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Oct  2 09:26:04 np0005466030 nova_compute[230518]: 2025-10-02 13:26:04.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:26:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:05.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:05.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:07 np0005466030 nova_compute[230518]: 2025-10-02 13:26:07.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:26:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:07.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:07.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:09 np0005466030 nova_compute[230518]: 2025-10-02 13:26:09.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:26:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:09.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:09.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:26:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.0 total, 600.0 interval
Cumulative writes: 16K writes, 83K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.17 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1540 writes, 7953 keys, 1540 commit groups, 1.0 writes per commit group, ingest: 16.14 MB, 0.03 MB/s
Interval WAL: 1540 writes, 1540 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     57.5      1.81              0.31        54    0.033       0      0       0.0       0.0
  L6      1/0   10.93 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.3    119.7    102.4      5.37              1.70        53    0.101    401K    28K       0.0       0.0
 Sum      1/0   10.93 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.3     89.6     91.1      7.18              2.01       107    0.067    401K    28K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.2    117.0    116.3      0.85              0.31        14    0.060     73K   3633       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    119.7    102.4      5.37              1.70        53    0.101    401K    28K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     57.5      1.80              0.31        53    0.034       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.0 total, 600.0 interval
Flush(GB): cumulative 0.101, interval 0.012
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.64 GB write, 0.11 MB/s write, 0.63 GB read, 0.11 MB/s read, 7.2 seconds
Interval compaction: 0.10 GB write, 0.16 MB/s write, 0.10 GB read, 0.17 MB/s read, 0.8 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 68.33 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000429 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3910,65.44 MB,21.5273%) FilterBlock(107,1.08 MB,0.3544%) IndexBlock(107,1.81 MB,0.595133%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct  2 09:26:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:26:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:11.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:26:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:11.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:12 np0005466030 nova_compute[230518]: 2025-10-02 13:26:12.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:26:12 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:26:12 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:26:12 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:26:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:13.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:13.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:14 np0005466030 nova_compute[230518]: 2025-10-02 13:26:14.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:26:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:15.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:15.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:17 np0005466030 nova_compute[230518]: 2025-10-02 13:26:17.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:26:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:17.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:17.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:26:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:26:19 np0005466030 nova_compute[230518]: 2025-10-02 13:26:19.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:26:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:19.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:26:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:19.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:26:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:26:20 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/119302741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:26:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:21.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:21.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:21 np0005466030 podman[317656]: 2025-10-02 13:26:21.805255207 +0000 UTC m=+0.054161790 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct  2 09:26:21 np0005466030 podman[317655]: 2025-10-02 13:26:21.824780149 +0000 UTC m=+0.076732658 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:26:22 np0005466030 nova_compute[230518]: 2025-10-02 13:26:22.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:26:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:23.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:23.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:24 np0005466030 nova_compute[230518]: 2025-10-02 13:26:24.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:26:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:26:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:25.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:26:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:25.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:26:25.986 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  2 09:26:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:26:25.986 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  2 09:26:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:26:25.986 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  2 09:26:27 np0005466030 nova_compute[230518]: 2025-10-02 13:26:27.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:26:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:27.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:26:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:27.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:26:29 np0005466030 nova_compute[230518]: 2025-10-02 13:26:29.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:26:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:29.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:26:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:29.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:26:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:26:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:31.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:26:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:31.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:32 np0005466030 nova_compute[230518]: 2025-10-02 13:26:32.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  2 09:26:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:33.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:33.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:33 np0005466030 podman[317698]: 2025-10-02 13:26:33.815172803 +0000 UTC m=+0.058364612 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:26:33 np0005466030 podman[317699]: 2025-10-02 13:26:33.840386344 +0000 UTC m=+0.085148572 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001)
Oct  2 09:26:34 np0005466030 nova_compute[230518]: 2025-10-02 13:26:34.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:35.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:35.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:37 np0005466030 nova_compute[230518]: 2025-10-02 13:26:37.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:37.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:37.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:39 np0005466030 nova_compute[230518]: 2025-10-02 13:26:39.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:26:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:39.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:26:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:39.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.082 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.082 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.083 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.114 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.115 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.115 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.115 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.116 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:26:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:41.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:26:41 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3229219732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.574 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.757 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.758 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4218MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.759 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.759 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:26:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:41.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.855 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.856 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.875 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.946 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.947 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.965 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:26:41 np0005466030 nova_compute[230518]: 2025-10-02 13:26:41.985 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:26:42 np0005466030 nova_compute[230518]: 2025-10-02 13:26:42.009 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:26:42 np0005466030 nova_compute[230518]: 2025-10-02 13:26:42.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:26:42 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/298458676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:26:42 np0005466030 nova_compute[230518]: 2025-10-02 13:26:42.576 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:26:42 np0005466030 nova_compute[230518]: 2025-10-02 13:26:42.581 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:26:42 np0005466030 nova_compute[230518]: 2025-10-02 13:26:42.603 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:26:42 np0005466030 nova_compute[230518]: 2025-10-02 13:26:42.605 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:26:42 np0005466030 nova_compute[230518]: 2025-10-02 13:26:42.606 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:26:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:43.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:26:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:43.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:26:44 np0005466030 nova_compute[230518]: 2025-10-02 13:26:44.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:44 np0005466030 nova_compute[230518]: 2025-10-02 13:26:44.576 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:45.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:26:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:45.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:26:46 np0005466030 nova_compute[230518]: 2025-10-02 13:26:46.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:47 np0005466030 nova_compute[230518]: 2025-10-02 13:26:47.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:47.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:47.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:49 np0005466030 nova_compute[230518]: 2025-10-02 13:26:49.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:49 np0005466030 nova_compute[230518]: 2025-10-02 13:26:49.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:49.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:26:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:49.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:26:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e420 e420: 3 total, 3 up, 3 in
Oct  2 09:26:51 np0005466030 nova_compute[230518]: 2025-10-02 13:26:51.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:51.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:26:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:51.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:26:52 np0005466030 nova_compute[230518]: 2025-10-02 13:26:52.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:52 np0005466030 nova_compute[230518]: 2025-10-02 13:26:52.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:52 np0005466030 podman[317784]: 2025-10-02 13:26:52.819525351 +0000 UTC m=+0.059533149 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  2 09:26:52 np0005466030 podman[317783]: 2025-10-02 13:26:52.869057095 +0000 UTC m=+0.120544173 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 09:26:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:53.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:53.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:54 np0005466030 nova_compute[230518]: 2025-10-02 13:26:54.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:54 np0005466030 nova_compute[230518]: 2025-10-02 13:26:54.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:26:54 np0005466030 nova_compute[230518]: 2025-10-02 13:26:54.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:26:54 np0005466030 nova_compute[230518]: 2025-10-02 13:26:54.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:54 np0005466030 nova_compute[230518]: 2025-10-02 13:26:54.779 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:26:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:26:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:55.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:26:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:26:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:55.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:26:56 np0005466030 nova_compute[230518]: 2025-10-02 13:26:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:57 np0005466030 nova_compute[230518]: 2025-10-02 13:26:57.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:57 np0005466030 nova_compute[230518]: 2025-10-02 13:26:57.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:26:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:57.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:26:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:57.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:58 np0005466030 nova_compute[230518]: 2025-10-02 13:26:58.250 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:58 np0005466030 nova_compute[230518]: 2025-10-02 13:26:58.699 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:26:58 np0005466030 nova_compute[230518]: 2025-10-02 13:26:58.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:26:58.877 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=82, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=81) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:26:58 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:26:58.879 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:26:59 np0005466030 nova_compute[230518]: 2025-10-02 13:26:59.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:26:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:26:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:59.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:26:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:26:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:26:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:59.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:01 np0005466030 nova_compute[230518]: 2025-10-02 13:27:01.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:01 np0005466030 nova_compute[230518]: 2025-10-02 13:27:01.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:27:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:01.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:01.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:02 np0005466030 nova_compute[230518]: 2025-10-02 13:27:02.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:27:02 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3595277946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:27:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:27:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:03.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:27:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:27:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:03.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:27:04 np0005466030 nova_compute[230518]: 2025-10-02 13:27:04.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:04 np0005466030 podman[317830]: 2025-10-02 13:27:04.804537657 +0000 UTC m=+0.053891512 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible)
Oct  2 09:27:04 np0005466030 podman[317829]: 2025-10-02 13:27:04.826083502 +0000 UTC m=+0.080564668 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct  2 09:27:04 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:27:04.882 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '82'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:27:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:27:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2957410666' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:27:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:27:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2957410666' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:27:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:05.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:05.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:07 np0005466030 nova_compute[230518]: 2025-10-02 13:27:07.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:27:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:07.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:27:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:07.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:09 np0005466030 nova_compute[230518]: 2025-10-02 13:27:09.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:09.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:09.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:11.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:11.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e421 e421: 3 total, 3 up, 3 in
Oct  2 09:27:12 np0005466030 nova_compute[230518]: 2025-10-02 13:27:12.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:13.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:13.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:14 np0005466030 nova_compute[230518]: 2025-10-02 13:27:14.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:15.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:15.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:17 np0005466030 nova_compute[230518]: 2025-10-02 13:27:17.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:27:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:17.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:27:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:27:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:17.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:27:19 np0005466030 nova_compute[230518]: 2025-10-02 13:27:19.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:19.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:27:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:19.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:27:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:27:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:27:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:27:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:27:21 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:27:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:27:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:21.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:27:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:21.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 e422: 3 total, 3 up, 3 in
Oct  2 09:27:22 np0005466030 nova_compute[230518]: 2025-10-02 13:27:22.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:23.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:23 np0005466030 podman[318002]: 2025-10-02 13:27:23.812130006 +0000 UTC m=+0.054007376 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  2 09:27:23 np0005466030 podman[318001]: 2025-10-02 13:27:23.830152921 +0000 UTC m=+0.080424894 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller)
Oct  2 09:27:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:27:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:23.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:27:24 np0005466030 nova_compute[230518]: 2025-10-02 13:27:24.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:27:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:25.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:27:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:25.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:27:25.987 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:27:25.988 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:27:25.988 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:27 np0005466030 nova_compute[230518]: 2025-10-02 13:27:27.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:27.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:27.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:29 np0005466030 nova_compute[230518]: 2025-10-02 13:27:29.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:29.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:29.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:31 np0005466030 nova_compute[230518]: 2025-10-02 13:27:31.066 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:31 np0005466030 nova_compute[230518]: 2025-10-02 13:27:31.066 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:27:31 np0005466030 nova_compute[230518]: 2025-10-02 13:27:31.086 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #172. Immutable memtables: 0.
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.179371) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 172
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651179410, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 1425, "num_deletes": 252, "total_data_size": 3130552, "memory_usage": 3173488, "flush_reason": "Manual Compaction"}
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #173: started
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651191112, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 173, "file_size": 2055343, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82924, "largest_seqno": 84344, "table_properties": {"data_size": 2049288, "index_size": 3321, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13355, "raw_average_key_size": 20, "raw_value_size": 2036904, "raw_average_value_size": 3086, "num_data_blocks": 146, "num_entries": 660, "num_filter_entries": 660, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411537, "oldest_key_time": 1759411537, "file_creation_time": 1759411651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 11843 microseconds, and 5501 cpu microseconds.
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.191155) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #173: 2055343 bytes OK
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.191232) [db/memtable_list.cc:519] [default] Level-0 commit table #173 started
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.192888) [db/memtable_list.cc:722] [default] Level-0 commit table #173: memtable #1 done
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.192902) EVENT_LOG_v1 {"time_micros": 1759411651192898, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.192921) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 3123841, prev total WAL file size 3123841, number of live WAL files 2.
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000169.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.193740) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [173(2007KB)], [171(10MB)]
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651193816, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [173], "files_L6": [171], "score": -1, "input_data_size": 13521102, "oldest_snapshot_seqno": -1}
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #174: 10368 keys, 11507754 bytes, temperature: kUnknown
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651243454, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 174, "file_size": 11507754, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11443201, "index_size": 37560, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25925, "raw_key_size": 274052, "raw_average_key_size": 26, "raw_value_size": 11264207, "raw_average_value_size": 1086, "num_data_blocks": 1416, "num_entries": 10368, "num_filter_entries": 10368, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759411651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 174, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.243706) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 11507754 bytes
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.244989) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 272.5 rd, 232.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 10.9 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(12.2) write-amplify(5.6) OK, records in: 10891, records dropped: 523 output_compression: NoCompression
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.245004) EVENT_LOG_v1 {"time_micros": 1759411651244997, "job": 110, "event": "compaction_finished", "compaction_time_micros": 49611, "compaction_time_cpu_micros": 28144, "output_level": 6, "num_output_files": 1, "total_output_size": 11507754, "num_input_records": 10891, "num_output_records": 10368, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651245724, "job": 110, "event": "table_file_deletion", "file_number": 173}
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000171.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651247892, "job": 110, "event": "table_file_deletion", "file_number": 171}
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.193637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.247994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.247998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.248000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.248001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:27:31 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.248002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:27:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:27:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:31.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:27:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:31.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:32 np0005466030 nova_compute[230518]: 2025-10-02 13:27:32.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:32 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:27:32 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:27:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:33.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:33.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:34 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:27:34 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:27:34 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:27:34 np0005466030 nova_compute[230518]: 2025-10-02 13:27:34.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:27:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:35.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:27:35 np0005466030 podman[318097]: 2025-10-02 13:27:35.795048916 +0000 UTC m=+0.049947648 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct  2 09:27:35 np0005466030 podman[318096]: 2025-10-02 13:27:35.819038088 +0000 UTC m=+0.075282982 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 09:27:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:35.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:37 np0005466030 nova_compute[230518]: 2025-10-02 13:27:37.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:37.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:37.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:39 np0005466030 nova_compute[230518]: 2025-10-02 13:27:39.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:27:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:39.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:27:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:39.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:27:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:27:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:41.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:27:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:41.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:27:42 np0005466030 nova_compute[230518]: 2025-10-02 13:27:42.073 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:42 np0005466030 nova_compute[230518]: 2025-10-02 13:27:42.100 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:42 np0005466030 nova_compute[230518]: 2025-10-02 13:27:42.101 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:42 np0005466030 nova_compute[230518]: 2025-10-02 13:27:42.101 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:42 np0005466030 nova_compute[230518]: 2025-10-02 13:27:42.101 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:27:42 np0005466030 nova_compute[230518]: 2025-10-02 13:27:42.101 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:42 np0005466030 nova_compute[230518]: 2025-10-02 13:27:42.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:27:42 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3220820876' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:27:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:27:42 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3497635110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:27:42 np0005466030 nova_compute[230518]: 2025-10-02 13:27:42.555 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:42 np0005466030 nova_compute[230518]: 2025-10-02 13:27:42.723 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:27:42 np0005466030 nova_compute[230518]: 2025-10-02 13:27:42.724 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4218MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:27:42 np0005466030 nova_compute[230518]: 2025-10-02 13:27:42.724 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:27:42 np0005466030 nova_compute[230518]: 2025-10-02 13:27:42.724 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:27:42 np0005466030 nova_compute[230518]: 2025-10-02 13:27:42.789 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:27:42 np0005466030 nova_compute[230518]: 2025-10-02 13:27:42.789 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:27:42 np0005466030 nova_compute[230518]: 2025-10-02 13:27:42.812 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:27:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:27:43 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/213874345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:27:43 np0005466030 nova_compute[230518]: 2025-10-02 13:27:43.263 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:27:43 np0005466030 nova_compute[230518]: 2025-10-02 13:27:43.268 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:27:43 np0005466030 nova_compute[230518]: 2025-10-02 13:27:43.282 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:27:43 np0005466030 nova_compute[230518]: 2025-10-02 13:27:43.286 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:27:43 np0005466030 nova_compute[230518]: 2025-10-02 13:27:43.286 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:27:43 np0005466030 nova_compute[230518]: 2025-10-02 13:27:43.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:27:43.322 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=83, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=82) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:27:43 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:27:43.322 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:27:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:43.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:43.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:44 np0005466030 nova_compute[230518]: 2025-10-02 13:27:44.266 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:44 np0005466030 nova_compute[230518]: 2025-10-02 13:27:44.266 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:44 np0005466030 nova_compute[230518]: 2025-10-02 13:27:44.266 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:27:44 np0005466030 nova_compute[230518]: 2025-10-02 13:27:44.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:27:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:45.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:27:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:27:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:45.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:27:47 np0005466030 nova_compute[230518]: 2025-10-02 13:27:47.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:47 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:27:47.324 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '83'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:27:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:27:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:47.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:27:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:27:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:47.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:27:48 np0005466030 nova_compute[230518]: 2025-10-02 13:27:48.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:49 np0005466030 nova_compute[230518]: 2025-10-02 13:27:49.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:49.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:49.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:51 np0005466030 nova_compute[230518]: 2025-10-02 13:27:51.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:27:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:51.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:27:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:27:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:51.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:27:52 np0005466030 nova_compute[230518]: 2025-10-02 13:27:52.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:52 np0005466030 nova_compute[230518]: 2025-10-02 13:27:52.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:53 np0005466030 nova_compute[230518]: 2025-10-02 13:27:53.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:53.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:53.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:54 np0005466030 nova_compute[230518]: 2025-10-02 13:27:54.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:54 np0005466030 podman[318230]: 2025-10-02 13:27:54.837073738 +0000 UTC m=+0.081251460 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 09:27:54 np0005466030 podman[318229]: 2025-10-02 13:27:54.846647669 +0000 UTC m=+0.095495667 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct  2 09:27:55 np0005466030 nova_compute[230518]: 2025-10-02 13:27:55.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:55 np0005466030 nova_compute[230518]: 2025-10-02 13:27:55.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:27:55 np0005466030 nova_compute[230518]: 2025-10-02 13:27:55.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:27:55 np0005466030 nova_compute[230518]: 2025-10-02 13:27:55.068 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:27:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:55.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:55.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:57 np0005466030 nova_compute[230518]: 2025-10-02 13:27:57.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:27:57 np0005466030 nova_compute[230518]: 2025-10-02 13:27:57.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:57.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:57.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:59 np0005466030 nova_compute[230518]: 2025-10-02 13:27:59.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:27:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:27:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:59.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:27:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:27:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:27:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:59.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:01.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:01.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:02 np0005466030 nova_compute[230518]: 2025-10-02 13:28:02.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:03.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:03.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:04 np0005466030 nova_compute[230518]: 2025-10-02 13:28:04.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:28:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/313675752' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:28:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:28:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/313675752' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:28:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:05.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:05.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:06 np0005466030 podman[318274]: 2025-10-02 13:28:06.82646455 +0000 UTC m=+0.072648770 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 09:28:06 np0005466030 podman[318273]: 2025-10-02 13:28:06.833590713 +0000 UTC m=+0.086967578 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:28:07 np0005466030 nova_compute[230518]: 2025-10-02 13:28:07.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:07.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:07.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:09 np0005466030 nova_compute[230518]: 2025-10-02 13:28:09.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:09.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:09.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:28:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:11.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:28:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:28:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:11.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:28:12 np0005466030 nova_compute[230518]: 2025-10-02 13:28:12.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:13.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:28:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:13.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:28:14 np0005466030 nova_compute[230518]: 2025-10-02 13:28:14.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 e423: 3 total, 3 up, 3 in
Oct  2 09:28:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:15.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:15.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:17 np0005466030 nova_compute[230518]: 2025-10-02 13:28:17.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:17.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:28:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:17.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:28:19 np0005466030 nova_compute[230518]: 2025-10-02 13:28:19.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:19.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:19.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:28:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:21.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:28:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:28:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:21.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:28:22 np0005466030 nova_compute[230518]: 2025-10-02 13:28:22.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:23 np0005466030 nova_compute[230518]: 2025-10-02 13:28:23.104 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:28:23 np0005466030 nova_compute[230518]: 2025-10-02 13:28:23.105 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:28:23 np0005466030 nova_compute[230518]: 2025-10-02 13:28:23.119 2 DEBUG nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:28:23 np0005466030 nova_compute[230518]: 2025-10-02 13:28:23.205 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:28:23 np0005466030 nova_compute[230518]: 2025-10-02 13:28:23.206 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:28:23 np0005466030 nova_compute[230518]: 2025-10-02 13:28:23.213 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:28:23 np0005466030 nova_compute[230518]: 2025-10-02 13:28:23.214 2 INFO nova.compute.claims [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:28:23 np0005466030 nova_compute[230518]: 2025-10-02 13:28:23.294 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:28:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:23.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:28:23 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1513027977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:28:23 np0005466030 nova_compute[230518]: 2025-10-02 13:28:23.710 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:28:23 np0005466030 nova_compute[230518]: 2025-10-02 13:28:23.717 2 DEBUG nova.compute.provider_tree [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:28:23 np0005466030 nova_compute[230518]: 2025-10-02 13:28:23.740 2 DEBUG nova.scheduler.client.report [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:28:23 np0005466030 nova_compute[230518]: 2025-10-02 13:28:23.783 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:28:23 np0005466030 nova_compute[230518]: 2025-10-02 13:28:23.784 2 DEBUG nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:28:23 np0005466030 nova_compute[230518]: 2025-10-02 13:28:23.830 2 INFO nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:28:23 np0005466030 nova_compute[230518]: 2025-10-02 13:28:23.833 2 DEBUG nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:28:23 np0005466030 nova_compute[230518]: 2025-10-02 13:28:23.833 2 DEBUG nova.network.neutron [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:28:23 np0005466030 nova_compute[230518]: 2025-10-02 13:28:23.854 2 DEBUG nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:28:23 np0005466030 nova_compute[230518]: 2025-10-02 13:28:23.913 2 INFO nova.virt.block_device [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Booting with volume snapshot b50be4d5-612a-4434-8eb6-55d27bed7a4d at /dev/vda#033[00m
Oct  2 09:28:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:23.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:24 np0005466030 nova_compute[230518]: 2025-10-02 13:28:24.075 2 DEBUG nova.policy [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2cb47684d0b34c729e9611e7b3943bed', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '18799a1c93354809911705bb424e673f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:28:24 np0005466030 nova_compute[230518]: 2025-10-02 13:28:24.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:25 np0005466030 nova_compute[230518]: 2025-10-02 13:28:25.000 2 DEBUG nova.network.neutron [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Successfully created port: 650cec0d-3a37-4324-87bb-85f638f2c4fd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:28:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:28:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:25.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:28:25 np0005466030 nova_compute[230518]: 2025-10-02 13:28:25.670 2 DEBUG nova.network.neutron [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Successfully updated port: 650cec0d-3a37-4324-87bb-85f638f2c4fd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:28:25 np0005466030 nova_compute[230518]: 2025-10-02 13:28:25.697 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:28:25 np0005466030 nova_compute[230518]: 2025-10-02 13:28:25.698 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquired lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:28:25 np0005466030 nova_compute[230518]: 2025-10-02 13:28:25.698 2 DEBUG nova.network.neutron [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:28:25 np0005466030 nova_compute[230518]: 2025-10-02 13:28:25.790 2 DEBUG nova.compute.manager [req-b2fb6372-3546-4666-9a8c-206dd1ded822 req-eeb7a9d5-2414-406e-9816-019c358511e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received event network-changed-650cec0d-3a37-4324-87bb-85f638f2c4fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:28:25 np0005466030 nova_compute[230518]: 2025-10-02 13:28:25.791 2 DEBUG nova.compute.manager [req-b2fb6372-3546-4666-9a8c-206dd1ded822 req-eeb7a9d5-2414-406e-9816-019c358511e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Refreshing instance network info cache due to event network-changed-650cec0d-3a37-4324-87bb-85f638f2c4fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:28:25 np0005466030 nova_compute[230518]: 2025-10-02 13:28:25.791 2 DEBUG oslo_concurrency.lockutils [req-b2fb6372-3546-4666-9a8c-206dd1ded822 req-eeb7a9d5-2414-406e-9816-019c358511e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:28:25 np0005466030 podman[318334]: 2025-10-02 13:28:25.805671524 +0000 UTC m=+0.057820245 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, 
tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:28:25 np0005466030 podman[318333]: 2025-10-02 13:28:25.82852117 +0000 UTC m=+0.083848231 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 09:28:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:28:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:25.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:28:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:25.988 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:28:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:25.988 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:28:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:25.989 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:28:26 np0005466030 nova_compute[230518]: 2025-10-02 13:28:26.062 2 DEBUG nova.network.neutron [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:28:27 np0005466030 nova_compute[230518]: 2025-10-02 13:28:27.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:27.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:27.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:28 np0005466030 nova_compute[230518]: 2025-10-02 13:28:28.176 2 DEBUG os_brick.utils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 09:28:28 np0005466030 nova_compute[230518]: 2025-10-02 13:28:28.178 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:28:28 np0005466030 nova_compute[230518]: 2025-10-02 13:28:28.193 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:28:28 np0005466030 nova_compute[230518]: 2025-10-02 13:28:28.194 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[7a3034c3-6b7b-40ed-a4d5-3feae6696d0a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:28 np0005466030 nova_compute[230518]: 2025-10-02 13:28:28.195 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:28:28 np0005466030 nova_compute[230518]: 2025-10-02 13:28:28.207 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:28:28 np0005466030 nova_compute[230518]: 2025-10-02 13:28:28.207 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[018ea3d8-8f72-411c-9439-9ff5be3c9d0b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:28 np0005466030 nova_compute[230518]: 2025-10-02 13:28:28.209 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:28:28 np0005466030 nova_compute[230518]: 2025-10-02 13:28:28.221 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:28:28 np0005466030 nova_compute[230518]: 2025-10-02 13:28:28.222 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[5c748a54-011a-4816-9c10-2a680dc79a4f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:28 np0005466030 nova_compute[230518]: 2025-10-02 13:28:28.224 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[8d8bb748-0142-43d7-bf89-cea4604d87da]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:28 np0005466030 nova_compute[230518]: 2025-10-02 13:28:28.224 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:28:28 np0005466030 nova_compute[230518]: 2025-10-02 13:28:28.266 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "nvme version" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:28:28 np0005466030 nova_compute[230518]: 2025-10-02 13:28:28.268 2 DEBUG os_brick.initiator.connectors.lightos [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 09:28:28 np0005466030 nova_compute[230518]: 2025-10-02 13:28:28.269 2 DEBUG os_brick.initiator.connectors.lightos [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 09:28:28 np0005466030 nova_compute[230518]: 2025-10-02 13:28:28.269 2 DEBUG os_brick.initiator.connectors.lightos [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 09:28:28 np0005466030 nova_compute[230518]: 2025-10-02 13:28:28.269 2 DEBUG os_brick.utils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] <== get_connector_properties: return (92ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 09:28:28 np0005466030 nova_compute[230518]: 2025-10-02 13:28:28.269 2 DEBUG nova.virt.block_device [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Updating existing volume attachment record: 4f9bf0c2-6ea0-491d-b924-500ff3decac0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.036 2 DEBUG nova.network.neutron [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Updating instance_info_cache with network_info: [{"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:28:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:28:29 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2295047483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.075 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Releasing lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.076 2 DEBUG nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Instance network_info: |[{"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.076 2 DEBUG oslo_concurrency.lockutils [req-b2fb6372-3546-4666-9a8c-206dd1ded822 req-eeb7a9d5-2414-406e-9816-019c358511e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.076 2 DEBUG nova.network.neutron [req-b2fb6372-3546-4666-9a8c-206dd1ded822 req-eeb7a9d5-2414-406e-9816-019c358511e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Refreshing network info cache for port 650cec0d-3a37-4324-87bb-85f638f2c4fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.378 2 DEBUG nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.380 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.381 2 INFO nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Creating image(s)#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.381 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.381 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Ensure instance console log exists: /var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.382 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.382 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.382 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.384 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Start _get_guest_xml network_info=[{"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-10-02T13:28:13Z,direct_url=<?>,disk_format='qcow2',id=e0399c4c-8352-497f-b361-45e672712e68,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-566470057',owner='18799a1c93354809911705bb424e673f',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-10-02T13:28:14Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'delete_on_termination': True, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4f5d24ac-9e35-4d38-a9e7-6dec734d8ef8', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4f5d24ac-9e35-4d38-a9e7-6dec734d8ef8', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '4fe12372-ed4b-40ab-9cf2-dcf304f21c31', 'attached_at': '', 'detached_at': '', 'volume_id': '4f5d24ac-9e35-4d38-a9e7-6dec734d8ef8', 'serial': '4f5d24ac-9e35-4d38-a9e7-6dec734d8ef8'}, 'boot_index': 0, 'attachment_id': '4f9bf0c2-6ea0-491d-b924-500ff3decac0', 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.390 2 WARNING nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.394 2 DEBUG nova.virt.libvirt.host [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.395 2 DEBUG nova.virt.libvirt.host [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.397 2 DEBUG nova.virt.libvirt.host [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.398 2 DEBUG nova.virt.libvirt.host [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.399 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.400 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-10-02T13:28:13Z,direct_url=<?>,disk_format='qcow2',id=e0399c4c-8352-497f-b361-45e672712e68,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-566470057',owner='18799a1c93354809911705bb424e673f',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-10-02T13:28:14Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.400 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.400 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.401 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.401 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.401 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.401 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.402 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.402 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.402 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.402 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.439 2 DEBUG nova.storage.rbd_utils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 4fe12372-ed4b-40ab-9cf2-dcf304f21c31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.443 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:29.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:28:29 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/521275966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.864 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.888 2 DEBUG nova.virt.libvirt.vif [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:28:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-426050554',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-426050554',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-426050554',id=221,image_ref='e0399c4c-8352-497f-b361-45e672712e68',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLILQTOU+IPOd8w0GqrVsN/QbgbwiFJSPjI2JTUbPYQ/ozNt878L7gRDoBhnJsqCVc05+BAE6CcyuObEWMpQFhUuKt2iunDqIotZaYJTz1b891j8Z4tJFAz/OrIq++6nPw==',key_name='tempest-keypair-176326542',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-hck380nt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1344814684',image_owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:28:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=4fe12372-ed4b-40ab-9cf2-dcf304f21c31,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building')
 vif={"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.889 2 DEBUG nova.network.os_vif_util [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.890 2 DEBUG nova.network.os_vif_util [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:62:3b,bridge_name='br-int',has_traffic_filtering=True,id=650cec0d-3a37-4324-87bb-85f638f2c4fd,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap650cec0d-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.891 2 DEBUG nova.objects.instance [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4fe12372-ed4b-40ab-9cf2-dcf304f21c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.913 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:28:29 np0005466030 nova_compute[230518]:  <uuid>4fe12372-ed4b-40ab-9cf2-dcf304f21c31</uuid>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:  <name>instance-000000dd</name>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-426050554</nova:name>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:28:29</nova:creationTime>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:28:29 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:        <nova:user uuid="2cb47684d0b34c729e9611e7b3943bed">tempest-TestVolumeBootPattern-1344814684-project-member</nova:user>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:        <nova:project uuid="18799a1c93354809911705bb424e673f">tempest-TestVolumeBootPattern-1344814684</nova:project>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <nova:root type="image" uuid="e0399c4c-8352-497f-b361-45e672712e68"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:        <nova:port uuid="650cec0d-3a37-4324-87bb-85f638f2c4fd">
Oct  2 09:28:29 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <entry name="serial">4fe12372-ed4b-40ab-9cf2-dcf304f21c31</entry>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <entry name="uuid">4fe12372-ed4b-40ab-9cf2-dcf304f21c31</entry>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/4fe12372-ed4b-40ab-9cf2-dcf304f21c31_disk.config">
Oct  2 09:28:29 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:28:29 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="volumes/volume-4f5d24ac-9e35-4d38-a9e7-6dec734d8ef8">
Oct  2 09:28:29 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:28:29 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <serial>4f5d24ac-9e35-4d38-a9e7-6dec734d8ef8</serial>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:65:62:3b"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <target dev="tap650cec0d-3a"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31/console.log" append="off"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <input type="keyboard" bus="usb"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:28:29 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:28:29 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:28:29 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:28:29 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.914 2 DEBUG nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Preparing to wait for external event network-vif-plugged-650cec0d-3a37-4324-87bb-85f638f2c4fd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.914 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.915 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.915 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.916 2 DEBUG nova.virt.libvirt.vif [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:28:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-426050554',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-426050554',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-426050554',id=221,image_ref='e0399c4c-8352-497f-b361-45e672712e68',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLILQTOU+IPOd8w0GqrVsN/QbgbwiFJSPjI2JTUbPYQ/ozNt878L7gRDoBhnJsqCVc05+BAE6CcyuObEWMpQFhUuKt2iunDqIotZaYJTz1b891j8Z4tJFAz/OrIq++6nPw==',key_name='tempest-keypair-176326542',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-hck380nt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1344814684',image_owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:28:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=4fe12372-ed4b-40ab-9cf2-dcf304f21c31,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='
building') vif={"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.916 2 DEBUG nova.network.os_vif_util [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.917 2 DEBUG nova.network.os_vif_util [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:62:3b,bridge_name='br-int',has_traffic_filtering=True,id=650cec0d-3a37-4324-87bb-85f638f2c4fd,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap650cec0d-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.917 2 DEBUG os_vif [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:62:3b,bridge_name='br-int',has_traffic_filtering=True,id=650cec0d-3a37-4324-87bb-85f638f2c4fd,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap650cec0d-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.921 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap650cec0d-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.921 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap650cec0d-3a, col_values=(('external_ids', {'iface-id': '650cec0d-3a37-4324-87bb-85f638f2c4fd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:62:3b', 'vm-uuid': '4fe12372-ed4b-40ab-9cf2-dcf304f21c31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:29 np0005466030 NetworkManager[44960]: <info>  [1759411709.9238] manager: (tap650cec0d-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:28:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:29.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:28:29 np0005466030 nova_compute[230518]: 2025-10-02 13:28:29.931 2 INFO os_vif [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:62:3b,bridge_name='br-int',has_traffic_filtering=True,id=650cec0d-3a37-4324-87bb-85f638f2c4fd,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap650cec0d-3a')#033[00m
Oct  2 09:28:30 np0005466030 nova_compute[230518]: 2025-10-02 13:28:30.266 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:28:30 np0005466030 nova_compute[230518]: 2025-10-02 13:28:30.267 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:28:30 np0005466030 nova_compute[230518]: 2025-10-02 13:28:30.267 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No VIF found with MAC fa:16:3e:65:62:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:28:30 np0005466030 nova_compute[230518]: 2025-10-02 13:28:30.267 2 INFO nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Using config drive#033[00m
Oct  2 09:28:30 np0005466030 nova_compute[230518]: 2025-10-02 13:28:30.294 2 DEBUG nova.storage.rbd_utils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 4fe12372-ed4b-40ab-9cf2-dcf304f21c31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:28:31 np0005466030 nova_compute[230518]: 2025-10-02 13:28:31.188 2 INFO nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Creating config drive at /var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31/disk.config#033[00m
Oct  2 09:28:31 np0005466030 nova_compute[230518]: 2025-10-02 13:28:31.195 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphgsnrxkz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:28:31 np0005466030 nova_compute[230518]: 2025-10-02 13:28:31.334 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphgsnrxkz" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:28:31 np0005466030 nova_compute[230518]: 2025-10-02 13:28:31.370 2 DEBUG nova.storage.rbd_utils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 4fe12372-ed4b-40ab-9cf2-dcf304f21c31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:28:31 np0005466030 nova_compute[230518]: 2025-10-02 13:28:31.373 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31/disk.config 4fe12372-ed4b-40ab-9cf2-dcf304f21c31_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:28:31 np0005466030 nova_compute[230518]: 2025-10-02 13:28:31.539 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31/disk.config 4fe12372-ed4b-40ab-9cf2-dcf304f21c31_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:28:31 np0005466030 nova_compute[230518]: 2025-10-02 13:28:31.540 2 INFO nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Deleting local config drive /var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31/disk.config because it was imported into RBD.#033[00m
Oct  2 09:28:31 np0005466030 kernel: tap650cec0d-3a: entered promiscuous mode
Oct  2 09:28:31 np0005466030 ovn_controller[129257]: 2025-10-02T13:28:31Z|00886|binding|INFO|Claiming lport 650cec0d-3a37-4324-87bb-85f638f2c4fd for this chassis.
Oct  2 09:28:31 np0005466030 NetworkManager[44960]: <info>  [1759411711.5908] manager: (tap650cec0d-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/406)
Oct  2 09:28:31 np0005466030 nova_compute[230518]: 2025-10-02 13:28:31.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:31 np0005466030 ovn_controller[129257]: 2025-10-02T13:28:31Z|00887|binding|INFO|650cec0d-3a37-4324-87bb-85f638f2c4fd: Claiming fa:16:3e:65:62:3b 10.100.0.10
Oct  2 09:28:31 np0005466030 nova_compute[230518]: 2025-10-02 13:28:31.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:31 np0005466030 nova_compute[230518]: 2025-10-02 13:28:31.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:31 np0005466030 NetworkManager[44960]: <info>  [1759411711.6048] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/407)
Oct  2 09:28:31 np0005466030 nova_compute[230518]: 2025-10-02 13:28:31.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:31 np0005466030 NetworkManager[44960]: <info>  [1759411711.6057] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.608 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:62:3b 10.100.0.10'], port_security=['fa:16:3e:65:62:3b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4fe12372-ed4b-40ab-9cf2-dcf304f21c31', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ae0586a1-cfba-4e03-bfd2-a14f300878bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=650cec0d-3a37-4324-87bb-85f638f2c4fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.609 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 650cec0d-3a37-4324-87bb-85f638f2c4fd in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 bound to our chassis#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.611 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5#033[00m
Oct  2 09:28:31 np0005466030 systemd-udevd[318501]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.621 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[19b346d0-d0e2-4c26-a0f0-06967b9068e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.622 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap858f2b6f-81 in ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:28:31 np0005466030 systemd-machined[188247]: New machine qemu-100-instance-000000dd.
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.625 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap858f2b6f-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.625 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8e85f8c0-3599-4c6f-a461-65247e0447ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.626 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[96bb8bc9-d9a8-46f6-81d3-ea5476d2c977]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.637 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[eed41117-9de7-4e14-9721-52c1461581f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:31 np0005466030 NetworkManager[44960]: <info>  [1759411711.6430] device (tap650cec0d-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:28:31 np0005466030 NetworkManager[44960]: <info>  [1759411711.6455] device (tap650cec0d-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:28:31 np0005466030 systemd[1]: Started Virtual Machine qemu-100-instance-000000dd.
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.665 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[76231396-6db0-40c6-a63f-64d16ac4bcd7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:31.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:31 np0005466030 nova_compute[230518]: 2025-10-02 13:28:31.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.719 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b4dfdc-6eb1-4615-9522-b9ded794efe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:31 np0005466030 nova_compute[230518]: 2025-10-02 13:28:31.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.728 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1b73398f-be50-44bc-91a7-e24b864d7e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:31 np0005466030 NetworkManager[44960]: <info>  [1759411711.7294] manager: (tap858f2b6f-80): new Veth device (/org/freedesktop/NetworkManager/Devices/409)
Oct  2 09:28:31 np0005466030 systemd-udevd[318505]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:28:31 np0005466030 nova_compute[230518]: 2025-10-02 13:28:31.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:31 np0005466030 ovn_controller[129257]: 2025-10-02T13:28:31Z|00888|binding|INFO|Setting lport 650cec0d-3a37-4324-87bb-85f638f2c4fd ovn-installed in OVS
Oct  2 09:28:31 np0005466030 ovn_controller[129257]: 2025-10-02T13:28:31Z|00889|binding|INFO|Setting lport 650cec0d-3a37-4324-87bb-85f638f2c4fd up in Southbound
Oct  2 09:28:31 np0005466030 nova_compute[230518]: 2025-10-02 13:28:31.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.762 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c06c31ca-791d-4e0d-8891-99b92f28e625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.765 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e82a4f27-3c54-4251-beea-4321ac38bbf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:31 np0005466030 NetworkManager[44960]: <info>  [1759411711.7835] device (tap858f2b6f-80): carrier: link connected
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.787 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7b2923f6-8d58-4eda-9f3c-cc516c00f828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.803 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b55b75ff-2234-4cba-9218-938bc06ae8ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858f2b6f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:29:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 267], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 957534, 'reachable_time': 31910, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318533, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.816 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c8fde3a7-d55e-4bcb-8af8-ce21b55e34e2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:29ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 957534, 'tstamp': 957534}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318534, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.835 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d25f889f-e49e-4fe3-a7ca-af2facc2223c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858f2b6f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:29:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 267], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 957534, 'reachable_time': 31910, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318535, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.858 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[84a51b6c-1975-4880-83d8-af4033149128]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.909 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f3a45279-1b22-4995-bcb7-f790ed175b44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.910 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap858f2b6f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.911 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.911 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap858f2b6f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:28:31 np0005466030 nova_compute[230518]: 2025-10-02 13:28:31.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:31 np0005466030 NetworkManager[44960]: <info>  [1759411711.9136] manager: (tap858f2b6f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Oct  2 09:28:31 np0005466030 kernel: tap858f2b6f-80: entered promiscuous mode
Oct  2 09:28:31 np0005466030 nova_compute[230518]: 2025-10-02 13:28:31.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.916 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap858f2b6f-80, col_values=(('external_ids', {'iface-id': 'cd468d5a-0c73-498a-8776-3dc2ab63d9cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:28:31 np0005466030 nova_compute[230518]: 2025-10-02 13:28:31.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:31 np0005466030 ovn_controller[129257]: 2025-10-02T13:28:31Z|00890|binding|INFO|Releasing lport cd468d5a-0c73-498a-8776-3dc2ab63d9cf from this chassis (sb_readonly=0)
Oct  2 09:28:31 np0005466030 nova_compute[230518]: 2025-10-02 13:28:31.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:28:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:31.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.933 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.934 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ef7d76-877a-4dcc-a7ba-9f1b004ce7bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.935 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:28:31 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.936 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'env', 'PROCESS_TAG=haproxy-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.046 2 DEBUG nova.network.neutron [req-b2fb6372-3546-4666-9a8c-206dd1ded822 req-eeb7a9d5-2414-406e-9816-019c358511e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Updated VIF entry in instance network info cache for port 650cec0d-3a37-4324-87bb-85f638f2c4fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.047 2 DEBUG nova.network.neutron [req-b2fb6372-3546-4666-9a8c-206dd1ded822 req-eeb7a9d5-2414-406e-9816-019c358511e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Updating instance_info_cache with network_info: [{"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.064 2 DEBUG oslo_concurrency.lockutils [req-b2fb6372-3546-4666-9a8c-206dd1ded822 req-eeb7a9d5-2414-406e-9816-019c358511e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.241 2 DEBUG nova.compute.manager [req-2132d589-3a81-4d07-9cdb-07242d73117f req-196c82d5-fc2b-4848-bf63-1eb765f8ddaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received event network-vif-plugged-650cec0d-3a37-4324-87bb-85f638f2c4fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.242 2 DEBUG oslo_concurrency.lockutils [req-2132d589-3a81-4d07-9cdb-07242d73117f req-196c82d5-fc2b-4848-bf63-1eb765f8ddaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.242 2 DEBUG oslo_concurrency.lockutils [req-2132d589-3a81-4d07-9cdb-07242d73117f req-196c82d5-fc2b-4848-bf63-1eb765f8ddaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.242 2 DEBUG oslo_concurrency.lockutils [req-2132d589-3a81-4d07-9cdb-07242d73117f req-196c82d5-fc2b-4848-bf63-1eb765f8ddaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.242 2 DEBUG nova.compute.manager [req-2132d589-3a81-4d07-9cdb-07242d73117f req-196c82d5-fc2b-4848-bf63-1eb765f8ddaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Processing event network-vif-plugged-650cec0d-3a37-4324-87bb-85f638f2c4fd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:32 np0005466030 podman[318601]: 2025-10-02 13:28:32.295594204 +0000 UTC m=+0.048912796 container create b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:28:32 np0005466030 systemd[1]: Started libpod-conmon-b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39.scope.
Oct  2 09:28:32 np0005466030 systemd[1]: Started libcrun container.
Oct  2 09:28:32 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7018b0b22f5f7d1f2d4dd3aed7c70f935e42ebe9c211ba7bf0b7ea0fc5157f9a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:28:32 np0005466030 podman[318601]: 2025-10-02 13:28:32.26835003 +0000 UTC m=+0.021668642 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:28:32 np0005466030 podman[318601]: 2025-10-02 13:28:32.37454276 +0000 UTC m=+0.127861372 container init b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  2 09:28:32 np0005466030 podman[318601]: 2025-10-02 13:28:32.379732343 +0000 UTC m=+0.133050935 container start b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 09:28:32 np0005466030 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[318624]: [NOTICE]   (318628) : New worker (318630) forked
Oct  2 09:28:32 np0005466030 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[318624]: [NOTICE]   (318628) : Loading success.
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.768 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411712.767411, 4fe12372-ed4b-40ab-9cf2-dcf304f21c31 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.769 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] VM Started (Lifecycle Event)#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.773 2 DEBUG nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.778 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.784 2 INFO nova.virt.libvirt.driver [-] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Instance spawned successfully.#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.785 2 INFO nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Took 3.41 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.786 2 DEBUG nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.800 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.803 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.826 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.827 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411712.7676258, 4fe12372-ed4b-40ab-9cf2-dcf304f21c31 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.827 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.848 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.851 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411712.7781487, 4fe12372-ed4b-40ab-9cf2-dcf304f21c31 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.851 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.859 2 INFO nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Took 9.69 seconds to build instance.#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.864 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.866 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:28:32 np0005466030 nova_compute[230518]: 2025-10-02 13:28:32.872 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:28:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:33.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:33.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:34 np0005466030 nova_compute[230518]: 2025-10-02 13:28:34.456 2 DEBUG nova.compute.manager [req-5f426605-ef1e-4431-bf78-99960b802289 req-e0374b05-8be5-453d-a534-1aa26581a560 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received event network-vif-plugged-650cec0d-3a37-4324-87bb-85f638f2c4fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:28:34 np0005466030 nova_compute[230518]: 2025-10-02 13:28:34.456 2 DEBUG oslo_concurrency.lockutils [req-5f426605-ef1e-4431-bf78-99960b802289 req-e0374b05-8be5-453d-a534-1aa26581a560 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:28:34 np0005466030 nova_compute[230518]: 2025-10-02 13:28:34.456 2 DEBUG oslo_concurrency.lockutils [req-5f426605-ef1e-4431-bf78-99960b802289 req-e0374b05-8be5-453d-a534-1aa26581a560 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:28:34 np0005466030 nova_compute[230518]: 2025-10-02 13:28:34.456 2 DEBUG oslo_concurrency.lockutils [req-5f426605-ef1e-4431-bf78-99960b802289 req-e0374b05-8be5-453d-a534-1aa26581a560 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:28:34 np0005466030 nova_compute[230518]: 2025-10-02 13:28:34.457 2 DEBUG nova.compute.manager [req-5f426605-ef1e-4431-bf78-99960b802289 req-e0374b05-8be5-453d-a534-1aa26581a560 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] No waiting events found dispatching network-vif-plugged-650cec0d-3a37-4324-87bb-85f638f2c4fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:28:34 np0005466030 nova_compute[230518]: 2025-10-02 13:28:34.457 2 WARNING nova.compute.manager [req-5f426605-ef1e-4431-bf78-99960b802289 req-e0374b05-8be5-453d-a534-1aa26581a560 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received unexpected event network-vif-plugged-650cec0d-3a37-4324-87bb-85f638f2c4fd for instance with vm_state active and task_state None.#033[00m
Oct  2 09:28:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:34 np0005466030 nova_compute[230518]: 2025-10-02 13:28:34.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:35.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:35.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:36 np0005466030 nova_compute[230518]: 2025-10-02 13:28:36.696 2 DEBUG nova.compute.manager [req-76f5dc81-ed73-4b27-98e9-fac63d5c36f7 req-6dd28e46-7ee3-48b9-b572-0179db323013 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received event network-changed-650cec0d-3a37-4324-87bb-85f638f2c4fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:28:36 np0005466030 nova_compute[230518]: 2025-10-02 13:28:36.698 2 DEBUG nova.compute.manager [req-76f5dc81-ed73-4b27-98e9-fac63d5c36f7 req-6dd28e46-7ee3-48b9-b572-0179db323013 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Refreshing instance network info cache due to event network-changed-650cec0d-3a37-4324-87bb-85f638f2c4fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:28:36 np0005466030 nova_compute[230518]: 2025-10-02 13:28:36.698 2 DEBUG oslo_concurrency.lockutils [req-76f5dc81-ed73-4b27-98e9-fac63d5c36f7 req-6dd28e46-7ee3-48b9-b572-0179db323013 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:28:36 np0005466030 nova_compute[230518]: 2025-10-02 13:28:36.699 2 DEBUG oslo_concurrency.lockutils [req-76f5dc81-ed73-4b27-98e9-fac63d5c36f7 req-6dd28e46-7ee3-48b9-b572-0179db323013 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:28:36 np0005466030 nova_compute[230518]: 2025-10-02 13:28:36.700 2 DEBUG nova.network.neutron [req-76f5dc81-ed73-4b27-98e9-fac63d5c36f7 req-6dd28e46-7ee3-48b9-b572-0179db323013 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Refreshing network info cache for port 650cec0d-3a37-4324-87bb-85f638f2c4fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:28:37 np0005466030 nova_compute[230518]: 2025-10-02 13:28:37.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:37.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:37 np0005466030 podman[318640]: 2025-10-02 13:28:37.818095788 +0000 UTC m=+0.060173058 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 09:28:37 np0005466030 podman[318639]: 2025-10-02 13:28:37.855535563 +0000 UTC m=+0.087496556 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:28:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:37.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:38 np0005466030 nova_compute[230518]: 2025-10-02 13:28:38.267 2 DEBUG nova.network.neutron [req-76f5dc81-ed73-4b27-98e9-fac63d5c36f7 req-6dd28e46-7ee3-48b9-b572-0179db323013 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Updated VIF entry in instance network info cache for port 650cec0d-3a37-4324-87bb-85f638f2c4fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:28:38 np0005466030 nova_compute[230518]: 2025-10-02 13:28:38.267 2 DEBUG nova.network.neutron [req-76f5dc81-ed73-4b27-98e9-fac63d5c36f7 req-6dd28e46-7ee3-48b9-b572-0179db323013 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Updating instance_info_cache with network_info: [{"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:28:38 np0005466030 nova_compute[230518]: 2025-10-02 13:28:38.378 2 DEBUG oslo_concurrency.lockutils [req-76f5dc81-ed73-4b27-98e9-fac63d5c36f7 req-6dd28e46-7ee3-48b9-b572-0179db323013 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:28:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:39.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:39 np0005466030 nova_compute[230518]: 2025-10-02 13:28:39.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:28:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:39.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:28:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:41.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:41.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:42 np0005466030 nova_compute[230518]: 2025-10-02 13:28:42.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:28:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:28:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:28:43 np0005466030 nova_compute[230518]: 2025-10-02 13:28:43.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:43 np0005466030 nova_compute[230518]: 2025-10-02 13:28:43.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:28:43 np0005466030 nova_compute[230518]: 2025-10-02 13:28:43.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:43.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:43.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:44 np0005466030 nova_compute[230518]: 2025-10-02 13:28:44.882 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:28:44 np0005466030 nova_compute[230518]: 2025-10-02 13:28:44.883 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:28:44 np0005466030 nova_compute[230518]: 2025-10-02 13:28:44.883 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:28:44 np0005466030 nova_compute[230518]: 2025-10-02 13:28:44.883 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:28:44 np0005466030 nova_compute[230518]: 2025-10-02 13:28:44.883 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:28:44 np0005466030 nova_compute[230518]: 2025-10-02 13:28:44.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:28:45 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3495008382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:28:45 np0005466030 nova_compute[230518]: 2025-10-02 13:28:45.318 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:28:45 np0005466030 nova_compute[230518]: 2025-10-02 13:28:45.402 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000dd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:28:45 np0005466030 nova_compute[230518]: 2025-10-02 13:28:45.402 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000dd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:28:45 np0005466030 nova_compute[230518]: 2025-10-02 13:28:45.536 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:28:45 np0005466030 nova_compute[230518]: 2025-10-02 13:28:45.537 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4013MB free_disk=20.98794174194336GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:28:45 np0005466030 nova_compute[230518]: 2025-10-02 13:28:45.538 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:28:45 np0005466030 nova_compute[230518]: 2025-10-02 13:28:45.538 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:28:45 np0005466030 nova_compute[230518]: 2025-10-02 13:28:45.654 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 4fe12372-ed4b-40ab-9cf2-dcf304f21c31 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:28:45 np0005466030 nova_compute[230518]: 2025-10-02 13:28:45.655 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:28:45 np0005466030 nova_compute[230518]: 2025-10-02 13:28:45.655 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:28:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:45.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:45 np0005466030 nova_compute[230518]: 2025-10-02 13:28:45.743 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:28:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:45.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:28:46 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3216221222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:28:46 np0005466030 nova_compute[230518]: 2025-10-02 13:28:46.178 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:28:46 np0005466030 nova_compute[230518]: 2025-10-02 13:28:46.185 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:28:46 np0005466030 nova_compute[230518]: 2025-10-02 13:28:46.211 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:28:46 np0005466030 nova_compute[230518]: 2025-10-02 13:28:46.250 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:28:46 np0005466030 nova_compute[230518]: 2025-10-02 13:28:46.250 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:28:47 np0005466030 ovn_controller[129257]: 2025-10-02T13:28:47Z|00126|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.10
Oct  2 09:28:47 np0005466030 ovn_controller[129257]: 2025-10-02T13:28:47Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:65:62:3b 10.100.0.10
Oct  2 09:28:47 np0005466030 nova_compute[230518]: 2025-10-02 13:28:47.249 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:47 np0005466030 nova_compute[230518]: 2025-10-02 13:28:47.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:47.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:47.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:48 np0005466030 nova_compute[230518]: 2025-10-02 13:28:48.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:49.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:49 np0005466030 nova_compute[230518]: 2025-10-02 13:28:49.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:49.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:51 np0005466030 ovn_controller[129257]: 2025-10-02T13:28:51Z|00128|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.10
Oct  2 09:28:51 np0005466030 ovn_controller[129257]: 2025-10-02T13:28:51Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:65:62:3b 10.100.0.10
Oct  2 09:28:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:51.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:51.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:28:52Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:62:3b 10.100.0.10
Oct  2 09:28:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:28:52Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:62:3b 10.100.0.10
Oct  2 09:28:52 np0005466030 nova_compute[230518]: 2025-10-02 13:28:52.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:52 np0005466030 nova_compute[230518]: 2025-10-02 13:28:52.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:53 np0005466030 nova_compute[230518]: 2025-10-02 13:28:53.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:28:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:53.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:28:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:28:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:53.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:28:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:28:53 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:28:54 np0005466030 nova_compute[230518]: 2025-10-02 13:28:54.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:54 np0005466030 nova_compute[230518]: 2025-10-02 13:28:54.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:55.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:28:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:55.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:28:56 np0005466030 nova_compute[230518]: 2025-10-02 13:28:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:56 np0005466030 nova_compute[230518]: 2025-10-02 13:28:56.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:28:56 np0005466030 nova_compute[230518]: 2025-10-02 13:28:56.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:28:56 np0005466030 podman[318908]: 2025-10-02 13:28:56.822047367 +0000 UTC m=+0.055194133 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct  2 09:28:56 np0005466030 podman[318907]: 2025-10-02 13:28:56.858547842 +0000 UTC m=+0.098743469 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  2 09:28:56 np0005466030 nova_compute[230518]: 2025-10-02 13:28:56.998 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:28:56 np0005466030 nova_compute[230518]: 2025-10-02 13:28:56.999 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:28:57 np0005466030 nova_compute[230518]: 2025-10-02 13:28:56.999 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:28:57 np0005466030 nova_compute[230518]: 2025-10-02 13:28:57.000 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4fe12372-ed4b-40ab-9cf2-dcf304f21c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:28:57 np0005466030 nova_compute[230518]: 2025-10-02 13:28:57.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:28:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:57.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:28:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:28:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:57.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:28:59 np0005466030 nova_compute[230518]: 2025-10-02 13:28:59.046 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Updating instance_info_cache with network_info: [{"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:28:59 np0005466030 nova_compute[230518]: 2025-10-02 13:28:59.073 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:28:59 np0005466030 nova_compute[230518]: 2025-10-02 13:28:59.073 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:28:59 np0005466030 nova_compute[230518]: 2025-10-02 13:28:59.073 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:28:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:28:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:59.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:28:59 np0005466030 nova_compute[230518]: 2025-10-02 13:28:59.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:28:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:28:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:28:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:59.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:01 np0005466030 nova_compute[230518]: 2025-10-02 13:29:01.068 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:01.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:01.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:02 np0005466030 nova_compute[230518]: 2025-10-02 13:29:02.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:03.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:03.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:04 np0005466030 nova_compute[230518]: 2025-10-02 13:29:04.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:05.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:05.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:06 np0005466030 ovn_controller[129257]: 2025-10-02T13:29:06Z|00891|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Oct  2 09:29:07 np0005466030 nova_compute[230518]: 2025-10-02 13:29:07.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:07.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:07.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:08 np0005466030 podman[318954]: 2025-10-02 13:29:08.825933002 +0000 UTC m=+0.070926115 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:29:08 np0005466030 podman[318953]: 2025-10-02 13:29:08.825919633 +0000 UTC m=+0.068053247 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:29:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.667 2 DEBUG oslo_concurrency.lockutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.668 2 DEBUG oslo_concurrency.lockutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.668 2 DEBUG oslo_concurrency.lockutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.669 2 DEBUG oslo_concurrency.lockutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.669 2 DEBUG oslo_concurrency.lockutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.671 2 INFO nova.compute.manager [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Terminating instance#033[00m
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.673 2 DEBUG nova.compute.manager [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:29:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:09.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:09 np0005466030 kernel: tap650cec0d-3a (unregistering): left promiscuous mode
Oct  2 09:29:09 np0005466030 NetworkManager[44960]: <info>  [1759411749.7435] device (tap650cec0d-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:29:09 np0005466030 ovn_controller[129257]: 2025-10-02T13:29:09Z|00892|binding|INFO|Releasing lport 650cec0d-3a37-4324-87bb-85f638f2c4fd from this chassis (sb_readonly=0)
Oct  2 09:29:09 np0005466030 ovn_controller[129257]: 2025-10-02T13:29:09Z|00893|binding|INFO|Setting lport 650cec0d-3a37-4324-87bb-85f638f2c4fd down in Southbound
Oct  2 09:29:09 np0005466030 ovn_controller[129257]: 2025-10-02T13:29:09Z|00894|binding|INFO|Removing iface tap650cec0d-3a ovn-installed in OVS
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:09.769 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:62:3b 10.100.0.10'], port_security=['fa:16:3e:65:62:3b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4fe12372-ed4b-40ab-9cf2-dcf304f21c31', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ae0586a1-cfba-4e03-bfd2-a14f300878bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=650cec0d-3a37-4324-87bb-85f638f2c4fd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:29:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:09.770 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 650cec0d-3a37-4324-87bb-85f638f2c4fd in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 unbound from our chassis#033[00m
Oct  2 09:29:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:09.772 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:29:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:09.773 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[64658e9f-9287-479d-a49e-d3fa7a220766]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:29:09 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:09.773 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 namespace which is not needed anymore#033[00m
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:09 np0005466030 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000dd.scope: Deactivated successfully.
Oct  2 09:29:09 np0005466030 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000dd.scope: Consumed 15.855s CPU time.
Oct  2 09:29:09 np0005466030 systemd-machined[188247]: Machine qemu-100-instance-000000dd terminated.
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.947 2 INFO nova.virt.libvirt.driver [-] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Instance destroyed successfully.#033[00m
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.948 2 DEBUG nova.objects.instance [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lazy-loading 'resources' on Instance uuid 4fe12372-ed4b-40ab-9cf2-dcf304f21c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.977 2 DEBUG nova.virt.libvirt.vif [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:28:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-426050554',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-426050554',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-426050554',id=221,image_ref='e0399c4c-8352-497f-b361-45e672712e68',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLILQTOU+IPOd8w0GqrVsN/QbgbwiFJSPjI2JTUbPYQ/ozNt878L7gRDoBhnJsqCVc05+BAE6CcyuObEWMpQFhUuKt2iunDqIotZaYJTz1b891j8Z4tJFAz/OrIq++6nPw==',key_name='tempest-keypair-176326542',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:28:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-hck380nt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1344814684',image_owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:28:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=4fe12372-ed4b-40ab-9cf2-dcf304f21c31,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": 
"650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.977 2 DEBUG nova.network.os_vif_util [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.978 2 DEBUG nova.network.os_vif_util [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:62:3b,bridge_name='br-int',has_traffic_filtering=True,id=650cec0d-3a37-4324-87bb-85f638f2c4fd,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap650cec0d-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.978 2 DEBUG os_vif [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:62:3b,bridge_name='br-int',has_traffic_filtering=True,id=650cec0d-3a37-4324-87bb-85f638f2c4fd,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap650cec0d-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.980 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap650cec0d-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:29:09 np0005466030 nova_compute[230518]: 2025-10-02 13:29:09.985 2 INFO os_vif [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:62:3b,bridge_name='br-int',has_traffic_filtering=True,id=650cec0d-3a37-4324-87bb-85f638f2c4fd,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap650cec0d-3a')#033[00m
Oct  2 09:29:09 np0005466030 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[318624]: [NOTICE]   (318628) : haproxy version is 2.8.14-c23fe91
Oct  2 09:29:09 np0005466030 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[318624]: [NOTICE]   (318628) : path to executable is /usr/sbin/haproxy
Oct  2 09:29:09 np0005466030 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[318624]: [WARNING]  (318628) : Exiting Master process...
Oct  2 09:29:09 np0005466030 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[318624]: [WARNING]  (318628) : Exiting Master process...
Oct  2 09:29:09 np0005466030 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[318624]: [ALERT]    (318628) : Current worker (318630) exited with code 143 (Terminated)
Oct  2 09:29:09 np0005466030 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[318624]: [WARNING]  (318628) : All workers exited. Exiting... (0)
Oct  2 09:29:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:09.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:09 np0005466030 systemd[1]: libpod-b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39.scope: Deactivated successfully.
Oct  2 09:29:09 np0005466030 podman[319015]: 2025-10-02 13:29:09.999824774 +0000 UTC m=+0.123299619 container died b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:29:10 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39-userdata-shm.mount: Deactivated successfully.
Oct  2 09:29:10 np0005466030 systemd[1]: var-lib-containers-storage-overlay-7018b0b22f5f7d1f2d4dd3aed7c70f935e42ebe9c211ba7bf0b7ea0fc5157f9a-merged.mount: Deactivated successfully.
Oct  2 09:29:10 np0005466030 nova_compute[230518]: 2025-10-02 13:29:10.247 2 DEBUG nova.compute.manager [req-3a569b5f-2d95-4440-9b6d-b364f9c1e36c req-9b77e6f1-deb8-4e05-ad2a-c4c6089aa803 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received event network-vif-unplugged-650cec0d-3a37-4324-87bb-85f638f2c4fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:29:10 np0005466030 nova_compute[230518]: 2025-10-02 13:29:10.247 2 DEBUG oslo_concurrency.lockutils [req-3a569b5f-2d95-4440-9b6d-b364f9c1e36c req-9b77e6f1-deb8-4e05-ad2a-c4c6089aa803 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:29:10 np0005466030 nova_compute[230518]: 2025-10-02 13:29:10.247 2 DEBUG oslo_concurrency.lockutils [req-3a569b5f-2d95-4440-9b6d-b364f9c1e36c req-9b77e6f1-deb8-4e05-ad2a-c4c6089aa803 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:29:10 np0005466030 nova_compute[230518]: 2025-10-02 13:29:10.248 2 DEBUG oslo_concurrency.lockutils [req-3a569b5f-2d95-4440-9b6d-b364f9c1e36c req-9b77e6f1-deb8-4e05-ad2a-c4c6089aa803 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:29:10 np0005466030 nova_compute[230518]: 2025-10-02 13:29:10.248 2 DEBUG nova.compute.manager [req-3a569b5f-2d95-4440-9b6d-b364f9c1e36c req-9b77e6f1-deb8-4e05-ad2a-c4c6089aa803 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] No waiting events found dispatching network-vif-unplugged-650cec0d-3a37-4324-87bb-85f638f2c4fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:29:10 np0005466030 nova_compute[230518]: 2025-10-02 13:29:10.248 2 DEBUG nova.compute.manager [req-3a569b5f-2d95-4440-9b6d-b364f9c1e36c req-9b77e6f1-deb8-4e05-ad2a-c4c6089aa803 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received event network-vif-unplugged-650cec0d-3a37-4324-87bb-85f638f2c4fd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:29:10 np0005466030 podman[319015]: 2025-10-02 13:29:10.50085627 +0000 UTC m=+0.624331115 container cleanup b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:29:11 np0005466030 podman[319070]: 2025-10-02 13:29:11.048523038 +0000 UTC m=+0.526505645 container remove b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:29:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:11.057 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[eea3d006-9b32-499b-b7da-712b024ccc41]: (4, ('Thu Oct  2 01:29:09 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 (b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39)\nb8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39\nThu Oct  2 01:29:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 (b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39)\nb8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:29:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:11.060 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[63e69818-3563-4ffa-afc8-037caa05a9dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:29:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:11.063 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap858f2b6f-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:29:11 np0005466030 nova_compute[230518]: 2025-10-02 13:29:11.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:11 np0005466030 kernel: tap858f2b6f-80: left promiscuous mode
Oct  2 09:29:11 np0005466030 systemd[1]: libpod-conmon-b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39.scope: Deactivated successfully.
Oct  2 09:29:11 np0005466030 nova_compute[230518]: 2025-10-02 13:29:11.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:11.136 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1c40305a-3aa5-49f8-942f-267e1b577ce6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:29:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:11.164 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[401666fc-d16d-4739-99a7-5c1b97a3e5fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:29:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:11.167 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a807b823-5046-4b4a-8003-73731d89d215]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:29:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:11.185 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[18a5d22b-2787-467d-8224-4c58138307bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 957526, 'reachable_time': 42906, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319086, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:29:11 np0005466030 systemd[1]: run-netns-ovnmeta\x2d858f2b6f\x2d8fe4\x2d471b\x2d981e\x2d5d0b08d2f4c5.mount: Deactivated successfully.
Oct  2 09:29:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:11.191 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:29:11 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:11.192 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[30a27b55-705c-4f8b-afd4-2222ab21f36d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:29:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:11.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:11 np0005466030 nova_compute[230518]: 2025-10-02 13:29:11.855 2 INFO nova.virt.libvirt.driver [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Deleting instance files /var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31_del#033[00m
Oct  2 09:29:11 np0005466030 nova_compute[230518]: 2025-10-02 13:29:11.856 2 INFO nova.virt.libvirt.driver [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Deletion of /var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31_del complete#033[00m
Oct  2 09:29:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:11.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:12 np0005466030 nova_compute[230518]: 2025-10-02 13:29:12.024 2 INFO nova.compute.manager [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Took 2.35 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:29:12 np0005466030 nova_compute[230518]: 2025-10-02 13:29:12.024 2 DEBUG oslo.service.loopingcall [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:29:12 np0005466030 nova_compute[230518]: 2025-10-02 13:29:12.025 2 DEBUG nova.compute.manager [-] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:29:12 np0005466030 nova_compute[230518]: 2025-10-02 13:29:12.025 2 DEBUG nova.network.neutron [-] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:29:12 np0005466030 nova_compute[230518]: 2025-10-02 13:29:12.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:12 np0005466030 nova_compute[230518]: 2025-10-02 13:29:12.357 2 DEBUG nova.compute.manager [req-57e6b395-55d5-41c9-b5f4-4fa15d047874 req-7dfd4bbd-840b-4cb8-9d02-4abdb30eb94b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received event network-vif-plugged-650cec0d-3a37-4324-87bb-85f638f2c4fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:29:12 np0005466030 nova_compute[230518]: 2025-10-02 13:29:12.358 2 DEBUG oslo_concurrency.lockutils [req-57e6b395-55d5-41c9-b5f4-4fa15d047874 req-7dfd4bbd-840b-4cb8-9d02-4abdb30eb94b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:29:12 np0005466030 nova_compute[230518]: 2025-10-02 13:29:12.358 2 DEBUG oslo_concurrency.lockutils [req-57e6b395-55d5-41c9-b5f4-4fa15d047874 req-7dfd4bbd-840b-4cb8-9d02-4abdb30eb94b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:29:12 np0005466030 nova_compute[230518]: 2025-10-02 13:29:12.358 2 DEBUG oslo_concurrency.lockutils [req-57e6b395-55d5-41c9-b5f4-4fa15d047874 req-7dfd4bbd-840b-4cb8-9d02-4abdb30eb94b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:29:12 np0005466030 nova_compute[230518]: 2025-10-02 13:29:12.359 2 DEBUG nova.compute.manager [req-57e6b395-55d5-41c9-b5f4-4fa15d047874 req-7dfd4bbd-840b-4cb8-9d02-4abdb30eb94b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] No waiting events found dispatching network-vif-plugged-650cec0d-3a37-4324-87bb-85f638f2c4fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:29:12 np0005466030 nova_compute[230518]: 2025-10-02 13:29:12.359 2 WARNING nova.compute.manager [req-57e6b395-55d5-41c9-b5f4-4fa15d047874 req-7dfd4bbd-840b-4cb8-9d02-4abdb30eb94b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received unexpected event network-vif-plugged-650cec0d-3a37-4324-87bb-85f638f2c4fd for instance with vm_state active and task_state deleting.#033[00m
Oct  2 09:29:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:13.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:13.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:14 np0005466030 nova_compute[230518]: 2025-10-02 13:29:14.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:15 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:15.151 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=84, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=83) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:29:15 np0005466030 nova_compute[230518]: 2025-10-02 13:29:15.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:15 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:15.152 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:29:15 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:15.153 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '84'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:29:15 np0005466030 nova_compute[230518]: 2025-10-02 13:29:15.164 2 DEBUG nova.network.neutron [-] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:29:15 np0005466030 nova_compute[230518]: 2025-10-02 13:29:15.213 2 INFO nova.compute.manager [-] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Took 3.19 seconds to deallocate network for instance.#033[00m
Oct  2 09:29:15 np0005466030 nova_compute[230518]: 2025-10-02 13:29:15.236 2 DEBUG nova.compute.manager [req-9e4381bc-a7ec-4def-b7ad-e26242fe3ee7 req-c1a97cf7-c053-4584-8f61-b4fd51766637 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received event network-vif-deleted-650cec0d-3a37-4324-87bb-85f638f2c4fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:29:15 np0005466030 nova_compute[230518]: 2025-10-02 13:29:15.391 2 INFO nova.compute.manager [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Took 0.18 seconds to detach 1 volumes for instance.#033[00m
Oct  2 09:29:15 np0005466030 nova_compute[230518]: 2025-10-02 13:29:15.392 2 DEBUG nova.compute.manager [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Deleting volume: 4f5d24ac-9e35-4d38-a9e7-6dec734d8ef8 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Oct  2 09:29:15 np0005466030 nova_compute[230518]: 2025-10-02 13:29:15.643 2 DEBUG oslo_concurrency.lockutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:29:15 np0005466030 nova_compute[230518]: 2025-10-02 13:29:15.643 2 DEBUG oslo_concurrency.lockutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:29:15 np0005466030 nova_compute[230518]: 2025-10-02 13:29:15.719 2 DEBUG oslo_concurrency.processutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:29:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:15.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:15.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:29:16 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1352680470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:29:16 np0005466030 nova_compute[230518]: 2025-10-02 13:29:16.163 2 DEBUG oslo_concurrency.processutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:29:16 np0005466030 nova_compute[230518]: 2025-10-02 13:29:16.170 2 DEBUG nova.compute.provider_tree [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:29:16 np0005466030 nova_compute[230518]: 2025-10-02 13:29:16.212 2 DEBUG nova.scheduler.client.report [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:29:16 np0005466030 nova_compute[230518]: 2025-10-02 13:29:16.305 2 DEBUG oslo_concurrency.lockutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:29:16 np0005466030 nova_compute[230518]: 2025-10-02 13:29:16.342 2 INFO nova.scheduler.client.report [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Deleted allocations for instance 4fe12372-ed4b-40ab-9cf2-dcf304f21c31#033[00m
Oct  2 09:29:16 np0005466030 nova_compute[230518]: 2025-10-02 13:29:16.417 2 DEBUG oslo_concurrency.lockutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:29:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:29:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/950874610' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:29:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:29:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/950874610' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:29:17 np0005466030 nova_compute[230518]: 2025-10-02 13:29:17.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:17.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:18.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:19.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:19 np0005466030 nova_compute[230518]: 2025-10-02 13:29:19.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:20.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e424 e424: 3 total, 3 up, 3 in
Oct  2 09:29:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:21.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:22.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:22 np0005466030 nova_compute[230518]: 2025-10-02 13:29:22.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:23.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:29:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:24.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:29:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:24 np0005466030 nova_compute[230518]: 2025-10-02 13:29:24.944 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759411749.941914, 4fe12372-ed4b-40ab-9cf2-dcf304f21c31 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:29:24 np0005466030 nova_compute[230518]: 2025-10-02 13:29:24.944 2 INFO nova.compute.manager [-] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:29:24 np0005466030 nova_compute[230518]: 2025-10-02 13:29:24.964 2 DEBUG nova.compute.manager [None req-0ffb723d-cee7-4837-b18d-312f022b85e8 - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:29:24 np0005466030 nova_compute[230518]: 2025-10-02 13:29:24.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:25.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:25.989 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:29:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:25.990 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:29:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:29:25.990 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:29:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:26.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:27 np0005466030 nova_compute[230518]: 2025-10-02 13:29:27.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:27.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e425 e425: 3 total, 3 up, 3 in
Oct  2 09:29:27 np0005466030 podman[319111]: 2025-10-02 13:29:27.80981253 +0000 UTC m=+0.053190550 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 09:29:27 np0005466030 podman[319110]: 2025-10-02 13:29:27.839077228 +0000 UTC m=+0.086222346 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  2 09:29:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:28.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:29.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:29 np0005466030 nova_compute[230518]: 2025-10-02 13:29:29.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:30.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:31.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:32.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:32 np0005466030 nova_compute[230518]: 2025-10-02 13:29:32.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:33.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:34.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:34 np0005466030 nova_compute[230518]: 2025-10-02 13:29:34.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:35.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:36.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:37 np0005466030 nova_compute[230518]: 2025-10-02 13:29:37.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:37.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:38.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:39.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:39 np0005466030 podman[319154]: 2025-10-02 13:29:39.792865415 +0000 UTC m=+0.049832174 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 09:29:39 np0005466030 podman[319155]: 2025-10-02 13:29:39.803152667 +0000 UTC m=+0.056460042 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:29:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:40.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:40 np0005466030 nova_compute[230518]: 2025-10-02 13:29:40.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e426 e426: 3 total, 3 up, 3 in
Oct  2 09:29:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:41.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:42.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:42 np0005466030 nova_compute[230518]: 2025-10-02 13:29:42.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:43.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:44.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:44 np0005466030 nova_compute[230518]: 2025-10-02 13:29:44.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:45 np0005466030 nova_compute[230518]: 2025-10-02 13:29:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:45 np0005466030 nova_compute[230518]: 2025-10-02 13:29:45.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:29:45 np0005466030 nova_compute[230518]: 2025-10-02 13:29:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:45 np0005466030 nova_compute[230518]: 2025-10-02 13:29:45.112 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:29:45 np0005466030 nova_compute[230518]: 2025-10-02 13:29:45.113 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:29:45 np0005466030 nova_compute[230518]: 2025-10-02 13:29:45.113 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:29:45 np0005466030 nova_compute[230518]: 2025-10-02 13:29:45.113 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:29:45 np0005466030 nova_compute[230518]: 2025-10-02 13:29:45.113 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:29:45 np0005466030 nova_compute[230518]: 2025-10-02 13:29:45.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:29:45 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4132422905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:29:45 np0005466030 nova_compute[230518]: 2025-10-02 13:29:45.542 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:29:45 np0005466030 nova_compute[230518]: 2025-10-02 13:29:45.694 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:29:45 np0005466030 nova_compute[230518]: 2025-10-02 13:29:45.695 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4207MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:29:45 np0005466030 nova_compute[230518]: 2025-10-02 13:29:45.696 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:29:45 np0005466030 nova_compute[230518]: 2025-10-02 13:29:45.696 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:29:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:45.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:45 np0005466030 nova_compute[230518]: 2025-10-02 13:29:45.774 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:29:45 np0005466030 nova_compute[230518]: 2025-10-02 13:29:45.774 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:29:45 np0005466030 nova_compute[230518]: 2025-10-02 13:29:45.833 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:29:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:46.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:29:46 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2373861693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:29:46 np0005466030 nova_compute[230518]: 2025-10-02 13:29:46.252 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:29:46 np0005466030 nova_compute[230518]: 2025-10-02 13:29:46.259 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:29:46 np0005466030 nova_compute[230518]: 2025-10-02 13:29:46.309 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:29:46 np0005466030 nova_compute[230518]: 2025-10-02 13:29:46.346 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:29:46 np0005466030 nova_compute[230518]: 2025-10-02 13:29:46.346 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:29:47 np0005466030 nova_compute[230518]: 2025-10-02 13:29:47.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:47.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:48.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 e427: 3 total, 3 up, 3 in
Oct  2 09:29:49 np0005466030 nova_compute[230518]: 2025-10-02 13:29:49.347 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:49.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:50.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:50 np0005466030 nova_compute[230518]: 2025-10-02 13:29:50.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:51.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:52.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:52 np0005466030 nova_compute[230518]: 2025-10-02 13:29:52.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:53 np0005466030 nova_compute[230518]: 2025-10-02 13:29:53.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:53.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:29:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:54.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:29:54 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:29:54 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:29:54 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:29:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:55 np0005466030 nova_compute[230518]: 2025-10-02 13:29:55.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:55 np0005466030 nova_compute[230518]: 2025-10-02 13:29:55.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:55 np0005466030 nova_compute[230518]: 2025-10-02 13:29:55.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:55.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:56.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:57 np0005466030 nova_compute[230518]: 2025-10-02 13:29:57.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:57 np0005466030 nova_compute[230518]: 2025-10-02 13:29:57.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:29:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:57.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:58 np0005466030 nova_compute[230518]: 2025-10-02 13:29:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:29:58 np0005466030 nova_compute[230518]: 2025-10-02 13:29:58.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:29:58 np0005466030 nova_compute[230518]: 2025-10-02 13:29:58.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:29:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:58.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:29:58 np0005466030 nova_compute[230518]: 2025-10-02 13:29:58.077 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:29:58 np0005466030 podman[319370]: 2025-10-02 13:29:58.809804254 +0000 UTC m=+0.055280585 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:29:58 np0005466030 podman[319369]: 2025-10-02 13:29:58.849336113 +0000 UTC m=+0.099675218 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:29:59 np0005466030 ovn_controller[129257]: 2025-10-02T13:29:59Z|00895|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct  2 09:29:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:29:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:29:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:29:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:59.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:00.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:00 np0005466030 nova_compute[230518]: 2025-10-02 13:30:00.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:01 np0005466030 ceph-mon[80926]: overall HEALTH_OK
Oct  2 09:30:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:30:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:30:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:01.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:02.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:02 np0005466030 nova_compute[230518]: 2025-10-02 13:30:02.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:03.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:04.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:05 np0005466030 nova_compute[230518]: 2025-10-02 13:30:05.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:05.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:06.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:07 np0005466030 nova_compute[230518]: 2025-10-02 13:30:07.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:07.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:08.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:09.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:10.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:10 np0005466030 nova_compute[230518]: 2025-10-02 13:30:10.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:10 np0005466030 podman[319467]: 2025-10-02 13:30:10.796312286 +0000 UTC m=+0.050383931 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:30:10 np0005466030 podman[319466]: 2025-10-02 13:30:10.796380079 +0000 UTC m=+0.052664424 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:30:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:11.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:12.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:12 np0005466030 nova_compute[230518]: 2025-10-02 13:30:12.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:30:12 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1655923150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #175. Immutable memtables: 0.
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.331564) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 175
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813331634, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 1877, "num_deletes": 257, "total_data_size": 4382210, "memory_usage": 4457920, "flush_reason": "Manual Compaction"}
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #176: started
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813389058, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 176, "file_size": 2871132, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84349, "largest_seqno": 86221, "table_properties": {"data_size": 2863316, "index_size": 4693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16490, "raw_average_key_size": 20, "raw_value_size": 2847513, "raw_average_value_size": 3481, "num_data_blocks": 206, "num_entries": 818, "num_filter_entries": 818, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411651, "oldest_key_time": 1759411651, "file_creation_time": 1759411813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 57570 microseconds, and 5580 cpu microseconds.
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.389142) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #176: 2871132 bytes OK
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.389161) [db/memtable_list.cc:519] [default] Level-0 commit table #176 started
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.401764) [db/memtable_list.cc:722] [default] Level-0 commit table #176: memtable #1 done
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.401808) EVENT_LOG_v1 {"time_micros": 1759411813401798, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.401829) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 4373626, prev total WAL file size 4373626, number of live WAL files 2.
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000172.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.403400) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323730' seq:72057594037927935, type:22 .. '6C6F676D0033353231' seq:0, type:0; will stop at (end)
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [176(2803KB)], [174(10MB)]
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813403446, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [176], "files_L6": [174], "score": -1, "input_data_size": 14378886, "oldest_snapshot_seqno": -1}
Oct  2 09:30:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:13.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #177: 10655 keys, 14241179 bytes, temperature: kUnknown
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813843736, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 177, "file_size": 14241179, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14171699, "index_size": 41719, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26693, "raw_key_size": 281101, "raw_average_key_size": 26, "raw_value_size": 13984813, "raw_average_value_size": 1312, "num_data_blocks": 1592, "num_entries": 10655, "num_filter_entries": 10655, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759411813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 177, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.843983) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 14241179 bytes
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.945330) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 32.7 rd, 32.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 11.0 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(10.0) write-amplify(5.0) OK, records in: 11186, records dropped: 531 output_compression: NoCompression
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.945388) EVENT_LOG_v1 {"time_micros": 1759411813945375, "job": 112, "event": "compaction_finished", "compaction_time_micros": 440362, "compaction_time_cpu_micros": 31450, "output_level": 6, "num_output_files": 1, "total_output_size": 14241179, "num_input_records": 11186, "num_output_records": 10655, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813946115, "job": 112, "event": "table_file_deletion", "file_number": 176}
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000174.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813948172, "job": 112, "event": "table_file_deletion", "file_number": 174}
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.403269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.948290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.948296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.948298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.948300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:30:13 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.948301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:30:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:14.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:15 np0005466030 nova_compute[230518]: 2025-10-02 13:30:15.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:15.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:16.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:17 np0005466030 nova_compute[230518]: 2025-10-02 13:30:17.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:17.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:18.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:19.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:20.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:20 np0005466030 nova_compute[230518]: 2025-10-02 13:30:20.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:21.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:22.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:22 np0005466030 nova_compute[230518]: 2025-10-02 13:30:22.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:30:23.136 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=85, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=84) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:30:23 np0005466030 nova_compute[230518]: 2025-10-02 13:30:23.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:23 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:30:23.138 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:30:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:23.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:24.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:30:25.141 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '85'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:30:25 np0005466030 nova_compute[230518]: 2025-10-02 13:30:25.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:25.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:30:25.990 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:30:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:30:25.991 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:30:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:30:25.991 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:30:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:26.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:27 np0005466030 nova_compute[230518]: 2025-10-02 13:30:27.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:27.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:28.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:29 np0005466030 podman[319504]: 2025-10-02 13:30:29.803116537 +0000 UTC m=+0.054437539 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 09:30:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:30:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:29.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:30:29 np0005466030 podman[319503]: 2025-10-02 13:30:29.867349841 +0000 UTC m=+0.121806312 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 09:30:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:30.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:30 np0005466030 nova_compute[230518]: 2025-10-02 13:30:30.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:31.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:32.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:32 np0005466030 nova_compute[230518]: 2025-10-02 13:30:32.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:33.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:34.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:35 np0005466030 nova_compute[230518]: 2025-10-02 13:30:35.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:35.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:36.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:37 np0005466030 nova_compute[230518]: 2025-10-02 13:30:37.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:37.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:38.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:39.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:40.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:40 np0005466030 nova_compute[230518]: 2025-10-02 13:30:40.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:41 np0005466030 podman[319548]: 2025-10-02 13:30:41.798870788 +0000 UTC m=+0.052662003 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:30:41 np0005466030 podman[319550]: 2025-10-02 13:30:41.804390291 +0000 UTC m=+0.053606373 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 09:30:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:41.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:42.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:42 np0005466030 nova_compute[230518]: 2025-10-02 13:30:42.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:43.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:44 np0005466030 nova_compute[230518]: 2025-10-02 13:30:44.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:44.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:45 np0005466030 nova_compute[230518]: 2025-10-02 13:30:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:45 np0005466030 nova_compute[230518]: 2025-10-02 13:30:45.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:30:45 np0005466030 nova_compute[230518]: 2025-10-02 13:30:45.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:30:45 np0005466030 nova_compute[230518]: 2025-10-02 13:30:45.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:30:45 np0005466030 nova_compute[230518]: 2025-10-02 13:30:45.083 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:30:45 np0005466030 nova_compute[230518]: 2025-10-02 13:30:45.083 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:30:45 np0005466030 nova_compute[230518]: 2025-10-02 13:30:45.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:30:45 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2790356428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:30:45 np0005466030 nova_compute[230518]: 2025-10-02 13:30:45.513 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:30:45 np0005466030 nova_compute[230518]: 2025-10-02 13:30:45.693 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:30:45 np0005466030 nova_compute[230518]: 2025-10-02 13:30:45.695 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4216MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:30:45 np0005466030 nova_compute[230518]: 2025-10-02 13:30:45.695 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:30:45 np0005466030 nova_compute[230518]: 2025-10-02 13:30:45.696 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:30:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:45.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:45 np0005466030 nova_compute[230518]: 2025-10-02 13:30:45.894 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:30:45 np0005466030 nova_compute[230518]: 2025-10-02 13:30:45.894 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:30:45 np0005466030 nova_compute[230518]: 2025-10-02 13:30:45.951 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:30:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:46.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:30:46 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1118045396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:30:46 np0005466030 nova_compute[230518]: 2025-10-02 13:30:46.463 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:30:46 np0005466030 nova_compute[230518]: 2025-10-02 13:30:46.469 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:30:46 np0005466030 nova_compute[230518]: 2025-10-02 13:30:46.490 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:30:46 np0005466030 nova_compute[230518]: 2025-10-02 13:30:46.492 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:30:46 np0005466030 nova_compute[230518]: 2025-10-02 13:30:46.493 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:30:47 np0005466030 nova_compute[230518]: 2025-10-02 13:30:47.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:47.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:48.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:48 np0005466030 nova_compute[230518]: 2025-10-02 13:30:48.494 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:48 np0005466030 nova_compute[230518]: 2025-10-02 13:30:48.495 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:30:49 np0005466030 nova_compute[230518]: 2025-10-02 13:30:49.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:49.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:50.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:50 np0005466030 nova_compute[230518]: 2025-10-02 13:30:50.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:51.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:52.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:52 np0005466030 nova_compute[230518]: 2025-10-02 13:30:52.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:53 np0005466030 nova_compute[230518]: 2025-10-02 13:30:53.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:30:53 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1104312888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:30:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:53.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:54.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:30:55 np0005466030 nova_compute[230518]: 2025-10-02 13:30:55.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:55.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:56 np0005466030 nova_compute[230518]: 2025-10-02 13:30:56.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:56 np0005466030 nova_compute[230518]: 2025-10-02 13:30:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:30:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:56.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:30:57 np0005466030 nova_compute[230518]: 2025-10-02 13:30:57.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:30:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:57.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:58.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:59 np0005466030 nova_compute[230518]: 2025-10-02 13:30:59.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:30:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:30:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:30:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:59.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:30:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:00 np0005466030 nova_compute[230518]: 2025-10-02 13:31:00.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:00 np0005466030 nova_compute[230518]: 2025-10-02 13:31:00.066 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:00 np0005466030 nova_compute[230518]: 2025-10-02 13:31:00.067 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:31:00 np0005466030 nova_compute[230518]: 2025-10-02 13:31:00.067 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:31:00 np0005466030 nova_compute[230518]: 2025-10-02 13:31:00.083 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:31:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:31:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:00.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:31:00 np0005466030 nova_compute[230518]: 2025-10-02 13:31:00.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:00 np0005466030 podman[319634]: 2025-10-02 13:31:00.830489354 +0000 UTC m=+0.063825592 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct  2 09:31:00 np0005466030 podman[319633]: 2025-10-02 13:31:00.890991722 +0000 UTC m=+0.134942754 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:31:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:31:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:01.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:31:01 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #178. Immutable memtables: 0.
Oct  2 09:31:01 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:01.947404) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:31:01 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 178
Oct  2 09:31:01 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411861947432, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 749, "num_deletes": 251, "total_data_size": 1374628, "memory_usage": 1395280, "flush_reason": "Manual Compaction"}
Oct  2 09:31:01 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #179: started
Oct  2 09:31:01 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411861966047, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 179, "file_size": 907628, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 86226, "largest_seqno": 86970, "table_properties": {"data_size": 903979, "index_size": 1492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8226, "raw_average_key_size": 19, "raw_value_size": 896732, "raw_average_value_size": 2119, "num_data_blocks": 65, "num_entries": 423, "num_filter_entries": 423, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411813, "oldest_key_time": 1759411813, "file_creation_time": 1759411861, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:31:01 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 18690 microseconds, and 3354 cpu microseconds.
Oct  2 09:31:01 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:31:01 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:01.966090) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #179: 907628 bytes OK
Oct  2 09:31:01 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:01.966109) [db/memtable_list.cc:519] [default] Level-0 commit table #179 started
Oct  2 09:31:01 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:01.968585) [db/memtable_list.cc:722] [default] Level-0 commit table #179: memtable #1 done
Oct  2 09:31:01 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:01.968595) EVENT_LOG_v1 {"time_micros": 1759411861968592, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:31:01 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:01.968610) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:31:01 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 1370658, prev total WAL file size 1370658, number of live WAL files 2.
Oct  2 09:31:01 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000175.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:31:01 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:01.969225) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Oct  2 09:31:01 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:31:01 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [179(886KB)], [177(13MB)]
Oct  2 09:31:01 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411861969256, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [179], "files_L6": [177], "score": -1, "input_data_size": 15148807, "oldest_snapshot_seqno": -1}
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #180: 10562 keys, 13195834 bytes, temperature: kUnknown
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411862078440, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 180, "file_size": 13195834, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13127875, "index_size": 40454, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26437, "raw_key_size": 279808, "raw_average_key_size": 26, "raw_value_size": 12943411, "raw_average_value_size": 1225, "num_data_blocks": 1532, "num_entries": 10562, "num_filter_entries": 10562, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759411861, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 180, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:02.078700) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 13195834 bytes
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:02.079846) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.7 rd, 120.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.6 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(31.2) write-amplify(14.5) OK, records in: 11078, records dropped: 516 output_compression: NoCompression
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:02.079867) EVENT_LOG_v1 {"time_micros": 1759411862079857, "job": 114, "event": "compaction_finished", "compaction_time_micros": 109252, "compaction_time_cpu_micros": 29832, "output_level": 6, "num_output_files": 1, "total_output_size": 13195834, "num_input_records": 11078, "num_output_records": 10562, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411862080261, "job": 114, "event": "table_file_deletion", "file_number": 179}
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000177.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411862083482, "job": 114, "event": "table_file_deletion", "file_number": 177}
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:01.969146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:02.083544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:02.083549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:02.083550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:02.083552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:02.083554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:31:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:02.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:02.357 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=86, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=85) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:31:02 np0005466030 nova_compute[230518]: 2025-10-02 13:31:02.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:02 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:02.359 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:31:02 np0005466030 nova_compute[230518]: 2025-10-02 13:31:02.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:31:02 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:31:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:03.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:31:04 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:31:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:04.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:05 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:05.364 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '86'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:31:05 np0005466030 nova_compute[230518]: 2025-10-02 13:31:05.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:05.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:31:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:06.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:31:07 np0005466030 nova_compute[230518]: 2025-10-02 13:31:07.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:07.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:31:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:08.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:31:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:09.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:10.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:10 np0005466030 nova_compute[230518]: 2025-10-02 13:31:10.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:11.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:12 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:31:12 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:31:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:12.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:12 np0005466030 nova_compute[230518]: 2025-10-02 13:31:12.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:12 np0005466030 podman[319981]: 2025-10-02 13:31:12.8021991 +0000 UTC m=+0.056310337 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 09:31:12 np0005466030 podman[319980]: 2025-10-02 13:31:12.83022821 +0000 UTC m=+0.083499431 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  2 09:31:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:13.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:31:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:14.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:31:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:15 np0005466030 nova_compute[230518]: 2025-10-02 13:31:15.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:31:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:15.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:31:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:16.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:17 np0005466030 nova_compute[230518]: 2025-10-02 13:31:17.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:17.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:18.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:19.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:20.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:20 np0005466030 nova_compute[230518]: 2025-10-02 13:31:20.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:21.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:22.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:22 np0005466030 nova_compute[230518]: 2025-10-02 13:31:22.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:23.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:24.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:25 np0005466030 nova_compute[230518]: 2025-10-02 13:31:25.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:25.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:25.991 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:31:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:25.991 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:31:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:25.991 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:31:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:26.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:27 np0005466030 nova_compute[230518]: 2025-10-02 13:31:27.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:31:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:27.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:31:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 e428: 3 total, 3 up, 3 in
Oct  2 09:31:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:28.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:29.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:30.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:30 np0005466030 nova_compute[230518]: 2025-10-02 13:31:30.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:31 np0005466030 podman[320022]: 2025-10-02 13:31:31.825507233 +0000 UTC m=+0.076302024 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:31:31 np0005466030 podman[320021]: 2025-10-02 13:31:31.854016578 +0000 UTC m=+0.110229529 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  2 09:31:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:31.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:32.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:32 np0005466030 nova_compute[230518]: 2025-10-02 13:31:32.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:33.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.029 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.029 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.045 2 DEBUG nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.121 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.122 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.130 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.130 2 INFO nova.compute.claims [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  2 09:31:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:34.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.247 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:31:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:31:34 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/541340958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.687 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.693 2 DEBUG nova.compute.provider_tree [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.706 2 DEBUG nova.scheduler.client.report [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.725 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.725 2 DEBUG nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.775 2 DEBUG nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.775 2 DEBUG nova.network.neutron [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.792 2 INFO nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.810 2 DEBUG nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.848 2 INFO nova.virt.block_device [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Booting with volume b03ecca0-5e5d-47a6-a97b-d3273a126768 at /dev/vda#033[00m
Oct  2 09:31:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.983 2 DEBUG os_brick.utils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.985 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.995 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.995 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[ae80cf8b-7c12-4e43-9935-dfb11630735d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:34 np0005466030 nova_compute[230518]: 2025-10-02 13:31:34.997 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:31:35 np0005466030 nova_compute[230518]: 2025-10-02 13:31:35.004 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:31:35 np0005466030 nova_compute[230518]: 2025-10-02 13:31:35.005 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[3c6fbdb0-fb32-40e4-a9a3-3fa9e7c1f623]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:35 np0005466030 nova_compute[230518]: 2025-10-02 13:31:35.006 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:31:35 np0005466030 nova_compute[230518]: 2025-10-02 13:31:35.014 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:31:35 np0005466030 nova_compute[230518]: 2025-10-02 13:31:35.014 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[0c68c66c-a4d1-4714-bf2b-37a5e36f11f5]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:35 np0005466030 nova_compute[230518]: 2025-10-02 13:31:35.015 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[945defd0-a019-4ff6-b0c8-e1deb6c97c96]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:35 np0005466030 nova_compute[230518]: 2025-10-02 13:31:35.016 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:31:35 np0005466030 nova_compute[230518]: 2025-10-02 13:31:35.051 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:31:35 np0005466030 nova_compute[230518]: 2025-10-02 13:31:35.053 2 DEBUG os_brick.initiator.connectors.lightos [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Oct  2 09:31:35 np0005466030 nova_compute[230518]: 2025-10-02 13:31:35.054 2 DEBUG os_brick.initiator.connectors.lightos [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Oct  2 09:31:35 np0005466030 nova_compute[230518]: 2025-10-02 13:31:35.054 2 DEBUG os_brick.initiator.connectors.lightos [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Oct  2 09:31:35 np0005466030 nova_compute[230518]: 2025-10-02 13:31:35.054 2 DEBUG os_brick.utils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Oct  2 09:31:35 np0005466030 nova_compute[230518]: 2025-10-02 13:31:35.054 2 DEBUG nova.virt.block_device [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updating existing volume attachment record: a5d24a42-20c2-4952-8e71-9fac1f9674e2 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Oct  2 09:31:35 np0005466030 nova_compute[230518]: 2025-10-02 13:31:35.091 2 DEBUG nova.policy [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2cb47684d0b34c729e9611e7b3943bed', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '18799a1c93354809911705bb424e673f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  2 09:31:35 np0005466030 nova_compute[230518]: 2025-10-02 13:31:35.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:31:35 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3826796133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:31:35 np0005466030 nova_compute[230518]: 2025-10-02 13:31:35.888 2 DEBUG nova.network.neutron [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Successfully created port: bdd118f1-3b0d-4709-847a-90adbb7b95f6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  2 09:31:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:35.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:36 np0005466030 nova_compute[230518]: 2025-10-02 13:31:36.017 2 DEBUG nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  2 09:31:36 np0005466030 nova_compute[230518]: 2025-10-02 13:31:36.018 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  2 09:31:36 np0005466030 nova_compute[230518]: 2025-10-02 13:31:36.019 2 INFO nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Creating image(s)#033[00m
Oct  2 09:31:36 np0005466030 nova_compute[230518]: 2025-10-02 13:31:36.019 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Oct  2 09:31:36 np0005466030 nova_compute[230518]: 2025-10-02 13:31:36.020 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Ensure instance console log exists: /var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  2 09:31:36 np0005466030 nova_compute[230518]: 2025-10-02 13:31:36.020 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:31:36 np0005466030 nova_compute[230518]: 2025-10-02 13:31:36.020 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:31:36 np0005466030 nova_compute[230518]: 2025-10-02 13:31:36.020 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:31:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:36.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:36 np0005466030 nova_compute[230518]: 2025-10-02 13:31:36.662 2 DEBUG nova.network.neutron [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Successfully updated port: bdd118f1-3b0d-4709-847a-90adbb7b95f6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  2 09:31:36 np0005466030 nova_compute[230518]: 2025-10-02 13:31:36.695 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:31:36 np0005466030 nova_compute[230518]: 2025-10-02 13:31:36.696 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquired lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:31:36 np0005466030 nova_compute[230518]: 2025-10-02 13:31:36.697 2 DEBUG nova.network.neutron [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  2 09:31:36 np0005466030 nova_compute[230518]: 2025-10-02 13:31:36.775 2 DEBUG nova.compute.manager [req-8330c615-a35f-4241-bfa0-42d1543cb572 req-3c1f00f1-d5e6-4bd8-ade5-5025e981fe1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-changed-bdd118f1-3b0d-4709-847a-90adbb7b95f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:31:36 np0005466030 nova_compute[230518]: 2025-10-02 13:31:36.776 2 DEBUG nova.compute.manager [req-8330c615-a35f-4241-bfa0-42d1543cb572 req-3c1f00f1-d5e6-4bd8-ade5-5025e981fe1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Refreshing instance network info cache due to event network-changed-bdd118f1-3b0d-4709-847a-90adbb7b95f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:31:36 np0005466030 nova_compute[230518]: 2025-10-02 13:31:36.777 2 DEBUG oslo_concurrency.lockutils [req-8330c615-a35f-4241-bfa0-42d1543cb572 req-3c1f00f1-d5e6-4bd8-ade5-5025e981fe1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.069 2 DEBUG nova.network.neutron [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.793 2 DEBUG nova.network.neutron [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updating instance_info_cache with network_info: [{"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.839 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Releasing lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.840 2 DEBUG nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Instance network_info: |[{"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.840 2 DEBUG oslo_concurrency.lockutils [req-8330c615-a35f-4241-bfa0-42d1543cb572 req-3c1f00f1-d5e6-4bd8-ade5-5025e981fe1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.840 2 DEBUG nova.network.neutron [req-8330c615-a35f-4241-bfa0-42d1543cb572 req-3c1f00f1-d5e6-4bd8-ade5-5025e981fe1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Refreshing network info cache for port bdd118f1-3b0d-4709-847a-90adbb7b95f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.843 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Start _get_guest_xml network_info=[{"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'delete_on_termination': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b03ecca0-5e5d-47a6-a97b-d3273a126768', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b03ecca0-5e5d-47a6-a97b-d3273a126768', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '6226ed9a-8df2-43ad-b76c-e27e22f8199c', 'attached_at': '', 'detached_at': '', 'volume_id': 'b03ecca0-5e5d-47a6-a97b-d3273a126768', 'serial': 'b03ecca0-5e5d-47a6-a97b-d3273a126768'}, 'boot_index': 0, 'attachment_id': 'a5d24a42-20c2-4952-8e71-9fac1f9674e2', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.848 2 WARNING nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.853 2 DEBUG nova.virt.libvirt.host [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.853 2 DEBUG nova.virt.libvirt.host [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.859 2 DEBUG nova.virt.libvirt.host [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.860 2 DEBUG nova.virt.libvirt.host [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.861 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.861 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.861 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.861 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.862 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.862 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.862 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.862 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.863 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.863 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.863 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.863 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.895 2 DEBUG nova.storage.rbd_utils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 6226ed9a-8df2-43ad-b76c-e27e22f8199c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:31:37 np0005466030 nova_compute[230518]: 2025-10-02 13:31:37.899 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:31:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:37.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:31:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:38.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:31:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  2 09:31:38 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3281413989' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.319 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.355 2 DEBUG nova.virt.libvirt.vif [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:31:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1205762778',display_name='tempest-TestVolumeBootPattern-server-1205762778',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1205762778',id=224,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEXlZ173v52AK5bvxZSCswZD+xa0FluYk6PRSfhpRbnZm8bOdlvZU5KBRnl3O9hs6ON23ziU7Z/FpjnMU4tf7Jp1qDf229EeHe6BdU98WhCvbuPXicABUQh5j2lZgRmPLw==',key_name='tempest-TestVolumeBootPattern-1422258886',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-00z9qk6p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:31:34Z,user_data=None,user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=6226ed9a-8df2-43ad-b76c-e27e22f8199c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.356 2 DEBUG nova.network.os_vif_util [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.357 2 DEBUG nova.network.os_vif_util [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:2e:41,bridge_name='br-int',has_traffic_filtering=True,id=bdd118f1-3b0d-4709-847a-90adbb7b95f6,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdd118f1-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.359 2 DEBUG nova.objects.instance [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lazy-loading 'pci_devices' on Instance uuid 6226ed9a-8df2-43ad-b76c-e27e22f8199c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.374 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] End _get_guest_xml xml=<domain type="kvm">
Oct  2 09:31:38 np0005466030 nova_compute[230518]:  <uuid>6226ed9a-8df2-43ad-b76c-e27e22f8199c</uuid>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:  <name>instance-000000e0</name>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:  <memory>131072</memory>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:  <vcpu>1</vcpu>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:  <metadata>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <nova:name>tempest-TestVolumeBootPattern-server-1205762778</nova:name>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <nova:creationTime>2025-10-02 13:31:37</nova:creationTime>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <nova:flavor name="m1.nano">
Oct  2 09:31:38 np0005466030 nova_compute[230518]:        <nova:memory>128</nova:memory>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:        <nova:disk>1</nova:disk>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:        <nova:swap>0</nova:swap>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:        <nova:ephemeral>0</nova:ephemeral>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:        <nova:vcpus>1</nova:vcpus>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      </nova:flavor>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <nova:owner>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:        <nova:user uuid="2cb47684d0b34c729e9611e7b3943bed">tempest-TestVolumeBootPattern-1344814684-project-member</nova:user>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:        <nova:project uuid="18799a1c93354809911705bb424e673f">tempest-TestVolumeBootPattern-1344814684</nova:project>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      </nova:owner>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <nova:ports>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:        <nova:port uuid="bdd118f1-3b0d-4709-847a-90adbb7b95f6">
Oct  2 09:31:38 np0005466030 nova_compute[230518]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:        </nova:port>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      </nova:ports>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    </nova:instance>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:  </metadata>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:  <sysinfo type="smbios">
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <system>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <entry name="manufacturer">RDO</entry>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <entry name="product">OpenStack Compute</entry>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <entry name="serial">6226ed9a-8df2-43ad-b76c-e27e22f8199c</entry>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <entry name="uuid">6226ed9a-8df2-43ad-b76c-e27e22f8199c</entry>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <entry name="family">Virtual Machine</entry>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    </system>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:  </sysinfo>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:  <os>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <boot dev="hd"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <smbios mode="sysinfo"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:  </os>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:  <features>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <acpi/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <apic/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <vmcoreinfo/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:  </features>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:  <clock offset="utc">
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <timer name="pit" tickpolicy="delay"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <timer name="hpet" present="no"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:  </clock>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:  <cpu mode="custom" match="exact">
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <model>Nehalem</model>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <topology sockets="1" cores="1" threads="1"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:  </cpu>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:  <devices>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <disk type="network" device="cdrom">
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <driver type="raw" cache="none"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="vms/6226ed9a-8df2-43ad-b76c-e27e22f8199c_disk.config">
Oct  2 09:31:38 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:31:38 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <target dev="sda" bus="sata"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <disk type="network" device="disk">
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <source protocol="rbd" name="volumes/volume-b03ecca0-5e5d-47a6-a97b-d3273a126768">
Oct  2 09:31:38 np0005466030 nova_compute[230518]:        <host name="192.168.122.100" port="6789"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:        <host name="192.168.122.102" port="6789"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:        <host name="192.168.122.101" port="6789"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      </source>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <auth username="openstack">
Oct  2 09:31:38 np0005466030 nova_compute[230518]:        <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      </auth>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <target dev="vda" bus="virtio"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <serial>b03ecca0-5e5d-47a6-a97b-d3273a126768</serial>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    </disk>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <interface type="ethernet">
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <mac address="fa:16:3e:8a:2e:41"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <driver name="vhost" rx_queue_size="512"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <mtu size="1442"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <target dev="tapbdd118f1-3b"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    </interface>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <serial type="pty">
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <log file="/var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c/console.log" append="off"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    </serial>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <video>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <model type="virtio"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    </video>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <input type="tablet" bus="usb"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <rng model="virtio">
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <backend model="random">/dev/urandom</backend>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    </rng>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="pci" model="pcie-root-port"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <controller type="usb" index="0"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    <memballoon model="virtio">
Oct  2 09:31:38 np0005466030 nova_compute[230518]:      <stats period="10"/>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:    </memballoon>
Oct  2 09:31:38 np0005466030 nova_compute[230518]:  </devices>
Oct  2 09:31:38 np0005466030 nova_compute[230518]: </domain>
Oct  2 09:31:38 np0005466030 nova_compute[230518]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.376 2 DEBUG nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Preparing to wait for external event network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.376 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.377 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.377 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.377 2 DEBUG nova.virt.libvirt.vif [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:31:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1205762778',display_name='tempest-TestVolumeBootPattern-server-1205762778',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1205762778',id=224,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEXlZ173v52AK5bvxZSCswZD+xa0FluYk6PRSfhpRbnZm8bOdlvZU5KBRnl3O9hs6ON23ziU7Z/FpjnMU4tf7Jp1qDf229EeHe6BdU98WhCvbuPXicABUQh5j2lZgRmPLw==',key_name='tempest-TestVolumeBootPattern-1422258886',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-00z9qk6p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:31:34Z,user_data=None,user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=6226ed9a-8df2-43ad-b76c-e27e22f8199c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.378 2 DEBUG nova.network.os_vif_util [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.378 2 DEBUG nova.network.os_vif_util [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:2e:41,bridge_name='br-int',has_traffic_filtering=True,id=bdd118f1-3b0d-4709-847a-90adbb7b95f6,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdd118f1-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.379 2 DEBUG os_vif [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:2e:41,bridge_name='br-int',has_traffic_filtering=True,id=bdd118f1-3b0d-4709-847a-90adbb7b95f6,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdd118f1-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.380 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.380 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.385 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbdd118f1-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.386 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbdd118f1-3b, col_values=(('external_ids', {'iface-id': 'bdd118f1-3b0d-4709-847a-90adbb7b95f6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8a:2e:41', 'vm-uuid': '6226ed9a-8df2-43ad-b76c-e27e22f8199c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:38 np0005466030 NetworkManager[44960]: <info>  [1759411898.3920] manager: (tapbdd118f1-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.399 2 INFO os_vif [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:2e:41,bridge_name='br-int',has_traffic_filtering=True,id=bdd118f1-3b0d-4709-847a-90adbb7b95f6,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdd118f1-3b')#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.447 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.448 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.448 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No VIF found with MAC fa:16:3e:8a:2e:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.448 2 INFO nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Using config drive#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.475 2 DEBUG nova.storage.rbd_utils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 6226ed9a-8df2-43ad-b76c-e27e22f8199c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.845 2 INFO nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Creating config drive at /var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c/disk.config#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.853 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpobx38oc7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:31:38 np0005466030 nova_compute[230518]: 2025-10-02 13:31:38.994 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpobx38oc7" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:31:39 np0005466030 nova_compute[230518]: 2025-10-02 13:31:39.025 2 DEBUG nova.storage.rbd_utils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 6226ed9a-8df2-43ad-b76c-e27e22f8199c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  2 09:31:39 np0005466030 nova_compute[230518]: 2025-10-02 13:31:39.032 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c/disk.config 6226ed9a-8df2-43ad-b76c-e27e22f8199c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:31:39 np0005466030 nova_compute[230518]: 2025-10-02 13:31:39.093 2 DEBUG nova.network.neutron [req-8330c615-a35f-4241-bfa0-42d1543cb572 req-3c1f00f1-d5e6-4bd8-ade5-5025e981fe1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updated VIF entry in instance network info cache for port bdd118f1-3b0d-4709-847a-90adbb7b95f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:31:39 np0005466030 nova_compute[230518]: 2025-10-02 13:31:39.095 2 DEBUG nova.network.neutron [req-8330c615-a35f-4241-bfa0-42d1543cb572 req-3c1f00f1-d5e6-4bd8-ade5-5025e981fe1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updating instance_info_cache with network_info: [{"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:31:39 np0005466030 nova_compute[230518]: 2025-10-02 13:31:39.112 2 DEBUG oslo_concurrency.lockutils [req-8330c615-a35f-4241-bfa0-42d1543cb572 req-3c1f00f1-d5e6-4bd8-ade5-5025e981fe1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:31:39 np0005466030 nova_compute[230518]: 2025-10-02 13:31:39.266 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c/disk.config 6226ed9a-8df2-43ad-b76c-e27e22f8199c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:31:39 np0005466030 nova_compute[230518]: 2025-10-02 13:31:39.267 2 INFO nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Deleting local config drive /var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c/disk.config because it was imported into RBD.#033[00m
Oct  2 09:31:39 np0005466030 kernel: tapbdd118f1-3b: entered promiscuous mode
Oct  2 09:31:39 np0005466030 NetworkManager[44960]: <info>  [1759411899.3176] manager: (tapbdd118f1-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/412)
Oct  2 09:31:39 np0005466030 ovn_controller[129257]: 2025-10-02T13:31:39Z|00896|binding|INFO|Claiming lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 for this chassis.
Oct  2 09:31:39 np0005466030 ovn_controller[129257]: 2025-10-02T13:31:39Z|00897|binding|INFO|bdd118f1-3b0d-4709-847a-90adbb7b95f6: Claiming fa:16:3e:8a:2e:41 10.100.0.8
Oct  2 09:31:39 np0005466030 nova_compute[230518]: 2025-10-02 13:31:39.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.329 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:2e:41 10.100.0.8'], port_security=['fa:16:3e:8a:2e:41 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6226ed9a-8df2-43ad-b76c-e27e22f8199c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '76b0b52e-400a-4f72-824a-095cd74b612b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bdd118f1-3b0d-4709-847a-90adbb7b95f6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.330 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bdd118f1-3b0d-4709-847a-90adbb7b95f6 in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 bound to our chassis#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.332 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5#033[00m
Oct  2 09:31:39 np0005466030 ovn_controller[129257]: 2025-10-02T13:31:39Z|00898|binding|INFO|Setting lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 ovn-installed in OVS
Oct  2 09:31:39 np0005466030 ovn_controller[129257]: 2025-10-02T13:31:39Z|00899|binding|INFO|Setting lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 up in Southbound
Oct  2 09:31:39 np0005466030 nova_compute[230518]: 2025-10-02 13:31:39.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:39 np0005466030 nova_compute[230518]: 2025-10-02 13:31:39.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:39 np0005466030 systemd-udevd[320207]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.348 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c199a152-8911-4a58-8ef6-8d86e1ef8a5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.349 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap858f2b6f-81 in ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.351 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap858f2b6f-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.351 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7d0924dd-f877-44c4-8104-87924ef103b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.351 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[56b4fe11-2236-402f-ba31-ff49c304ac70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:39 np0005466030 systemd-machined[188247]: New machine qemu-101-instance-000000e0.
Oct  2 09:31:39 np0005466030 NetworkManager[44960]: <info>  [1759411899.3622] device (tapbdd118f1-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  2 09:31:39 np0005466030 NetworkManager[44960]: <info>  [1759411899.3633] device (tapbdd118f1-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.364 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[bb7e2856-6f8a-489a-899b-b8550a8d2e27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:39 np0005466030 systemd[1]: Started Virtual Machine qemu-101-instance-000000e0.
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.389 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[95b5ecc9-d23c-4ed8-a1c2-77a5843ab35e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.418 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[772eae87-263d-4cbe-9e91-fb762431d56b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.423 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e5586408-ef27-4f5a-b2b8-97eb4cc5cd71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:39 np0005466030 NetworkManager[44960]: <info>  [1759411899.4249] manager: (tap858f2b6f-80): new Veth device (/org/freedesktop/NetworkManager/Devices/413)
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.454 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a695ae2a-939b-470f-8c5b-3fb25d88d31c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.457 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e77f2b6b-5425-40f1-b1de-1c72254a4deb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:39 np0005466030 NetworkManager[44960]: <info>  [1759411899.4779] device (tap858f2b6f-80): carrier: link connected
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.487 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d65b6154-b02e-43d7-9504-44df415e8dca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.504 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[68acdd18-4827-4350-8256-4091917b3bc5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858f2b6f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:29:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 976303, 'reachable_time': 15731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320240, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.519 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[db1e94a4-6765-4cdb-bedd-876350ce1778]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:29ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 976303, 'tstamp': 976303}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320241, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.536 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[91b6aee5-a403-4641-b8d8-d22680cd0b1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858f2b6f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:29:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 976303, 'reachable_time': 15731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320242, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.566 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[64e86ea8-dd81-41e0-a449-780d5f4a8b01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:39 np0005466030 nova_compute[230518]: 2025-10-02 13:31:39.615 2 DEBUG nova.compute.manager [req-305f21d6-bc51-4cbc-b1d3-6ac6f7dea937 req-effded93-e31e-47fb-b0f5-93697ca83752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:31:39 np0005466030 nova_compute[230518]: 2025-10-02 13:31:39.616 2 DEBUG oslo_concurrency.lockutils [req-305f21d6-bc51-4cbc-b1d3-6ac6f7dea937 req-effded93-e31e-47fb-b0f5-93697ca83752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:31:39 np0005466030 nova_compute[230518]: 2025-10-02 13:31:39.616 2 DEBUG oslo_concurrency.lockutils [req-305f21d6-bc51-4cbc-b1d3-6ac6f7dea937 req-effded93-e31e-47fb-b0f5-93697ca83752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:31:39 np0005466030 nova_compute[230518]: 2025-10-02 13:31:39.616 2 DEBUG oslo_concurrency.lockutils [req-305f21d6-bc51-4cbc-b1d3-6ac6f7dea937 req-effded93-e31e-47fb-b0f5-93697ca83752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:31:39 np0005466030 nova_compute[230518]: 2025-10-02 13:31:39.616 2 DEBUG nova.compute.manager [req-305f21d6-bc51-4cbc-b1d3-6ac6f7dea937 req-effded93-e31e-47fb-b0f5-93697ca83752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Processing event network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.623 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bafc3bb8-d1e1-40ef-ad22-6710af6d3dc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.624 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap858f2b6f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.625 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.625 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap858f2b6f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:31:39 np0005466030 NetworkManager[44960]: <info>  [1759411899.6277] manager: (tap858f2b6f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Oct  2 09:31:39 np0005466030 kernel: tap858f2b6f-80: entered promiscuous mode
Oct  2 09:31:39 np0005466030 nova_compute[230518]: 2025-10-02 13:31:39.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:39 np0005466030 nova_compute[230518]: 2025-10-02 13:31:39.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.636 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap858f2b6f-80, col_values=(('external_ids', {'iface-id': 'cd468d5a-0c73-498a-8776-3dc2ab63d9cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:31:39 np0005466030 nova_compute[230518]: 2025-10-02 13:31:39.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:39 np0005466030 ovn_controller[129257]: 2025-10-02T13:31:39Z|00900|binding|INFO|Releasing lport cd468d5a-0c73-498a-8776-3dc2ab63d9cf from this chassis (sb_readonly=0)
Oct  2 09:31:39 np0005466030 nova_compute[230518]: 2025-10-02 13:31:39.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.652 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.653 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1cc40855-54bf-4c76-b21f-375f99ace0e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.654 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: global
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    log         /dev/log local0 debug
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    log-tag     haproxy-metadata-proxy-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    user        root
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    group       root
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    maxconn     1024
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    pidfile     /var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    daemon
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: defaults
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    log global
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    mode http
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    option httplog
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    option dontlognull
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    option http-server-close
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    option forwardfor
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    retries                 3
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    timeout http-request    30s
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    timeout connect         30s
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    timeout client          32s
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    timeout server          32s
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    timeout http-keep-alive 30s
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: listen listener
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    bind 169.254.169.254:80
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    server metadata /var/lib/neutron/metadata_proxy
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]:    http-request add-header X-OVN-Network-ID 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  2 09:31:39 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.654 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'env', 'PROCESS_TAG=haproxy-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  2 09:31:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:39.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:40 np0005466030 podman[320317]: 2025-10-02 13:31:40.064163687 +0000 UTC m=+0.083754158 container create 87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:31:40 np0005466030 podman[320317]: 2025-10-02 13:31:40.005063263 +0000 UTC m=+0.024653764 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct  2 09:31:40 np0005466030 systemd[1]: Started libpod-conmon-87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38.scope.
Oct  2 09:31:40 np0005466030 systemd[1]: Started libcrun container.
Oct  2 09:31:40 np0005466030 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca3a05d9c7d9e930b8b6730f8fdbb361ef758594c9818368b42030a951cdf9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  2 09:31:40 np0005466030 podman[320317]: 2025-10-02 13:31:40.155761209 +0000 UTC m=+0.175351710 container init 87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 09:31:40 np0005466030 podman[320317]: 2025-10-02 13:31:40.161715067 +0000 UTC m=+0.181305538 container start 87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct  2 09:31:40 np0005466030 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[320332]: [NOTICE]   (320336) : New worker (320338) forked
Oct  2 09:31:40 np0005466030 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[320332]: [NOTICE]   (320336) : Loading success.
Oct  2 09:31:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:40.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.294 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411900.29387, 6226ed9a-8df2-43ad-b76c-e27e22f8199c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.295 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] VM Started (Lifecycle Event)#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.297 2 DEBUG nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.300 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.304 2 INFO nova.virt.libvirt.driver [-] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Instance spawned successfully.#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.304 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.317 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.319 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.327 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.327 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.328 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.328 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.329 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.329 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.340 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.340 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411900.2949874, 6226ed9a-8df2-43ad-b76c-e27e22f8199c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.340 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] VM Paused (Lifecycle Event)#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.364 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.368 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411900.299782, 6226ed9a-8df2-43ad-b76c-e27e22f8199c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.368 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] VM Resumed (Lifecycle Event)#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.390 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.394 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.406 2 INFO nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Took 4.39 seconds to spawn the instance on the hypervisor.#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.407 2 DEBUG nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.416 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.466 2 INFO nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Took 6.37 seconds to build instance.#033[00m
Oct  2 09:31:40 np0005466030 nova_compute[230518]: 2025-10-02 13:31:40.486 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:31:41 np0005466030 nova_compute[230518]: 2025-10-02 13:31:41.738 2 DEBUG nova.compute.manager [req-19f8b0e3-2ff5-406b-9720-5a3404df7d4c req-4937d782-58a0-4f4d-8fc6-22b3a140c116 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:31:41 np0005466030 nova_compute[230518]: 2025-10-02 13:31:41.738 2 DEBUG oslo_concurrency.lockutils [req-19f8b0e3-2ff5-406b-9720-5a3404df7d4c req-4937d782-58a0-4f4d-8fc6-22b3a140c116 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:31:41 np0005466030 nova_compute[230518]: 2025-10-02 13:31:41.739 2 DEBUG oslo_concurrency.lockutils [req-19f8b0e3-2ff5-406b-9720-5a3404df7d4c req-4937d782-58a0-4f4d-8fc6-22b3a140c116 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:31:41 np0005466030 nova_compute[230518]: 2025-10-02 13:31:41.739 2 DEBUG oslo_concurrency.lockutils [req-19f8b0e3-2ff5-406b-9720-5a3404df7d4c req-4937d782-58a0-4f4d-8fc6-22b3a140c116 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:31:41 np0005466030 nova_compute[230518]: 2025-10-02 13:31:41.739 2 DEBUG nova.compute.manager [req-19f8b0e3-2ff5-406b-9720-5a3404df7d4c req-4937d782-58a0-4f4d-8fc6-22b3a140c116 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] No waiting events found dispatching network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:31:41 np0005466030 nova_compute[230518]: 2025-10-02 13:31:41.740 2 WARNING nova.compute.manager [req-19f8b0e3-2ff5-406b-9720-5a3404df7d4c req-4937d782-58a0-4f4d-8fc6-22b3a140c116 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received unexpected event network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 for instance with vm_state active and task_state None.#033[00m
Oct  2 09:31:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:41.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:42.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:42 np0005466030 nova_compute[230518]: 2025-10-02 13:31:42.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:43 np0005466030 nova_compute[230518]: 2025-10-02 13:31:43.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:43 np0005466030 podman[320348]: 2025-10-02 13:31:43.810629092 +0000 UTC m=+0.057360501 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:31:43 np0005466030 podman[320349]: 2025-10-02 13:31:43.843533704 +0000 UTC m=+0.089387195 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 09:31:43 np0005466030 nova_compute[230518]: 2025-10-02 13:31:43.864 2 DEBUG nova.compute.manager [req-63d4ca32-9e8a-4080-bc0a-99cdf0c5fd0d req-9a91c460-9d7e-46a2-ac08-7fac23fc0f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-changed-bdd118f1-3b0d-4709-847a-90adbb7b95f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:31:43 np0005466030 nova_compute[230518]: 2025-10-02 13:31:43.864 2 DEBUG nova.compute.manager [req-63d4ca32-9e8a-4080-bc0a-99cdf0c5fd0d req-9a91c460-9d7e-46a2-ac08-7fac23fc0f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Refreshing instance network info cache due to event network-changed-bdd118f1-3b0d-4709-847a-90adbb7b95f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:31:43 np0005466030 nova_compute[230518]: 2025-10-02 13:31:43.864 2 DEBUG oslo_concurrency.lockutils [req-63d4ca32-9e8a-4080-bc0a-99cdf0c5fd0d req-9a91c460-9d7e-46a2-ac08-7fac23fc0f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:31:43 np0005466030 nova_compute[230518]: 2025-10-02 13:31:43.864 2 DEBUG oslo_concurrency.lockutils [req-63d4ca32-9e8a-4080-bc0a-99cdf0c5fd0d req-9a91c460-9d7e-46a2-ac08-7fac23fc0f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:31:43 np0005466030 nova_compute[230518]: 2025-10-02 13:31:43.865 2 DEBUG nova.network.neutron [req-63d4ca32-9e8a-4080-bc0a-99cdf0c5fd0d req-9a91c460-9d7e-46a2-ac08-7fac23fc0f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Refreshing network info cache for port bdd118f1-3b0d-4709-847a-90adbb7b95f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:31:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:43.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:31:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:44.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:31:44 np0005466030 nova_compute[230518]: 2025-10-02 13:31:44.850 2 DEBUG nova.network.neutron [req-63d4ca32-9e8a-4080-bc0a-99cdf0c5fd0d req-9a91c460-9d7e-46a2-ac08-7fac23fc0f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updated VIF entry in instance network info cache for port bdd118f1-3b0d-4709-847a-90adbb7b95f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:31:44 np0005466030 nova_compute[230518]: 2025-10-02 13:31:44.851 2 DEBUG nova.network.neutron [req-63d4ca32-9e8a-4080-bc0a-99cdf0c5fd0d req-9a91c460-9d7e-46a2-ac08-7fac23fc0f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updating instance_info_cache with network_info: [{"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:31:44 np0005466030 nova_compute[230518]: 2025-10-02 13:31:44.871 2 DEBUG oslo_concurrency.lockutils [req-63d4ca32-9e8a-4080-bc0a-99cdf0c5fd0d req-9a91c460-9d7e-46a2-ac08-7fac23fc0f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:31:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:45 np0005466030 nova_compute[230518]: 2025-10-02 13:31:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:45 np0005466030 nova_compute[230518]: 2025-10-02 13:31:45.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:45 np0005466030 nova_compute[230518]: 2025-10-02 13:31:45.083 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:31:45 np0005466030 nova_compute[230518]: 2025-10-02 13:31:45.084 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:31:45 np0005466030 nova_compute[230518]: 2025-10-02 13:31:45.084 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:31:45 np0005466030 nova_compute[230518]: 2025-10-02 13:31:45.085 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:31:45 np0005466030 nova_compute[230518]: 2025-10-02 13:31:45.086 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:31:45 np0005466030 ceph-mgr[81282]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct  2 09:31:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:31:45 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/349842219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:31:45 np0005466030 nova_compute[230518]: 2025-10-02 13:31:45.528 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:31:45 np0005466030 nova_compute[230518]: 2025-10-02 13:31:45.819 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000e0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:31:45 np0005466030 nova_compute[230518]: 2025-10-02 13:31:45.820 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000e0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  2 09:31:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct  2 09:31:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:45.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct  2 09:31:45 np0005466030 nova_compute[230518]: 2025-10-02 13:31:45.965 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:31:45 np0005466030 nova_compute[230518]: 2025-10-02 13:31:45.966 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4046MB free_disk=20.987987518310547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:31:45 np0005466030 nova_compute[230518]: 2025-10-02 13:31:45.966 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:31:45 np0005466030 nova_compute[230518]: 2025-10-02 13:31:45.967 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:31:46 np0005466030 nova_compute[230518]: 2025-10-02 13:31:46.038 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 6226ed9a-8df2-43ad-b76c-e27e22f8199c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  2 09:31:46 np0005466030 nova_compute[230518]: 2025-10-02 13:31:46.039 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:31:46 np0005466030 nova_compute[230518]: 2025-10-02 13:31:46.039 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:31:46 np0005466030 nova_compute[230518]: 2025-10-02 13:31:46.107 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:31:46 np0005466030 nova_compute[230518]: 2025-10-02 13:31:46.120 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:31:46 np0005466030 nova_compute[230518]: 2025-10-02 13:31:46.121 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:31:46 np0005466030 nova_compute[230518]: 2025-10-02 13:31:46.133 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:31:46 np0005466030 nova_compute[230518]: 2025-10-02 13:31:46.154 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:31:46 np0005466030 nova_compute[230518]: 2025-10-02 13:31:46.199 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:31:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:46.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:31:46 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3626880089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:31:46 np0005466030 nova_compute[230518]: 2025-10-02 13:31:46.647 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:31:46 np0005466030 nova_compute[230518]: 2025-10-02 13:31:46.652 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:31:46 np0005466030 nova_compute[230518]: 2025-10-02 13:31:46.675 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:31:46 np0005466030 nova_compute[230518]: 2025-10-02 13:31:46.696 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:31:46 np0005466030 nova_compute[230518]: 2025-10-02 13:31:46.696 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:31:47 np0005466030 nova_compute[230518]: 2025-10-02 13:31:47.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:47.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:48.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:48 np0005466030 nova_compute[230518]: 2025-10-02 13:31:48.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:49 np0005466030 nova_compute[230518]: 2025-10-02 13:31:49.696 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:49 np0005466030 nova_compute[230518]: 2025-10-02 13:31:49.697 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:31:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:49.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:50.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:51 np0005466030 nova_compute[230518]: 2025-10-02 13:31:51.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:51.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:31:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:52.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:31:52 np0005466030 nova_compute[230518]: 2025-10-02 13:31:52.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:31:52Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8a:2e:41 10.100.0.8
Oct  2 09:31:52 np0005466030 ovn_controller[129257]: 2025-10-02T13:31:52Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8a:2e:41 10.100.0.8
Oct  2 09:31:53 np0005466030 nova_compute[230518]: 2025-10-02 13:31:53.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:53.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:54 np0005466030 nova_compute[230518]: 2025-10-02 13:31:54.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:54.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:55.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:56.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:57 np0005466030 nova_compute[230518]: 2025-10-02 13:31:57.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:57 np0005466030 nova_compute[230518]: 2025-10-02 13:31:57.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:31:57 np0005466030 nova_compute[230518]: 2025-10-02 13:31:57.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:31:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:57.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:31:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:31:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:58.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:31:58 np0005466030 nova_compute[230518]: 2025-10-02 13:31:58.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:31:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:31:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:31:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:31:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:59.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:32:00 np0005466030 nova_compute[230518]: 2025-10-02 13:32:00.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:00.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:01.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:02 np0005466030 nova_compute[230518]: 2025-10-02 13:32:02.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:02 np0005466030 nova_compute[230518]: 2025-10-02 13:32:02.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:32:02 np0005466030 nova_compute[230518]: 2025-10-02 13:32:02.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:32:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:02.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:02 np0005466030 nova_compute[230518]: 2025-10-02 13:32:02.436 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:32:02 np0005466030 nova_compute[230518]: 2025-10-02 13:32:02.437 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:32:02 np0005466030 nova_compute[230518]: 2025-10-02 13:32:02.437 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  2 09:32:02 np0005466030 nova_compute[230518]: 2025-10-02 13:32:02.437 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6226ed9a-8df2-43ad-b76c-e27e22f8199c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:32:02 np0005466030 nova_compute[230518]: 2025-10-02 13:32:02.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:02 np0005466030 podman[320433]: 2025-10-02 13:32:02.813134901 +0000 UTC m=+0.057680550 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 09:32:02 np0005466030 podman[320432]: 2025-10-02 13:32:02.839612702 +0000 UTC m=+0.087749824 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  2 09:32:03 np0005466030 nova_compute[230518]: 2025-10-02 13:32:03.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:03.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:04.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:05 np0005466030 nova_compute[230518]: 2025-10-02 13:32:05.157 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updating instance_info_cache with network_info: [{"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:32:05 np0005466030 nova_compute[230518]: 2025-10-02 13:32:05.206 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:32:05 np0005466030 nova_compute[230518]: 2025-10-02 13:32:05.206 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  2 09:32:05 np0005466030 nova_compute[230518]: 2025-10-02 13:32:05.207 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:05 np0005466030 nova_compute[230518]: 2025-10-02 13:32:05.207 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:32:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:32:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3625891900' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:32:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:32:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3625891900' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:32:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:05.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:06.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:07 np0005466030 nova_compute[230518]: 2025-10-02 13:32:07.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:32:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:07.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:32:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:08.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:08 np0005466030 nova_compute[230518]: 2025-10-02 13:32:08.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:09 np0005466030 nova_compute[230518]: 2025-10-02 13:32:09.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:32:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:09.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:32:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:32:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:10.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:32:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:11.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:12.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:12 np0005466030 nova_compute[230518]: 2025-10-02 13:32:12.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:12 np0005466030 ovn_controller[129257]: 2025-10-02T13:32:12Z|00901|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Oct  2 09:32:13 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:32:13 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:32:13 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 09:32:13 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 09:32:13 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 09:32:13 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:32:13 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:32:13 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:32:13 np0005466030 nova_compute[230518]: 2025-10-02 13:32:13.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:13.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:14.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:14 np0005466030 podman[320608]: 2025-10-02 13:32:14.830127859 +0000 UTC m=+0.063886056 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 09:32:14 np0005466030 podman[320609]: 2025-10-02 13:32:14.830247832 +0000 UTC m=+0.063205133 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:32:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:15.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:16.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:17 np0005466030 nova_compute[230518]: 2025-10-02 13:32:17.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:17.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.107 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=87, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=86) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.108 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.306 2 DEBUG nova.compute.manager [req-a3ee0081-78cd-4e5b-bccc-fdff12600619 req-5a984056-296a-4f24-8042-871b2bbc3048 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-changed-bdd118f1-3b0d-4709-847a-90adbb7b95f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.307 2 DEBUG nova.compute.manager [req-a3ee0081-78cd-4e5b-bccc-fdff12600619 req-5a984056-296a-4f24-8042-871b2bbc3048 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Refreshing instance network info cache due to event network-changed-bdd118f1-3b0d-4709-847a-90adbb7b95f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.307 2 DEBUG oslo_concurrency.lockutils [req-a3ee0081-78cd-4e5b-bccc-fdff12600619 req-5a984056-296a-4f24-8042-871b2bbc3048 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  2 09:32:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.308 2 DEBUG oslo_concurrency.lockutils [req-a3ee0081-78cd-4e5b-bccc-fdff12600619 req-5a984056-296a-4f24-8042-871b2bbc3048 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  2 09:32:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:18.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.308 2 DEBUG nova.network.neutron [req-a3ee0081-78cd-4e5b-bccc-fdff12600619 req-5a984056-296a-4f24-8042-871b2bbc3048 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Refreshing network info cache for port bdd118f1-3b0d-4709-847a-90adbb7b95f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.373 2 DEBUG oslo_concurrency.lockutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.373 2 DEBUG oslo_concurrency.lockutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.373 2 DEBUG oslo_concurrency.lockutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.374 2 DEBUG oslo_concurrency.lockutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.374 2 DEBUG oslo_concurrency.lockutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.375 2 INFO nova.compute.manager [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Terminating instance#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.376 2 DEBUG nova.compute.manager [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  2 09:32:18 np0005466030 kernel: tapbdd118f1-3b (unregistering): left promiscuous mode
Oct  2 09:32:18 np0005466030 NetworkManager[44960]: <info>  [1759411938.4451] device (tapbdd118f1-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:18 np0005466030 ovn_controller[129257]: 2025-10-02T13:32:18Z|00902|binding|INFO|Releasing lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 from this chassis (sb_readonly=0)
Oct  2 09:32:18 np0005466030 ovn_controller[129257]: 2025-10-02T13:32:18Z|00903|binding|INFO|Setting lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 down in Southbound
Oct  2 09:32:18 np0005466030 ovn_controller[129257]: 2025-10-02T13:32:18Z|00904|binding|INFO|Removing iface tapbdd118f1-3b ovn-installed in OVS
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.461 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:2e:41 10.100.0.8'], port_security=['fa:16:3e:8a:2e:41 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6226ed9a-8df2-43ad-b76c-e27e22f8199c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76b0b52e-400a-4f72-824a-095cd74b612b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bdd118f1-3b0d-4709-847a-90adbb7b95f6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.463 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bdd118f1-3b0d-4709-847a-90adbb7b95f6 in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 unbound from our chassis#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.464 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.466 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e796ecfe-db22-403d-b57b-de3b71681ffa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.466 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 namespace which is not needed anymore#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:18 np0005466030 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000e0.scope: Deactivated successfully.
Oct  2 09:32:18 np0005466030 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000e0.scope: Consumed 14.845s CPU time.
Oct  2 09:32:18 np0005466030 systemd-machined[188247]: Machine qemu-101-instance-000000e0 terminated.
Oct  2 09:32:18 np0005466030 kernel: tapbdd118f1-3b: entered promiscuous mode
Oct  2 09:32:18 np0005466030 NetworkManager[44960]: <info>  [1759411938.6023] manager: (tapbdd118f1-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/415)
Oct  2 09:32:18 np0005466030 systemd-udevd[320655]: Network interface NamePolicy= disabled on kernel command line.
Oct  2 09:32:18 np0005466030 ovn_controller[129257]: 2025-10-02T13:32:18Z|00905|binding|INFO|Claiming lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 for this chassis.
Oct  2 09:32:18 np0005466030 ovn_controller[129257]: 2025-10-02T13:32:18Z|00906|binding|INFO|bdd118f1-3b0d-4709-847a-90adbb7b95f6: Claiming fa:16:3e:8a:2e:41 10.100.0.8
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.616 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:2e:41 10.100.0.8'], port_security=['fa:16:3e:8a:2e:41 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6226ed9a-8df2-43ad-b76c-e27e22f8199c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76b0b52e-400a-4f72-824a-095cd74b612b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bdd118f1-3b0d-4709-847a-90adbb7b95f6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:32:18 np0005466030 kernel: tapbdd118f1-3b (unregistering): left promiscuous mode
Oct  2 09:32:18 np0005466030 ovn_controller[129257]: 2025-10-02T13:32:18Z|00907|binding|INFO|Setting lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 ovn-installed in OVS
Oct  2 09:32:18 np0005466030 ovn_controller[129257]: 2025-10-02T13:32:18Z|00908|binding|INFO|Setting lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 up in Southbound
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:18 np0005466030 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[320332]: [NOTICE]   (320336) : haproxy version is 2.8.14-c23fe91
Oct  2 09:32:18 np0005466030 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[320332]: [NOTICE]   (320336) : path to executable is /usr/sbin/haproxy
Oct  2 09:32:18 np0005466030 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[320332]: [WARNING]  (320336) : Exiting Master process...
Oct  2 09:32:18 np0005466030 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[320332]: [WARNING]  (320336) : Exiting Master process...
Oct  2 09:32:18 np0005466030 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[320332]: [ALERT]    (320336) : Current worker (320338) exited with code 143 (Terminated)
Oct  2 09:32:18 np0005466030 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[320332]: [WARNING]  (320336) : All workers exited. Exiting... (0)
Oct  2 09:32:18 np0005466030 ovn_controller[129257]: 2025-10-02T13:32:18Z|00909|binding|INFO|Releasing lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 from this chassis (sb_readonly=0)
Oct  2 09:32:18 np0005466030 ovn_controller[129257]: 2025-10-02T13:32:18Z|00910|binding|INFO|Setting lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 down in Southbound
Oct  2 09:32:18 np0005466030 ovn_controller[129257]: 2025-10-02T13:32:18Z|00911|binding|INFO|Removing iface tapbdd118f1-3b ovn-installed in OVS
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:18 np0005466030 systemd[1]: libpod-87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38.scope: Deactivated successfully.
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:18 np0005466030 podman[320672]: 2025-10-02 13:32:18.643623017 +0000 UTC m=+0.067279622 container died 87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.645 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:2e:41 10.100.0.8'], port_security=['fa:16:3e:8a:2e:41 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6226ed9a-8df2-43ad-b76c-e27e22f8199c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76b0b52e-400a-4f72-824a-095cd74b612b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bdd118f1-3b0d-4709-847a-90adbb7b95f6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.647 2 INFO nova.virt.libvirt.driver [-] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Instance destroyed successfully.#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.648 2 DEBUG nova.objects.instance [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lazy-loading 'resources' on Instance uuid 6226ed9a-8df2-43ad-b76c-e27e22f8199c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.665 2 DEBUG nova.virt.libvirt.vif [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:31:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1205762778',display_name='tempest-TestVolumeBootPattern-server-1205762778',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1205762778',id=224,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEXlZ173v52AK5bvxZSCswZD+xa0FluYk6PRSfhpRbnZm8bOdlvZU5KBRnl3O9hs6ON23ziU7Z/FpjnMU4tf7Jp1qDf229EeHe6BdU98WhCvbuPXicABUQh5j2lZgRmPLw==',key_name='tempest-TestVolumeBootPattern-1422258886',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:31:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-00z9qk6p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:31:40Z,user_data=None,user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=6226ed9a-8df2-43ad-b76c-e27e22f8199c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.665 2 DEBUG nova.network.os_vif_util [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.666 2 DEBUG nova.network.os_vif_util [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8a:2e:41,bridge_name='br-int',has_traffic_filtering=True,id=bdd118f1-3b0d-4709-847a-90adbb7b95f6,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdd118f1-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.666 2 DEBUG os_vif [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8a:2e:41,bridge_name='br-int',has_traffic_filtering=True,id=bdd118f1-3b0d-4709-847a-90adbb7b95f6,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdd118f1-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.668 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbdd118f1-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.673 2 INFO os_vif [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8a:2e:41,bridge_name='br-int',has_traffic_filtering=True,id=bdd118f1-3b0d-4709-847a-90adbb7b95f6,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdd118f1-3b')#033[00m
Oct  2 09:32:18 np0005466030 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38-userdata-shm.mount: Deactivated successfully.
Oct  2 09:32:18 np0005466030 systemd[1]: var-lib-containers-storage-overlay-2ca3a05d9c7d9e930b8b6730f8fdbb361ef758594c9818368b42030a951cdf9e-merged.mount: Deactivated successfully.
Oct  2 09:32:18 np0005466030 podman[320672]: 2025-10-02 13:32:18.692073167 +0000 UTC m=+0.115729762 container cleanup 87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:32:18 np0005466030 systemd[1]: libpod-conmon-87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38.scope: Deactivated successfully.
Oct  2 09:32:18 np0005466030 podman[320717]: 2025-10-02 13:32:18.781437859 +0000 UTC m=+0.056450231 container remove 87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.790 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[588c42a1-476a-4ae1-993a-83dc859fc4e4]: (4, ('Thu Oct  2 01:32:18 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 (87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38)\n87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38\nThu Oct  2 01:32:18 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 (87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38)\n87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.794 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[323267e5-ac8c-4931-86a9-842042f9b1fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.796 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap858f2b6f-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:18 np0005466030 kernel: tap858f2b6f-80: left promiscuous mode
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.810 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[35a84186-1729-495a-bfb7-555094e0b441]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.848 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[324a0839-ee22-43a2-bb5d-36afe7614ff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.850 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[537368a6-d8c8-4d4c-b6db-dd2091320a0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.872 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[90597b1d-584c-4c4e-833f-9fdfa323f0b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 976297, 'reachable_time': 19626, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320734, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:32:18 np0005466030 systemd[1]: run-netns-ovnmeta\x2d858f2b6f\x2d8fe4\x2d471b\x2d981e\x2d5d0b08d2f4c5.mount: Deactivated successfully.
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.880 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.881 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[f04c812d-5ea0-4e89-a71e-af4bc33f2e9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.882 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bdd118f1-3b0d-4709-847a-90adbb7b95f6 in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 unbound from our chassis#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.883 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.884 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ac0c2fc5-2f25-4e68-8525-f7db97a5a5cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.885 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bdd118f1-3b0d-4709-847a-90adbb7b95f6 in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 unbound from our chassis#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.886 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  2 09:32:18 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.887 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9c0f28-33db-4936-9bb4-b6b51173cce3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.904 2 INFO nova.virt.libvirt.driver [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Deleting instance files /var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c_del#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.905 2 INFO nova.virt.libvirt.driver [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Deletion of /var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c_del complete#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.975 2 INFO nova.compute.manager [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.975 2 DEBUG oslo.service.loopingcall [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.976 2 DEBUG nova.compute.manager [-] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  2 09:32:18 np0005466030 nova_compute[230518]: 2025-10-02 13:32:18.976 2 DEBUG nova.network.neutron [-] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  2 09:32:19 np0005466030 nova_compute[230518]: 2025-10-02 13:32:19.324 2 DEBUG nova.compute.manager [req-6ee5a586-0f65-4154-9735-fc8e7a679ede req-504cca75-2da4-4e07-8438-e33fc3cbb8c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-vif-unplugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:32:19 np0005466030 nova_compute[230518]: 2025-10-02 13:32:19.325 2 DEBUG oslo_concurrency.lockutils [req-6ee5a586-0f65-4154-9735-fc8e7a679ede req-504cca75-2da4-4e07-8438-e33fc3cbb8c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:32:19 np0005466030 nova_compute[230518]: 2025-10-02 13:32:19.325 2 DEBUG oslo_concurrency.lockutils [req-6ee5a586-0f65-4154-9735-fc8e7a679ede req-504cca75-2da4-4e07-8438-e33fc3cbb8c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:32:19 np0005466030 nova_compute[230518]: 2025-10-02 13:32:19.327 2 DEBUG oslo_concurrency.lockutils [req-6ee5a586-0f65-4154-9735-fc8e7a679ede req-504cca75-2da4-4e07-8438-e33fc3cbb8c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:32:19 np0005466030 nova_compute[230518]: 2025-10-02 13:32:19.327 2 DEBUG nova.compute.manager [req-6ee5a586-0f65-4154-9735-fc8e7a679ede req-504cca75-2da4-4e07-8438-e33fc3cbb8c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] No waiting events found dispatching network-vif-unplugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:32:19 np0005466030 nova_compute[230518]: 2025-10-02 13:32:19.328 2 DEBUG nova.compute.manager [req-6ee5a586-0f65-4154-9735-fc8e7a679ede req-504cca75-2da4-4e07-8438-e33fc3cbb8c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-vif-unplugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  2 09:32:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:32:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:32:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:19.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:20.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:20 np0005466030 nova_compute[230518]: 2025-10-02 13:32:20.338 2 DEBUG nova.network.neutron [-] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:32:20 np0005466030 nova_compute[230518]: 2025-10-02 13:32:20.356 2 INFO nova.compute.manager [-] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Took 1.38 seconds to deallocate network for instance.#033[00m
Oct  2 09:32:20 np0005466030 nova_compute[230518]: 2025-10-02 13:32:20.576 2 INFO nova.compute.manager [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Took 0.22 seconds to detach 1 volumes for instance.#033[00m
Oct  2 09:32:20 np0005466030 nova_compute[230518]: 2025-10-02 13:32:20.614 2 DEBUG oslo_concurrency.lockutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:32:20 np0005466030 nova_compute[230518]: 2025-10-02 13:32:20.615 2 DEBUG oslo_concurrency.lockutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:32:20 np0005466030 nova_compute[230518]: 2025-10-02 13:32:20.656 2 DEBUG oslo_concurrency.processutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:32:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:32:21 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3910490391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.097 2 DEBUG oslo_concurrency.processutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.106 2 DEBUG nova.compute.provider_tree [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.136 2 DEBUG nova.scheduler.client.report [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.172 2 DEBUG oslo_concurrency.lockutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.193 2 DEBUG nova.network.neutron [req-a3ee0081-78cd-4e5b-bccc-fdff12600619 req-5a984056-296a-4f24-8042-871b2bbc3048 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updated VIF entry in instance network info cache for port bdd118f1-3b0d-4709-847a-90adbb7b95f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.194 2 DEBUG nova.network.neutron [req-a3ee0081-78cd-4e5b-bccc-fdff12600619 req-5a984056-296a-4f24-8042-871b2bbc3048 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updating instance_info_cache with network_info: [{"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.196 2 INFO nova.scheduler.client.report [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Deleted allocations for instance 6226ed9a-8df2-43ad-b76c-e27e22f8199c#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.212 2 DEBUG oslo_concurrency.lockutils [req-a3ee0081-78cd-4e5b-bccc-fdff12600619 req-5a984056-296a-4f24-8042-871b2bbc3048 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.257 2 DEBUG oslo_concurrency.lockutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.395 2 DEBUG nova.compute.manager [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.396 2 DEBUG oslo_concurrency.lockutils [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.396 2 DEBUG oslo_concurrency.lockutils [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.396 2 DEBUG oslo_concurrency.lockutils [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.396 2 DEBUG nova.compute.manager [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] No waiting events found dispatching network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.397 2 WARNING nova.compute.manager [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received unexpected event network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.397 2 DEBUG nova.compute.manager [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.397 2 DEBUG oslo_concurrency.lockutils [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.397 2 DEBUG oslo_concurrency.lockutils [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.398 2 DEBUG oslo_concurrency.lockutils [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.398 2 DEBUG nova.compute.manager [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] No waiting events found dispatching network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.398 2 WARNING nova.compute.manager [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received unexpected event network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 for instance with vm_state deleted and task_state None.#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.398 2 DEBUG nova.compute.manager [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-vif-deleted-bdd118f1-3b0d-4709-847a-90adbb7b95f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.398 2 INFO nova.compute.manager [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Neutron deleted interface bdd118f1-3b0d-4709-847a-90adbb7b95f6; detaching it from the instance and deleting it from the info cache#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.399 2 DEBUG nova.network.neutron [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Oct  2 09:32:21 np0005466030 nova_compute[230518]: 2025-10-02 13:32:21.401 2 DEBUG nova.compute.manager [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Detach interface failed, port_id=bdd118f1-3b0d-4709-847a-90adbb7b95f6, reason: Instance 6226ed9a-8df2-43ad-b76c-e27e22f8199c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  2 09:32:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:21.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:22.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:22 np0005466030 nova_compute[230518]: 2025-10-02 13:32:22.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:23 np0005466030 nova_compute[230518]: 2025-10-02 13:32:23.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.002999973s ======
Oct  2 09:32:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:23.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002999973s
Oct  2 09:32:24 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:24.110 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '87'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:32:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:24.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e429 e429: 3 total, 3 up, 3 in
Oct  2 09:32:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:25.992 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:32:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:25.992 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:32:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:32:25.992 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:32:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:32:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:26.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:32:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:26.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:27 np0005466030 nova_compute[230518]: 2025-10-02 13:32:27.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:28.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:28.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:28 np0005466030 nova_compute[230518]: 2025-10-02 13:32:28.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:32:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:30.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:32:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:30.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:32.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:32.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:32 np0005466030 nova_compute[230518]: 2025-10-02 13:32:32.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 e430: 3 total, 3 up, 3 in
Oct  2 09:32:33 np0005466030 nova_compute[230518]: 2025-10-02 13:32:33.646 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759411938.6447265, 6226ed9a-8df2-43ad-b76c-e27e22f8199c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  2 09:32:33 np0005466030 nova_compute[230518]: 2025-10-02 13:32:33.647 2 INFO nova.compute.manager [-] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] VM Stopped (Lifecycle Event)#033[00m
Oct  2 09:32:33 np0005466030 nova_compute[230518]: 2025-10-02 13:32:33.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:33 np0005466030 nova_compute[230518]: 2025-10-02 13:32:33.697 2 DEBUG nova.compute.manager [None req-5fab88c1-4008-4bfd-8fac-4a7152f847dd - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  2 09:32:33 np0005466030 podman[320809]: 2025-10-02 13:32:33.803708978 +0000 UTC m=+0.048184822 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  2 09:32:33 np0005466030 podman[320808]: 2025-10-02 13:32:33.859761605 +0000 UTC m=+0.105858109 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  2 09:32:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:34.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:34.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:32:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:36.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:32:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:36.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:37 np0005466030 nova_compute[230518]: 2025-10-02 13:32:37.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:38.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:38.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:38 np0005466030 nova_compute[230518]: 2025-10-02 13:32:38.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:32:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:40.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:32:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:40.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:41 np0005466030 nova_compute[230518]: 2025-10-02 13:32:41.066 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:41 np0005466030 nova_compute[230518]: 2025-10-02 13:32:41.066 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:32:41 np0005466030 nova_compute[230518]: 2025-10-02 13:32:41.082 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:32:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:42.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:42.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:42 np0005466030 nova_compute[230518]: 2025-10-02 13:32:42.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:43 np0005466030 nova_compute[230518]: 2025-10-02 13:32:43.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:44.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:44 np0005466030 nova_compute[230518]: 2025-10-02 13:32:44.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:44 np0005466030 nova_compute[230518]: 2025-10-02 13:32:44.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:32:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:44.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:32:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:45 np0005466030 nova_compute[230518]: 2025-10-02 13:32:45.068 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:45 np0005466030 podman[320855]: 2025-10-02 13:32:45.798548603 +0000 UTC m=+0.053098705 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3)
Oct  2 09:32:45 np0005466030 podman[320856]: 2025-10-02 13:32:45.800639439 +0000 UTC m=+0.053743335 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 09:32:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:46.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:32:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:46.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:32:47 np0005466030 nova_compute[230518]: 2025-10-02 13:32:47.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:47 np0005466030 nova_compute[230518]: 2025-10-02 13:32:47.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:32:47 np0005466030 nova_compute[230518]: 2025-10-02 13:32:47.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:32:47 np0005466030 nova_compute[230518]: 2025-10-02 13:32:47.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:32:47 np0005466030 nova_compute[230518]: 2025-10-02 13:32:47.095 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:32:47 np0005466030 nova_compute[230518]: 2025-10-02 13:32:47.095 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:32:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:32:47 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3673655604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:32:47 np0005466030 nova_compute[230518]: 2025-10-02 13:32:47.526 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:32:47 np0005466030 nova_compute[230518]: 2025-10-02 13:32:47.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:47 np0005466030 nova_compute[230518]: 2025-10-02 13:32:47.690 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:32:47 np0005466030 nova_compute[230518]: 2025-10-02 13:32:47.691 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4177MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:32:47 np0005466030 nova_compute[230518]: 2025-10-02 13:32:47.691 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:32:47 np0005466030 nova_compute[230518]: 2025-10-02 13:32:47.692 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:32:47 np0005466030 nova_compute[230518]: 2025-10-02 13:32:47.824 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:32:47 np0005466030 nova_compute[230518]: 2025-10-02 13:32:47.824 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:32:47 np0005466030 nova_compute[230518]: 2025-10-02 13:32:47.874 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:32:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:48.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:32:48 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3453873877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:32:48 np0005466030 nova_compute[230518]: 2025-10-02 13:32:48.316 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:32:48 np0005466030 nova_compute[230518]: 2025-10-02 13:32:48.321 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:32:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:32:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:48.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:32:48 np0005466030 nova_compute[230518]: 2025-10-02 13:32:48.381 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:32:48 np0005466030 nova_compute[230518]: 2025-10-02 13:32:48.494 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:32:48 np0005466030 nova_compute[230518]: 2025-10-02 13:32:48.495 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:32:48 np0005466030 nova_compute[230518]: 2025-10-02 13:32:48.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:49 np0005466030 nova_compute[230518]: 2025-10-02 13:32:49.495 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:49 np0005466030 nova_compute[230518]: 2025-10-02 13:32:49.496 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:32:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:50.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:50.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:52.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:52.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:52 np0005466030 nova_compute[230518]: 2025-10-02 13:32:52.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:53 np0005466030 nova_compute[230518]: 2025-10-02 13:32:53.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:53 np0005466030 nova_compute[230518]: 2025-10-02 13:32:53.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:32:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:54.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:32:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:54.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:32:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:56.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:56 np0005466030 nova_compute[230518]: 2025-10-02 13:32:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:56.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:57 np0005466030 nova_compute[230518]: 2025-10-02 13:32:57.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:57 np0005466030 nova_compute[230518]: 2025-10-02 13:32:57.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:58.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:58 np0005466030 nova_compute[230518]: 2025-10-02 13:32:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:32:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:32:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:32:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:58.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:32:58 np0005466030 nova_compute[230518]: 2025-10-02 13:32:58.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:32:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:00.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:00.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:01 np0005466030 nova_compute[230518]: 2025-10-02 13:33:01.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:02 np0005466030 nova_compute[230518]: 2025-10-02 13:33:02.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:02.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:02.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:02 np0005466030 nova_compute[230518]: 2025-10-02 13:33:02.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:03 np0005466030 nova_compute[230518]: 2025-10-02 13:33:03.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:04 np0005466030 nova_compute[230518]: 2025-10-02 13:33:04.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:04 np0005466030 nova_compute[230518]: 2025-10-02 13:33:04.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:33:04 np0005466030 nova_compute[230518]: 2025-10-02 13:33:04.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:33:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:04.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:04 np0005466030 nova_compute[230518]: 2025-10-02 13:33:04.093 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:33:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:04.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:04 np0005466030 podman[320941]: 2025-10-02 13:33:04.79802059 +0000 UTC m=+0.049619376 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:33:04 np0005466030 podman[320940]: 2025-10-02 13:33:04.833258654 +0000 UTC m=+0.086696029 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  2 09:33:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:06.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:06.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:07 np0005466030 nova_compute[230518]: 2025-10-02 13:33:07.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:08.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:08.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:08 np0005466030 nova_compute[230518]: 2025-10-02 13:33:08.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:10.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:10.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:12.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:12.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:12 np0005466030 nova_compute[230518]: 2025-10-02 13:33:12.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:13 np0005466030 nova_compute[230518]: 2025-10-02 13:33:13.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:14.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:14.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:16.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:16.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:16 np0005466030 podman[320984]: 2025-10-02 13:33:16.824187107 +0000 UTC m=+0.069103797 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:33:16 np0005466030 podman[320983]: 2025-10-02 13:33:16.843236025 +0000 UTC m=+0.092191182 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:33:17 np0005466030 nova_compute[230518]: 2025-10-02 13:33:17.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:18.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:18.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:18 np0005466030 nova_compute[230518]: 2025-10-02 13:33:18.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:19 np0005466030 ovn_controller[129257]: 2025-10-02T13:33:19Z|00912|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct  2 09:33:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:20.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:20 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:33:20 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:33:20 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:33:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:20.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:22.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:22.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:22 np0005466030 nova_compute[230518]: 2025-10-02 13:33:22.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:23 np0005466030 nova_compute[230518]: 2025-10-02 13:33:23.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:24.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:24.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:33:25.994 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:33:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:33:25.995 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:33:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:33:25.995 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:33:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:26.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:26.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:27 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:33:27 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:33:27 np0005466030 nova_compute[230518]: 2025-10-02 13:33:27.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:28.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:28.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:28 np0005466030 nova_compute[230518]: 2025-10-02 13:33:28.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:30.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:30.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:32.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:32.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:32 np0005466030 nova_compute[230518]: 2025-10-02 13:33:32.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:33 np0005466030 nova_compute[230518]: 2025-10-02 13:33:33.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:34.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:34.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:35 np0005466030 podman[321202]: 2025-10-02 13:33:35.806145498 +0000 UTC m=+0.044267149 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  2 09:33:35 np0005466030 podman[321201]: 2025-10-02 13:33:35.835773717 +0000 UTC m=+0.084921244 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:33:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:36.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:36.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:37 np0005466030 nova_compute[230518]: 2025-10-02 13:33:37.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:38.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:38.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:38 np0005466030 nova_compute[230518]: 2025-10-02 13:33:38.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:40.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:40.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:42.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:42.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:42 np0005466030 nova_compute[230518]: 2025-10-02 13:33:42.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:43 np0005466030 nova_compute[230518]: 2025-10-02 13:33:43.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:44.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:44.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:45 np0005466030 nova_compute[230518]: 2025-10-02 13:33:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:46.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:46.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:47 np0005466030 nova_compute[230518]: 2025-10-02 13:33:47.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:47 np0005466030 podman[321245]: 2025-10-02 13:33:47.791066719 +0000 UTC m=+0.048083088 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 09:33:47 np0005466030 podman[321246]: 2025-10-02 13:33:47.797054167 +0000 UTC m=+0.050878457 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 09:33:48 np0005466030 nova_compute[230518]: 2025-10-02 13:33:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:48 np0005466030 nova_compute[230518]: 2025-10-02 13:33:48.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:33:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:48.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:48.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:48 np0005466030 nova_compute[230518]: 2025-10-02 13:33:48.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:49 np0005466030 nova_compute[230518]: 2025-10-02 13:33:49.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:49 np0005466030 nova_compute[230518]: 2025-10-02 13:33:49.087 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:33:49 np0005466030 nova_compute[230518]: 2025-10-02 13:33:49.088 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:33:49 np0005466030 nova_compute[230518]: 2025-10-02 13:33:49.088 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:33:49 np0005466030 nova_compute[230518]: 2025-10-02 13:33:49.088 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:33:49 np0005466030 nova_compute[230518]: 2025-10-02 13:33:49.088 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:33:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:33:49 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3420975852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:33:49 np0005466030 nova_compute[230518]: 2025-10-02 13:33:49.556 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:33:49 np0005466030 nova_compute[230518]: 2025-10-02 13:33:49.748 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:33:49 np0005466030 nova_compute[230518]: 2025-10-02 13:33:49.750 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4175MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:33:49 np0005466030 nova_compute[230518]: 2025-10-02 13:33:49.750 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:33:49 np0005466030 nova_compute[230518]: 2025-10-02 13:33:49.750 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:33:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:50.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:50.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:50 np0005466030 nova_compute[230518]: 2025-10-02 13:33:50.724 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:33:50 np0005466030 nova_compute[230518]: 2025-10-02 13:33:50.725 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:33:50 np0005466030 nova_compute[230518]: 2025-10-02 13:33:50.784 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:33:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:33:51 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/796460566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:33:51 np0005466030 nova_compute[230518]: 2025-10-02 13:33:51.237 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:33:51 np0005466030 nova_compute[230518]: 2025-10-02 13:33:51.243 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:33:51 np0005466030 nova_compute[230518]: 2025-10-02 13:33:51.269 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:33:51 np0005466030 nova_compute[230518]: 2025-10-02 13:33:51.271 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:33:51 np0005466030 nova_compute[230518]: 2025-10-02 13:33:51.272 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:33:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:52.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:52.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:52 np0005466030 nova_compute[230518]: 2025-10-02 13:33:52.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:53 np0005466030 nova_compute[230518]: 2025-10-02 13:33:53.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:54.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:54.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:33:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:56.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:56 np0005466030 nova_compute[230518]: 2025-10-02 13:33:56.272 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:56.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:57 np0005466030 nova_compute[230518]: 2025-10-02 13:33:57.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:57 np0005466030 nova_compute[230518]: 2025-10-02 13:33:57.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:58 np0005466030 nova_compute[230518]: 2025-10-02 13:33:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:33:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:58.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #181. Immutable memtables: 0.
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.334977) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 181
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038335011, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 1978, "num_deletes": 252, "total_data_size": 4780865, "memory_usage": 4836856, "flush_reason": "Manual Compaction"}
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #182: started
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038344773, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 182, "file_size": 1865443, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 86975, "largest_seqno": 88948, "table_properties": {"data_size": 1859405, "index_size": 3048, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16082, "raw_average_key_size": 21, "raw_value_size": 1845919, "raw_average_value_size": 2422, "num_data_blocks": 136, "num_entries": 762, "num_filter_entries": 762, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411862, "oldest_key_time": 1759411862, "file_creation_time": 1759412038, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 9843 microseconds, and 4910 cpu microseconds.
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.344819) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #182: 1865443 bytes OK
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.344836) [db/memtable_list.cc:519] [default] Level-0 commit table #182 started
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.346122) [db/memtable_list.cc:722] [default] Level-0 commit table #182: memtable #1 done
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.346134) EVENT_LOG_v1 {"time_micros": 1759412038346131, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.346148) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 4771927, prev total WAL file size 4771927, number of live WAL files 2.
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000178.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.347038) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303038' seq:72057594037927935, type:22 .. '6D6772737461740033323630' seq:0, type:0; will stop at (end)
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [182(1821KB)], [180(12MB)]
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038347065, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [182], "files_L6": [180], "score": -1, "input_data_size": 15061277, "oldest_snapshot_seqno": -1}
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #183: 10883 keys, 12449823 bytes, temperature: kUnknown
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038405154, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 183, "file_size": 12449823, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12381988, "index_size": 39498, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27269, "raw_key_size": 286752, "raw_average_key_size": 26, "raw_value_size": 12194370, "raw_average_value_size": 1120, "num_data_blocks": 1496, "num_entries": 10883, "num_filter_entries": 10883, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759412038, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 183, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.405445) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 12449823 bytes
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.406561) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 259.0 rd, 214.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 12.6 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(14.7) write-amplify(6.7) OK, records in: 11324, records dropped: 441 output_compression: NoCompression
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.406715) EVENT_LOG_v1 {"time_micros": 1759412038406699, "job": 116, "event": "compaction_finished", "compaction_time_micros": 58153, "compaction_time_cpu_micros": 27565, "output_level": 6, "num_output_files": 1, "total_output_size": 12449823, "num_input_records": 11324, "num_output_records": 10883, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038407289, "job": 116, "event": "table_file_deletion", "file_number": 182}
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000180.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038410119, "job": 116, "event": "table_file_deletion", "file_number": 180}
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.347006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.410190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.410195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.410197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.410199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:33:58 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.410201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:33:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:33:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:33:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:58.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:33:58 np0005466030 nova_compute[230518]: 2025-10-02 13:33:58.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:33:59 np0005466030 nova_compute[230518]: 2025-10-02 13:33:59.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:33:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:34:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:00.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:34:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:00.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:01 np0005466030 nova_compute[230518]: 2025-10-02 13:34:01.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:02 np0005466030 systemd-logind[795]: New session 62 of user zuul.
Oct  2 09:34:02 np0005466030 systemd[1]: Started Session 62 of User zuul.
Oct  2 09:34:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:02.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:34:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:02.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:34:02 np0005466030 nova_compute[230518]: 2025-10-02 13:34:02.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:03 np0005466030 nova_compute[230518]: 2025-10-02 13:34:03.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:04.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:34:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:04.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:34:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:06 np0005466030 nova_compute[230518]: 2025-10-02 13:34:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:06 np0005466030 nova_compute[230518]: 2025-10-02 13:34:06.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:34:06 np0005466030 nova_compute[230518]: 2025-10-02 13:34:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:34:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:06.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:06 np0005466030 nova_compute[230518]: 2025-10-02 13:34:06.146 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:34:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:06.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:06 np0005466030 podman[321558]: 2025-10-02 13:34:06.866726665 +0000 UTC m=+0.104392603 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:34:06 np0005466030 podman[321557]: 2025-10-02 13:34:06.886710322 +0000 UTC m=+0.124375730 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  2 09:34:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  2 09:34:07 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3064877731' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  2 09:34:07 np0005466030 nova_compute[230518]: 2025-10-02 13:34:07.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:08.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:08.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:08 np0005466030 nova_compute[230518]: 2025-10-02 13:34:08.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:10.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:10 np0005466030 ovs-vsctl[321660]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  2 09:34:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:10.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:11 np0005466030 virtqemud[230067]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  2 09:34:11 np0005466030 virtqemud[230067]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  2 09:34:11 np0005466030 virtqemud[230067]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  2 09:34:11 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: cache status {prefix=cache status} (starting...)
Oct  2 09:34:11 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:34:11 np0005466030 lvm[321974]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 09:34:11 np0005466030 lvm[321974]: VG ceph_vg0 finished
Oct  2 09:34:11 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: client ls {prefix=client ls} (starting...)
Oct  2 09:34:11 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:34:12 np0005466030 kernel: block loop3: the capability attribute has been deprecated.
Oct  2 09:34:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:12.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:34:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:12.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:34:12 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: damage ls {prefix=damage ls} (starting...)
Oct  2 09:34:12 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:34:12 np0005466030 nova_compute[230518]: 2025-10-02 13:34:12.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:12 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump loads {prefix=dump loads} (starting...)
Oct  2 09:34:12 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:34:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct  2 09:34:12 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/535649708' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  2 09:34:12 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  2 09:34:12 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:34:12 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  2 09:34:12 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:34:13 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  2 09:34:13 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:34:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 09:34:13 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1976143621' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 09:34:13 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  2 09:34:13 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:34:13 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  2 09:34:13 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:34:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct  2 09:34:13 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/42129484' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  2 09:34:13 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  2 09:34:13 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:34:13 np0005466030 nova_compute[230518]: 2025-10-02 13:34:13.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:13 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: ops {prefix=ops} (starting...)
Oct  2 09:34:13 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:34:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct  2 09:34:13 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2805152507' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  2 09:34:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct  2 09:34:13 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2149402501' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  2 09:34:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:34:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:14.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:34:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  2 09:34:14 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2900548446' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  2 09:34:14 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: session ls {prefix=session ls} (starting...)
Oct  2 09:34:14 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:34:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:14.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:14 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: status {prefix=status} (starting...)
Oct  2 09:34:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  2 09:34:14 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1477980080' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  2 09:34:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct  2 09:34:15 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/731540033' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  2 09:34:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  2 09:34:15 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1353089157' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  2 09:34:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct  2 09:34:15 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/966138210' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  2 09:34:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  2 09:34:15 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2678827406' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  2 09:34:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct  2 09:34:16 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/888571476' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  2 09:34:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:34:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:16.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:34:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct  2 09:34:16 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3713185548' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  2 09:34:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  2 09:34:16 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1298687505' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  2 09:34:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:16.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct  2 09:34:16 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1460685059' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  2 09:34:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  2 09:34:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3363334927' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  2 09:34:17 np0005466030 nova_compute[230518]: 2025-10-02 13:34:17.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  2 09:34:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1208016185' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x19ead1000/0x0/0x1bfc00000, data 0x51ebd58/0x53c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440983552 unmapped: 48168960 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 443801600 unmapped: 45350912 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943f784c00 session 0x5594377af4a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943725c400 session 0x5594351d4d20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441540608 unmapped: 47611904 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x559434792c00 session 0x559437e52f00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x19eb4e000/0x0/0x1bfc00000, data 0x5b0ed58/0x5ce8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441556992 unmapped: 47595520 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441556992 unmapped: 47595520 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4738523 data_alloc: 234881024 data_used: 24154112
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441556992 unmapped: 47595520 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440639488 unmapped: 48513024 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440639488 unmapped: 48513024 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x19f7f8000/0x0/0x1bfc00000, data 0x4e6cd58/0x5046000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440639488 unmapped: 48513024 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x19f7f8000/0x0/0x1bfc00000, data 0x4e6cd58/0x5046000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.279331207s of 11.395094872s, submitted: 171
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x559436e3a800 session 0x559435f43e00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943ae62c00 session 0x559436c4d2c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440647680 unmapped: 48504832 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4735927 data_alloc: 234881024 data_used: 24154112
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440647680 unmapped: 48504832 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x559434792c00 session 0x559437e57860
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 430768128 unmapped: 58384384 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x1a0900000/0x0/0x1bfc00000, data 0x3ae5ce6/0x3cbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 430768128 unmapped: 58384384 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 430768128 unmapped: 58384384 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 430776320 unmapped: 58376192 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4504800 data_alloc: 234881024 data_used: 15077376
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 430776320 unmapped: 58376192 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 430776320 unmapped: 58376192 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x1a0900000/0x0/0x1bfc00000, data 0x3ae5ce6/0x3cbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x5594350c8800 session 0x559436f7fe00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943725c400 session 0x5594370f4d20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943f784c00 session 0x559437cb0000
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 430784512 unmapped: 58368000 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x559434792c00 session 0x559434c623c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943725c400 session 0x5594351d5860
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x5594350c8800 session 0x559437e56f00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943ae62c00 session 0x559437c42960
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x5594344f9c00 session 0x559437d09680
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x559434792c00 session 0x559435f67e00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x5594350c8800 session 0x5594352865a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943725c400 session 0x559437bcb860
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 442736640 unmapped: 54394880 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 442736640 unmapped: 54394880 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x19f682000/0x0/0x1bfc00000, data 0x4fe2d58/0x51bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4726513 data_alloc: 251658240 data_used: 31571968
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 442736640 unmapped: 54394880 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.135056496s of 11.742373466s, submitted: 75
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943ae62c00 session 0x559434c65e00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 442736640 unmapped: 54394880 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943ae63800 session 0x5594370f4d20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x559434792c00 session 0x559437e57860
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x5594350c8800 session 0x559435f43e00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441778176 unmapped: 55353344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943725c400 session 0x559437e52f00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943ae62c00 session 0x5594351d4d20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x19f640000/0x0/0x1bfc00000, data 0x5024d58/0x51fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441778176 unmapped: 55353344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x19ecec000/0x0/0x1bfc00000, data 0x5978d58/0x5b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441778176 unmapped: 55353344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4796273 data_alloc: 251658240 data_used: 31580160
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441786368 unmapped: 55345152 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441786368 unmapped: 55345152 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441786368 unmapped: 55345152 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x19ecec000/0x0/0x1bfc00000, data 0x5978d58/0x5b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441786368 unmapped: 55345152 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441786368 unmapped: 55345152 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4820961 data_alloc: 251658240 data_used: 35008512
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x19ecec000/0x0/0x1bfc00000, data 0x5978d58/0x5b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 357 handle_osd_map epochs [357,358], i have 357, src has [1,358]
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 358 handle_osd_map epochs [358,358], i have 358, src has [1,358]
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441737216 unmapped: 55394304 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.272820473s of 10.002835274s, submitted: 38
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 358 ms_handle_reset con 0x559436e3ac00 session 0x559436ddef00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 432644096 unmapped: 64487424 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 432644096 unmapped: 64487424 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 358 ms_handle_reset con 0x559434792c00 session 0x559436fa3c20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a0084000/0x0/0x1bfc00000, data 0x45dfa05/0x47ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 432644096 unmapped: 64487424 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 358 ms_handle_reset con 0x5594350c8800 session 0x559437c57860
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 432644096 unmapped: 64487424 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4676383 data_alloc: 234881024 data_used: 22024192
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 358 ms_handle_reset con 0x55943725c400 session 0x559437c56780
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 358 ms_handle_reset con 0x55943ae62c00 session 0x559436f7f4a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 358 ms_handle_reset con 0x559437502400 session 0x559437cb03c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 358 ms_handle_reset con 0x559434792c00 session 0x559437cb01e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 358 ms_handle_reset con 0x5594350c8800 session 0x559434ce1e00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 432799744 unmapped: 64331776 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 358 handle_osd_map epochs [358,359], i have 358, src has [1,359]
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943725c400 session 0x559435f60780
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 432807936 unmapped: 64323584 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943ae62c00 session 0x559435153a40
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19ff28000/0x0/0x1bfc00000, data 0x4738577/0x4916000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 432824320 unmapped: 64307200 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 432824320 unmapped: 64307200 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 432824320 unmapped: 64307200 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4701279 data_alloc: 234881024 data_used: 22065152
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435830784 unmapped: 61300736 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19ff28000/0x0/0x1bfc00000, data 0x4738577/0x4916000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4798679 data_alloc: 251658240 data_used: 34697216
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19ff28000/0x0/0x1bfc00000, data 0x4738577/0x4916000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19ff28000/0x0/0x1bfc00000, data 0x4738577/0x4916000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.026805878s of 16.320375443s, submitted: 68
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3dc00 session 0x559435f643c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19ff28000/0x0/0x1bfc00000, data 0x4738577/0x4916000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3e000 session 0x559435d8d0e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559436ddf0e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943c02b400 session 0x559436eaaf00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559436f7ef00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4797472 data_alloc: 251658240 data_used: 34697216
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19ff27000/0x0/0x1bfc00000, data 0x473859a/0x4917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19ff27000/0x0/0x1bfc00000, data 0x473859a/0x4917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4807712 data_alloc: 251658240 data_used: 36016128
Oct  2 09:34:17 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19ff27000/0x0/0x1bfc00000, data 0x473859a/0x4917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.196416855s of 10.219329834s, submitted: 6
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4808004 data_alloc: 251658240 data_used: 36020224
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19ff27000/0x0/0x1bfc00000, data 0x473859a/0x4917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 437182464 unmapped: 59949056 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19fc43000/0x0/0x1bfc00000, data 0x4a1c59a/0x4bfb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 437182464 unmapped: 59949056 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 437182464 unmapped: 59949056 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19fb71000/0x0/0x1bfc00000, data 0x4aee59a/0x4ccd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4835894 data_alloc: 251658240 data_used: 36028416
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 437182464 unmapped: 59949056 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 438566912 unmapped: 58564608 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559437438400 session 0x559436c4a5a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.843790054s of 10.033929825s, submitted: 64
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943beb4000 session 0x559436dde5a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 438616064 unmapped: 58515456 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 438657024 unmapped: 58474496 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19f8f0000/0x0/0x1bfc00000, data 0x4d6f59a/0x4f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439803904 unmapped: 57327616 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4866724 data_alloc: 251658240 data_used: 37351424
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439861248 unmapped: 57270272 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439861248 unmapped: 57270272 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439861248 unmapped: 57270272 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439861248 unmapped: 57270272 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439861248 unmapped: 57270272 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19f8c5000/0x0/0x1bfc00000, data 0x4d9a59a/0x4f79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4866856 data_alloc: 251658240 data_used: 37351424
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441147392 unmapped: 55984128 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441540608 unmapped: 55590912 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.108606339s of 10.016971588s, submitted: 41
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441548800 unmapped: 55582720 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441548800 unmapped: 55582720 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441548800 unmapped: 55582720 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4890798 data_alloc: 251658240 data_used: 37740544
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19f60b000/0x0/0x1bfc00000, data 0x504c59a/0x522b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559437438400 session 0x559436f6b0e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559436f6af00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441548800 unmapped: 55582720 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 438550528 unmapped: 58580992 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559436eaaf00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 438591488 unmapped: 58540032 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3e000 session 0x5594370f4d20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3e000 session 0x559434c65e00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559437bcb860
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x5594352865a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559437438400 session 0x559437d09680
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943beb4000 session 0x559434c62960
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943beb4000 session 0x559436ddfe00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439107584 unmapped: 58023936 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559436f7f2c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559436c51e00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439107584 unmapped: 58023936 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3e000 session 0x559435f650e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4796781 data_alloc: 234881024 data_used: 27525120
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559437438400 session 0x559435ddcd20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19f9ca000/0x0/0x1bfc00000, data 0x4c98557/0x4e74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440107008 unmapped: 57024512 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440107008 unmapped: 57024512 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559436eaa000
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440107008 unmapped: 57024512 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.470400810s of 10.955029488s, submitted: 96
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943ae62c00 session 0x559434c6f680
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943beb5800 session 0x559436c50f00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440107008 unmapped: 57024512 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19f9ca000/0x0/0x1bfc00000, data 0x4c98557/0x4e74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [0,0,0,1])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 71843840 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:17 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4542801 data_alloc: 234881024 data_used: 13787136
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559436dde780
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 71843840 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 71843840 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425320448 unmapped: 71811072 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a0eed000/0x0/0x1bfc00000, data 0x37774e5/0x3951000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [0,0,1,0,2])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425385984 unmapped: 71745536 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3e000 session 0x559434ce0780
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425598976 unmapped: 71532544 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4540722 data_alloc: 234881024 data_used: 13676544
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425615360 unmapped: 71516160 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a0ec9000/0x0/0x1bfc00000, data 0x379b4e5/0x3975000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a0ab9000/0x0/0x1bfc00000, data 0x379b4e5/0x3975000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425615360 unmapped: 71516160 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425615360 unmapped: 71516160 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425369600 unmapped: 71761920 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 71884800 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a0ab9000/0x0/0x1bfc00000, data 0x379b4e5/0x3975000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4602746 data_alloc: 234881024 data_used: 22228992
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.654204369s of 11.906198502s, submitted: 296
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 71884800 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 71884800 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 71884800 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 71884800 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 71884800 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4603274 data_alloc: 234881024 data_used: 22228992
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a0ab9000/0x0/0x1bfc00000, data 0x379b4e5/0x3975000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 71884800 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 71884800 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a0ab9000/0x0/0x1bfc00000, data 0x379b4e5/0x3975000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425263104 unmapped: 71868416 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425295872 unmapped: 71835648 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 428965888 unmapped: 68165632 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4656348 data_alloc: 234881024 data_used: 23576576
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.189975739s of 10.397528648s, submitted: 60
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429072384 unmapped: 68059136 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a040d000/0x0/0x1bfc00000, data 0x3e3f4e5/0x4019000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429072384 unmapped: 68059136 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429072384 unmapped: 68059136 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429072384 unmapped: 68059136 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a040d000/0x0/0x1bfc00000, data 0x3e3f4e5/0x4019000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429072384 unmapped: 68059136 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4663476 data_alloc: 234881024 data_used: 23478272
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a040d000/0x0/0x1bfc00000, data 0x3e3f4e5/0x4019000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429006848 unmapped: 68124672 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429146112 unmapped: 67985408 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559437c43e00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x5594377af0e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 66936832 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3e000 session 0x559435f661e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 70377472 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 70377472 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4474744 data_alloc: 234881024 data_used: 13676544
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 70377472 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a1349000/0x0/0x1bfc00000, data 0x2f0b4e5/0x30e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 70377472 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 70377472 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426762240 unmapped: 70369280 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x5594350c8800 session 0x559434c63e00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943725c400 session 0x559436f6a960
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426762240 unmapped: 70369280 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a1349000/0x0/0x1bfc00000, data 0x2f0b4e5/0x30e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4474744 data_alloc: 234881024 data_used: 13676544
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a1349000/0x0/0x1bfc00000, data 0x2f0b4e5/0x30e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.693103790s of 14.924519539s, submitted: 82
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426778624 unmapped: 70352896 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559434c632c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426778624 unmapped: 70352896 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426778624 unmapped: 70352896 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426778624 unmapped: 70352896 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426778624 unmapped: 70352896 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4398502 data_alloc: 234881024 data_used: 11030528
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426786816 unmapped: 70344704 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a1b06000/0x0/0x1bfc00000, data 0x274e4c2/0x2927000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426786816 unmapped: 70344704 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426786816 unmapped: 70344704 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426786816 unmapped: 70344704 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x5594351d50e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426786816 unmapped: 70344704 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4414822 data_alloc: 234881024 data_used: 16404480
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x5594350c8800 session 0x559437c57a40
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426418176 unmapped: 70713344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a1b06000/0x0/0x1bfc00000, data 0x274e4c2/0x2927000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426418176 unmapped: 70713344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a1b06000/0x0/0x1bfc00000, data 0x274e4c2/0x2927000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.645080566s of 11.849708557s, submitted: 22
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3e000 session 0x559435de03c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943ae62c00 session 0x5594377afa40
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426418176 unmapped: 70713344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426418176 unmapped: 70713344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426418176 unmapped: 70713344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4421903 data_alloc: 234881024 data_used: 16404480
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426418176 unmapped: 70713344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559437104f00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559435f65860
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x5594350c8800 session 0x5594351d4d20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3e000 session 0x5594351d4f00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943beb4000 session 0x5594370f52c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943beb5800 session 0x559435ddd680
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 427032576 unmapped: 70098944 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a1044000/0x0/0x1bfc00000, data 0x32114c2/0x33ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x5594370f4b40
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429268992 unmapped: 67862528 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559437c43e00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 427106304 unmapped: 70025216 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a074c000/0x0/0x1bfc00000, data 0x3b094c2/0x3ce2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 427106304 unmapped: 70025216 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4572484 data_alloc: 234881024 data_used: 16404480
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 427114496 unmapped: 70017024 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 427114496 unmapped: 70017024 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 427114496 unmapped: 70017024 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.230933189s of 10.956824303s, submitted: 150
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x5594350c8800 session 0x559434c6f680
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 427286528 unmapped: 69844992 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 427286528 unmapped: 69844992 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a0728000/0x0/0x1bfc00000, data 0x3b2d4c2/0x3d06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4579290 data_alloc: 234881024 data_used: 16379904
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429473792 unmapped: 67657728 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429473792 unmapped: 67657728 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429473792 unmapped: 67657728 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a0728000/0x0/0x1bfc00000, data 0x3b2d4c2/0x3d06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a0728000/0x0/0x1bfc00000, data 0x3b2d4c2/0x3d06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429473792 unmapped: 67657728 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943bc35400 session 0x5594377af2c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429481984 unmapped: 67649536 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4646650 data_alloc: 234881024 data_used: 25628672
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559436f7fe00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429481984 unmapped: 67649536 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559435de12c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429490176 unmapped: 67641344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x5594350c8800 session 0x559437d09c20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 431890432 unmapped: 65241088 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436e3bc00 session 0x559437e563c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436bd3000 session 0x559437b0a1e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559437d081e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.774092674s of 10.054019928s, submitted: 30
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559434c62960
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x5594350c8800 session 0x559434c754a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 66019328 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a017e000/0x0/0x1bfc00000, data 0x40d54f5/0x42b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 434233344 unmapped: 62898176 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4775360 data_alloc: 251658240 data_used: 34942976
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 434233344 unmapped: 62898176 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a017e000/0x0/0x1bfc00000, data 0x40d54f5/0x42b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19fa6f000/0x0/0x1bfc00000, data 0x47e44f5/0x49bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436617216 unmapped: 60514304 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19fa0d000/0x0/0x1bfc00000, data 0x48464f5/0x4a21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 437501952 unmapped: 59629568 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 437501952 unmapped: 59629568 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 437518336 unmapped: 59613184 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4842696 data_alloc: 251658240 data_used: 35905536
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 437526528 unmapped: 59604992 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x5594374ad800 session 0x559434ce0f00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 438575104 unmapped: 58556416 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19f985000/0x0/0x1bfc00000, data 0x48ce4f5/0x4aa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 438575104 unmapped: 58556416 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943d784400 session 0x559434c8c780
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 438575104 unmapped: 58556416 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559434c8d4a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.614707947s of 11.423568726s, submitted: 71
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440131584 unmapped: 56999936 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4888202 data_alloc: 251658240 data_used: 35979264
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440131584 unmapped: 56999936 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440139776 unmapped: 56991744 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559437e56b40
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439468032 unmapped: 57663488 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19f49f000/0x0/0x1bfc00000, data 0x4db3505/0x4f8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436f95400 session 0x559436c4dc20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559437503400 session 0x559437d090e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436cb1800 session 0x559437e52d20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439468032 unmapped: 57663488 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559436f6ab40
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439476224 unmapped: 57655296 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559437bca780
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436f95400 session 0x559437cb0f00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4947161 data_alloc: 251658240 data_used: 37863424
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559437503400 session 0x559437e57860
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19efa1000/0x0/0x1bfc00000, data 0x52b1505/0x548d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x5594358bec00 session 0x559436eab0e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x5594351d4960
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 443023360 unmapped: 54108160 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3e000 session 0x559435ddcd20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 443072512 unmapped: 54059008 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943c02b400 session 0x5594370ed0e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19efa1000/0x0/0x1bfc00000, data 0x52b1505/0x548d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 443080704 unmapped: 54050816 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559435914780
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 443088896 unmapped: 54042624 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 360 ms_handle_reset con 0x559437503400 session 0x559437bcb4a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 360 ms_handle_reset con 0x559436f95400 session 0x559435e4f0e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.119692802s of 10.014904976s, submitted: 81
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 443146240 unmapped: 53985280 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 360 ms_handle_reset con 0x559434fee000 session 0x5594370ed4a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 360 ms_handle_reset con 0x559434792c00 session 0x559435ddcd20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 360 ms_handle_reset con 0x559436c3e000 session 0x559437e57860
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5113152 data_alloc: 251658240 data_used: 44658688
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 360 heartbeat osd_stat(store_statfs(0x19ef98000/0x0/0x1bfc00000, data 0x52b915e/0x5496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,7])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 360 ms_handle_reset con 0x55943c02b400 session 0x559437b0a780
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 462585856 unmapped: 42950656 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441835520 unmapped: 63700992 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 360 ms_handle_reset con 0x559437503400 session 0x559437cb0f00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441843712 unmapped: 63692800 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 360 ms_handle_reset con 0x559434792c00 session 0x559437c56d20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441851904 unmapped: 63684608 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 361 ms_handle_reset con 0x559434fee000 session 0x559435f61a40
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441876480 unmapped: 63660032 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5184360 data_alloc: 268435456 data_used: 48173056
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 443228160 unmapped: 62308352 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 361 heartbeat osd_stat(store_statfs(0x19d2c5000/0x0/0x1bfc00000, data 0x6f89e2e/0x7169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 361 handle_osd_map epochs [361,362], i have 361, src has [1,362]
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 362 handle_osd_map epochs [362,362], i have 362, src has [1,362]
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 444628992 unmapped: 60907520 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 362 ms_handle_reset con 0x55943c02b400 session 0x559434c754a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 444628992 unmapped: 60907520 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 362 ms_handle_reset con 0x5594450edc00 session 0x559434c6f680
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 362 heartbeat osd_stat(store_statfs(0x19d296000/0x0/0x1bfc00000, data 0x6fb5b05/0x7197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 444653568 unmapped: 60882944 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 362 ms_handle_reset con 0x559434792c00 session 0x5594370f4b40
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 362 ms_handle_reset con 0x559434fee000 session 0x5594351d4f00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.504015923s of 10.231104851s, submitted: 146
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 445382656 unmapped: 60153856 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5277136 data_alloc: 268435456 data_used: 50122752
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 445390848 unmapped: 60145664 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 362 heartbeat osd_stat(store_statfs(0x19d0ed000/0x0/0x1bfc00000, data 0x7159b05/0x733b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 445456384 unmapped: 60080128 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 445489152 unmapped: 60047360 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x55943c02b400 session 0x559436c4a3c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594450edc00 session 0x559436ddf4a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559435a4f800 session 0x559437bca1e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 445915136 unmapped: 59621376 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 heartbeat osd_stat(store_statfs(0x19d09e000/0x0/0x1bfc00000, data 0x71aa644/0x738d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [0,0,1])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559435a4f800 session 0x5594351d4960
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 heartbeat osd_stat(store_statfs(0x19dc27000/0x0/0x1bfc00000, data 0x5f2d644/0x6110000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 444833792 unmapped: 60702720 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434792c00 session 0x5594352865a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434fee000 session 0x559435e4fa40
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x55943c02b400 session 0x559436f7e5a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594450edc00 session 0x559434ff12c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594450edc00 session 0x559436ec70e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5114633 data_alloc: 251658240 data_used: 32268288
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559437503400 session 0x559434c8d0e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 444841984 unmapped: 60694528 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 heartbeat osd_stat(store_statfs(0x19d341000/0x0/0x1bfc00000, data 0x6813644/0x69f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 heartbeat osd_stat(store_statfs(0x19d341000/0x0/0x1bfc00000, data 0x6813644/0x69f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [0,0,0,0,0,0,1,2])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 446636032 unmapped: 58900480 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 446242816 unmapped: 59293696 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434792c00 session 0x559435152f00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 447578112 unmapped: 57958400 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434fee000 session 0x559437b0ba40
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559435a4f800 session 0x559437e532c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.331444263s of 10.065950394s, submitted: 150
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 447578112 unmapped: 57958400 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5149772 data_alloc: 251658240 data_used: 32313344
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594350c8800 session 0x559437e561e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594374ad800 session 0x559436ec7e00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434792c00 session 0x559437bcaf00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 446119936 unmapped: 59416576 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 heartbeat osd_stat(store_statfs(0x19d4cb000/0x0/0x1bfc00000, data 0x6d79654/0x6f5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [0,0,0,0,5])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 446136320 unmapped: 59400192 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434fee000 session 0x559435142000
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594450edc00 session 0x559435f421e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 446144512 unmapped: 59392000 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434792c00 session 0x559437b0b860
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594450edc00 session 0x559435286000
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434fee000 session 0x559437d092c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594350c8800 session 0x559435142960
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594374ad800 session 0x559437d08b40
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594374ad800 session 0x559437d08000
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434792c00 session 0x559436eaaf00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461185024 unmapped: 44351488 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461185024 unmapped: 44351488 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5198404 data_alloc: 268435456 data_used: 58335232
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461185024 unmapped: 44351488 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434fee000 session 0x559437e534a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461185024 unmapped: 44351488 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 heartbeat osd_stat(store_statfs(0x19da80000/0x0/0x1bfc00000, data 0x67cb644/0x69ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461201408 unmapped: 44335104 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461201408 unmapped: 44335104 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458932224 unmapped: 46604288 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5189902 data_alloc: 268435456 data_used: 58331136
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 handle_osd_map epochs [363,364], i have 363, src has [1,364]
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.957287312s of 10.495686531s, submitted: 80
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 363 handle_osd_map epochs [364,364], i have 364, src has [1,364]
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 455180288 unmapped: 50356224 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 364 ms_handle_reset con 0x55943c02b400 session 0x559435f42000
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 455180288 unmapped: 50356224 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 364 ms_handle_reset con 0x559436f69800 session 0x559436f6af00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 364 ms_handle_reset con 0x559434792c00 session 0x559437d09e00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 455196672 unmapped: 50339840 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 364 heartbeat osd_stat(store_statfs(0x19f05f000/0x0/0x1bfc00000, data 0x51eb28f/0x53ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457179136 unmapped: 48357376 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 364 heartbeat osd_stat(store_statfs(0x19ebca000/0x0/0x1bfc00000, data 0x568128f/0x5864000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457400320 unmapped: 48136192 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5057969 data_alloc: 251658240 data_used: 41594880
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457523200 unmapped: 48013312 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 364 handle_osd_map epochs [364,365], i have 364, src has [1,365]
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457539584 unmapped: 47996928 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457547776 unmapped: 47988736 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19ebb5000/0x0/0x1bfc00000, data 0x5693dce/0x5878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457547776 unmapped: 47988736 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434fee000 session 0x559435d8c000
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457547776 unmapped: 47988736 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5068663 data_alloc: 251658240 data_used: 41820160
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19ebb5000/0x0/0x1bfc00000, data 0x5693dce/0x5878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457629696 unmapped: 47906816 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.571898460s of 11.282312393s, submitted: 111
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594374ad800 session 0x559437b0b860
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457703424 unmapped: 47833088 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457728000 unmapped: 47808512 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457736192 unmapped: 47800320 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19ebb6000/0x0/0x1bfc00000, data 0x5693dce/0x5878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19ebb6000/0x0/0x1bfc00000, data 0x5693dce/0x5878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457736192 unmapped: 47800320 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5083571 data_alloc: 251658240 data_used: 43376640
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457736192 unmapped: 47800320 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457736192 unmapped: 47800320 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457736192 unmapped: 47800320 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x55943c02b400 session 0x559436fa3c20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559437eacc00 session 0x559434c630e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434792c00 session 0x559437c565a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434fee000 session 0x559435143a40
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594374ad800 session 0x559437e53680
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x55943c02b400 session 0x559435f65c20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x55943c02a800 session 0x559436fa2d20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434792c00 session 0x559436f0e960
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434fee000 session 0x5594351d4d20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458571776 unmapped: 46964736 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594374ad800 session 0x559435152d20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x55943c02b400 session 0x559437104960
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594350c9c00 session 0x5594351d4780
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434792c00 session 0x559437e57c20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434fee000 session 0x559436f6ad20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 46891008 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594350c8800 session 0x559434c74960
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594450edc00 session 0x559437d083c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5203783 data_alloc: 251658240 data_used: 44023808
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19de84000/0x0/0x1bfc00000, data 0x63c5dce/0x65aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [0,0,0,1])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594374ad800 session 0x559436eab0e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458104832 unmapped: 47431680 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458104832 unmapped: 47431680 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594374ad800 session 0x5594351530e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458104832 unmapped: 47431680 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19e7ba000/0x0/0x1bfc00000, data 0x5a82dce/0x5c67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458121216 unmapped: 47415296 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434792c00 session 0x559435ddc1e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.733769417s of 13.131544113s, submitted: 79
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458121216 unmapped: 47415296 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5074719 data_alloc: 234881024 data_used: 36212736
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434fee000 session 0x5594351434a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594350c8800 session 0x559437c423c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594450edc00 session 0x559436c4d860
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458481664 unmapped: 47054848 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458489856 unmapped: 47046656 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457129984 unmapped: 48406528 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19e79d000/0x0/0x1bfc00000, data 0x5aacdce/0x5c91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 460357632 unmapped: 45178880 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436ea1c00 session 0x559437d085a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436c5dc00 session 0x559437e56f00
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461611008 unmapped: 43925504 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5137212 data_alloc: 251658240 data_used: 47284224
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461611008 unmapped: 43925504 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436ea0400 session 0x559435142b40
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461611008 unmapped: 43925504 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461611008 unmapped: 43925504 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436f94400 session 0x5594370f54a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594372bbc00 session 0x559437cb03c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461635584 unmapped: 43900928 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19e79c000/0x0/0x1bfc00000, data 0x5aacdde/0x5c92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436c5dc00 session 0x5594377ae3c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461783040 unmapped: 43753472 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19e79c000/0x0/0x1bfc00000, data 0x5aacdde/0x5c92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436ea0400 session 0x559437cb0000
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5156263 data_alloc: 251658240 data_used: 47284224
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19e644000/0x0/0x1bfc00000, data 0x5c04dde/0x5dea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.920718193s of 11.131993294s, submitted: 45
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 462102528 unmapped: 43433984 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436ea1c00 session 0x559436f7f4a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436f94400 session 0x5594377afc20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x55943b0e9000 session 0x559434c8c780
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x55943b0e9000 session 0x559436c4c780
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436c5dc00 session 0x559435d8d0e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19dbd6000/0x0/0x1bfc00000, data 0x6671e07/0x6858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 462110720 unmapped: 43425792 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 462110720 unmapped: 43425792 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469540864 unmapped: 35995648 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470319104 unmapped: 35217408 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5365616 data_alloc: 251658240 data_used: 49094656
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436ea0400 session 0x559437bcb860
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470827008 unmapped: 34709504 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19bd3f000/0x0/0x1bfc00000, data 0x735fe40/0x7546000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 34586624 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475594752 unmapped: 29941760 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478511104 unmapped: 27025408 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594385cd400 session 0x559435e4f0e0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 27648000 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5465331 data_alloc: 268435456 data_used: 59858944
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 27648000 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19bcfb000/0x0/0x1bfc00000, data 0x73ace40/0x7593000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.010444641s of 10.779306412s, submitted: 210
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594374ac000 session 0x5594351d5860
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478715904 unmapped: 26820608 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478715904 unmapped: 26820608 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478715904 unmapped: 26820608 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19bcd6000/0x0/0x1bfc00000, data 0x73d1e40/0x75b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478756864 unmapped: 26779648 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5478811 data_alloc: 268435456 data_used: 61161472
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 480854016 unmapped: 24682496 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19ab36000/0x0/0x1bfc00000, data 0x73d1e40/0x75b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 480886784 unmapped: 24649728 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 480894976 unmapped: 24641536 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19ab36000/0x0/0x1bfc00000, data 0x73d1e40/0x75b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 20389888 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485269504 unmapped: 20267008 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5552239 data_alloc: 268435456 data_used: 62242816
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436c5dc00 session 0x559437e53c20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436ea0400 session 0x559434c6f4a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594385cd400 session 0x559435153a40
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x55943b0e9000 session 0x559436c4a000
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594374acc00 session 0x559434ff12c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486137856 unmapped: 19398656 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436c5dc00 session 0x559437c42b40
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436ea0400 session 0x559437b0ba40
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594385cd400 session 0x559435f654a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x55943b0e9000 session 0x559436ddf4a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 19316736 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.066761017s of 10.603158951s, submitted: 163
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488226816 unmapped: 17309696 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488316928 unmapped: 17219584 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19985b000/0x0/0x1bfc00000, data 0x86aaeb2/0x8893000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488325120 unmapped: 17211392 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5652950 data_alloc: 268435456 data_used: 62857216
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488325120 unmapped: 17211392 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559437438800 session 0x559437c56d20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434792c00 session 0x5594370f4780
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594350c8800 session 0x559437e532c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488325120 unmapped: 17211392 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488325120 unmapped: 17211392 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488325120 unmapped: 17211392 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19984e000/0x0/0x1bfc00000, data 0x86b7eb2/0x88a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489111552 unmapped: 16424960 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19984e000/0x0/0x1bfc00000, data 0x86b7eb2/0x88a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5682278 data_alloc: 268435456 data_used: 67149824
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 492879872 unmapped: 12656640 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 492929024 unmapped: 12607488 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559437438800 session 0x559435153860
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.824018478s of 10.099150658s, submitted: 57
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 handle_osd_map epochs [365,366], i have 365, src has [1,366]
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 365 handle_osd_map epochs [366,366], i have 366, src has [1,366]
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 366 ms_handle_reset con 0x5594385cd400 session 0x559437b0b4a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 366 ms_handle_reset con 0x55943b0e9000 session 0x559434c8c5a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 366 ms_handle_reset con 0x559434792c00 session 0x559437cb1680
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 492986368 unmapped: 12550144 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 366 ms_handle_reset con 0x5594350c8800 session 0x559434c74d20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 492986368 unmapped: 12550144 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 366 ms_handle_reset con 0x559436ea1400 session 0x559435f65c20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 366 ms_handle_reset con 0x559437438800 session 0x559434c6f680
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 366 ms_handle_reset con 0x55943b0e8800 session 0x559434c8d4a0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 492986368 unmapped: 12550144 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5698584 data_alloc: 268435456 data_used: 68788224
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 366 heartbeat osd_stat(store_statfs(0x199849000/0x0/0x1bfc00000, data 0x86bab0b/0x88a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494100480 unmapped: 11436032 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494329856 unmapped: 11206656 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494329856 unmapped: 11206656 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494329856 unmapped: 11206656 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494338048 unmapped: 11198464 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5710188 data_alloc: 268435456 data_used: 68947968
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 366 heartbeat osd_stat(store_statfs(0x199848000/0x0/0x1bfc00000, data 0x86bbb0b/0x88a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494346240 unmapped: 11190272 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 495542272 unmapped: 9994240 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.477729797s of 10.070092201s, submitted: 19
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 366 ms_handle_reset con 0x559436ea1400 session 0x559434ff12c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 495665152 unmapped: 9871360 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493993984 unmapped: 11542528 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 367 ms_handle_reset con 0x559437438800 session 0x559436fa2960
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 367 ms_handle_reset con 0x5594350c8800 session 0x5594377aed20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493789184 unmapped: 11747328 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5784008 data_alloc: 268435456 data_used: 69672960
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 367 heartbeat osd_stat(store_statfs(0x198f41000/0x0/0x1bfc00000, data 0x8fc17b8/0x91ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493862912 unmapped: 11673600 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494133248 unmapped: 11403264 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 367 heartbeat osd_stat(store_statfs(0x198f33000/0x0/0x1bfc00000, data 0x8fce7b8/0x91b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494264320 unmapped: 11272192 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494264320 unmapped: 11272192 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 367 heartbeat osd_stat(store_statfs(0x198f33000/0x0/0x1bfc00000, data 0x8fce7b8/0x91b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494264320 unmapped: 11272192 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5801066 data_alloc: 268435456 data_used: 70463488
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 367 ms_handle_reset con 0x559436c5dc00 session 0x559437c43c20
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 367 ms_handle_reset con 0x559436ea0400 session 0x559437c572c0
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494272512 unmapped: 11264000 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: osd.0 367 handle_osd_map epochs [367,368], i have 367, src has [1,368]
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494092288 unmapped: 11444224 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.887620449s of 10.231764793s, submitted: 150
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494198784 unmapped: 11337728 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:17 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494215168 unmapped: 11321344 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 368 ms_handle_reset con 0x5594350c8800 session 0x559435f61a40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 368 ms_handle_reset con 0x559436ea0400 session 0x559436c4a5a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494362624 unmapped: 11173888 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 368 heartbeat osd_stat(store_statfs(0x199f62000/0x0/0x1bfc00000, data 0x7fa12a8/0x818c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5625777 data_alloc: 268435456 data_used: 63156224
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494362624 unmapped: 11173888 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494370816 unmapped: 11165696 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 368 heartbeat osd_stat(store_statfs(0x199f62000/0x0/0x1bfc00000, data 0x7fa12a8/0x818c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494370816 unmapped: 11165696 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 368 ms_handle_reset con 0x5594450edc00 session 0x559436f0ed20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 368 ms_handle_reset con 0x559434fee000 session 0x559436ddf680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483721216 unmapped: 21815296 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 368 ms_handle_reset con 0x559437438800 session 0x559436f7e5a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484777984 unmapped: 20758528 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5446649 data_alloc: 251658240 data_used: 53604352
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484777984 unmapped: 20758528 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484777984 unmapped: 20758528 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.747402191s of 10.040046692s, submitted: 73
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 368 heartbeat osd_stat(store_statfs(0x19a904000/0x0/0x1bfc00000, data 0x71ef2a8/0x73da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 368 ms_handle_reset con 0x559436ea1c00 session 0x5594370f45a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 368 ms_handle_reset con 0x559436f94400 session 0x559435de05a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5450433 data_alloc: 251658240 data_used: 53600256
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 368 heartbeat osd_stat(store_statfs(0x19a902000/0x0/0x1bfc00000, data 0x71ef2a8/0x73da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5460657 data_alloc: 268435456 data_used: 54726656
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 368 heartbeat osd_stat(store_statfs(0x19a902000/0x0/0x1bfc00000, data 0x71ef2a8/0x73da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.029449463s of 11.190081596s, submitted: 33
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 368 heartbeat osd_stat(store_statfs(0x19a902000/0x0/0x1bfc00000, data 0x71ef2a8/0x73da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,3])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 368 ms_handle_reset con 0x559434fee000 session 0x559437e53c20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 369 handle_osd_map epochs [368,369], i have 369, src has [1,369]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484810752 unmapped: 20725760 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 369 ms_handle_reset con 0x5594350c8800 session 0x559437cb1680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5465301 data_alloc: 268435456 data_used: 54730752
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 369 ms_handle_reset con 0x559436ea0400 session 0x559437cb0f00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 369 ms_handle_reset con 0x559434fee000 session 0x5594370f4d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484810752 unmapped: 20725760 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 369 ms_handle_reset con 0x5594350c8800 session 0x559437e57c20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484810752 unmapped: 20725760 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 369 heartbeat osd_stat(store_statfs(0x19a8fb000/0x0/0x1bfc00000, data 0x71f5f01/0x73e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 369 ms_handle_reset con 0x559436ea0400 session 0x5594370f4f00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 369 heartbeat osd_stat(store_statfs(0x19a8fb000/0x0/0x1bfc00000, data 0x71f5f01/0x73e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484810752 unmapped: 20725760 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 20717568 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 20717568 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5466581 data_alloc: 268435456 data_used: 54878208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 20717568 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 369 heartbeat osd_stat(store_statfs(0x19a8fa000/0x0/0x1bfc00000, data 0x71f6f01/0x73e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 20717568 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 20717568 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 20717568 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 369 heartbeat osd_stat(store_statfs(0x19a8fa000/0x0/0x1bfc00000, data 0x71f6f01/0x73e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 20717568 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5466833 data_alloc: 268435456 data_used: 54878208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 20717568 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484827136 unmapped: 20709376 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 369 ms_handle_reset con 0x5594450edc00 session 0x559434c63a40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.163173676s of 13.797877312s, submitted: 11
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484827136 unmapped: 20709376 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484827136 unmapped: 20709376 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 369 handle_osd_map epochs [369,370], i have 369, src has [1,370]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 369 handle_osd_map epochs [370,370], i have 370, src has [1,370]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x5594385cd400 session 0x559437c57680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484876288 unmapped: 20660224 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 370 heartbeat osd_stat(store_statfs(0x19a8f2000/0x0/0x1bfc00000, data 0x71fdbae/0x73eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5472195 data_alloc: 268435456 data_used: 55263232
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 370 heartbeat osd_stat(store_statfs(0x19a8f2000/0x0/0x1bfc00000, data 0x71fdbae/0x73eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484999168 unmapped: 20537344 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484999168 unmapped: 20537344 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x559436c5dc00 session 0x559435f67a40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x559436ea1400 session 0x559435d8d0e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484999168 unmapped: 20537344 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 370 heartbeat osd_stat(store_statfs(0x19a8f2000/0x0/0x1bfc00000, data 0x71fdbae/0x73eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x5594350c8800 session 0x559436f6a1e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x559436ea0400 session 0x559437c57e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x5594450edc00 session 0x559437e56d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x5594350c8800 session 0x559437b0b4a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484999168 unmapped: 20537344 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 497999872 unmapped: 21774336 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5592888 data_alloc: 268435456 data_used: 55508992
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x559436c5dc00 session 0x559437b0af00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x559436ea0400 session 0x5594351d5860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x559436ea1400 session 0x559437d08d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 34725888 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 370 handle_osd_map epochs [370,371], i have 370, src has [1,371]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 34725888 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 370 handle_osd_map epochs [371,371], i have 371, src has [1,371]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 370 handle_osd_map epochs [371,371], i have 371, src has [1,371]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.890111923s of 10.060988426s, submitted: 70
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559434fee000 session 0x559437b0b2c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436ea3c00 session 0x559434c8c5a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 34725888 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559434fee000 session 0x5594352861e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x199c9a000/0x0/0x1bfc00000, data 0x7e536da/0x8042000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 34725888 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 34725888 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5570362 data_alloc: 268435456 data_used: 55422976
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 34725888 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x5594350c8800 session 0x559437e574a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 34725888 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436c5dc00 session 0x559435ddc780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481050624 unmapped: 38723584 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481050624 unmapped: 38723584 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x19a0ee000/0x0/0x1bfc00000, data 0x7a026ca/0x7bf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481050624 unmapped: 38723584 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5503962 data_alloc: 251658240 data_used: 52387840
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481058816 unmapped: 38715392 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436ea0400 session 0x559437e561e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481067008 unmapped: 38707200 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.180594444s of 10.442781448s, submitted: 27
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481075200 unmapped: 38699008 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x5594350c8800 session 0x559437c56d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481075200 unmapped: 38699008 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436c5dc00 session 0x559437d08780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 33546240 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5569534 data_alloc: 268435456 data_used: 63524864
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x19a0ee000/0x0/0x1bfc00000, data 0x7a026ca/0x7bf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 33546240 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 33505280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 33505280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436ea3c00 session 0x559435e4fe00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x19a0ee000/0x0/0x1bfc00000, data 0x7a026ca/0x7bf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436ea1400 session 0x559436fa25a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 33505280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 33505280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5572827 data_alloc: 268435456 data_used: 63512576
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 33505280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 33505280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x55944816f000 session 0x559434c75860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x19a0ed000/0x0/0x1bfc00000, data 0x7a026da/0x7bf1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.943758011s of 10.015493393s, submitted: 23
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 33505280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486334464 unmapped: 33439744 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488112128 unmapped: 31662080 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5597135 data_alloc: 268435456 data_used: 63549440
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488153088 unmapped: 31621120 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x199dd3000/0x0/0x1bfc00000, data 0x7d1d6ca/0x7f0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488161280 unmapped: 31612928 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x5594350c8800 session 0x559436c4d680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x199dd3000/0x0/0x1bfc00000, data 0x7d1d6ca/0x7f0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488161280 unmapped: 31612928 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436c5dc00 session 0x559436eaaf00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436ea1400 session 0x559434c754a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436ea3c00 session 0x559434c8d0e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488226816 unmapped: 31547392 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488333312 unmapped: 31440896 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5625910 data_alloc: 268435456 data_used: 64086016
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488423424 unmapped: 31350784 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488423424 unmapped: 31350784 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488423424 unmapped: 31350784 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x199c92000/0x0/0x1bfc00000, data 0x7e5b6da/0x804a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488431616 unmapped: 31342592 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488431616 unmapped: 31342592 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5626230 data_alloc: 268435456 data_used: 64094208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488431616 unmapped: 31342592 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488431616 unmapped: 31342592 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488431616 unmapped: 31342592 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488431616 unmapped: 31342592 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x199c92000/0x0/0x1bfc00000, data 0x7e5b6da/0x804a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488431616 unmapped: 31342592 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5626230 data_alloc: 268435456 data_used: 64094208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x199c92000/0x0/0x1bfc00000, data 0x7e5b6da/0x804a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488431616 unmapped: 31342592 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488439808 unmapped: 31334400 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.505456924s of 19.853292465s, submitted: 154
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x55943a64dc00 session 0x559435de1860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488448000 unmapped: 31326208 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488579072 unmapped: 31195136 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488579072 unmapped: 31195136 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5625742 data_alloc: 268435456 data_used: 64143360
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x199c93000/0x0/0x1bfc00000, data 0x7e5b6fd/0x804b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488579072 unmapped: 31195136 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488579072 unmapped: 31195136 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436ea1400 session 0x559436ddf4a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x199c93000/0x0/0x1bfc00000, data 0x7e5b6fd/0x804b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 371 handle_osd_map epochs [372,372], i have 372, src has [1,372]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 372 handle_osd_map epochs [372,372], i have 372, src has [1,372]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488595456 unmapped: 31178752 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 372 ms_handle_reset con 0x559436ea2400 session 0x559436f7fc20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559435975000 session 0x559435152960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x55943cdd3c00 session 0x559436c4c5a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559436ea3c00 session 0x559435d8c000
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559435975000 session 0x559435915c20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488873984 unmapped: 30900224 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559436ea1400 session 0x559436f6b0e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488873984 unmapped: 30900224 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 373 heartbeat osd_stat(store_statfs(0x199918000/0x0/0x1bfc00000, data 0x81d1fcd/0x83c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5669033 data_alloc: 268435456 data_used: 64167936
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x5594350c8800 session 0x559437e57a40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559436c5dc00 session 0x559435142780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488873984 unmapped: 30900224 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488873984 unmapped: 30900224 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.431745529s of 10.299204826s, submitted: 54
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559436ea2400 session 0x5594370ecb40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488882176 unmapped: 30892032 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x5594350c8800 session 0x559435143c20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488882176 unmapped: 30892032 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559435975000 session 0x559435ddc1e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 373 heartbeat osd_stat(store_statfs(0x199919000/0x0/0x1bfc00000, data 0x81d1faa/0x83c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488890368 unmapped: 30883840 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559436c5dc00 session 0x559437c42960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5666929 data_alloc: 268435456 data_used: 64163840
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559436ea2400 session 0x559434ce1860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559436ea1400 session 0x559436eab2c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489193472 unmapped: 30580736 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489209856 unmapped: 30564352 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x5594350c8800 session 0x559437b0a3c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 374 ms_handle_reset con 0x559436ea2400 session 0x5594370f4b40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 374 ms_handle_reset con 0x55943cdd3c00 session 0x5594359145a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489398272 unmapped: 30375936 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489414656 unmapped: 30359552 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 374 heartbeat osd_stat(store_statfs(0x199a25000/0x0/0x1bfc00000, data 0x80c5c47/0x82b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 374 handle_osd_map epochs [374,375], i have 374, src has [1,375]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 374 handle_osd_map epochs [375,375], i have 375, src has [1,375]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 375 ms_handle_reset con 0x5594384a4c00 session 0x559437bca780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489447424 unmapped: 30326784 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 375 heartbeat osd_stat(store_statfs(0x199a21000/0x0/0x1bfc00000, data 0x80c7910/0x82bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5663257 data_alloc: 268435456 data_used: 64131072
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489447424 unmapped: 30326784 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489472000 unmapped: 30302208 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.482002735s of 10.422252655s, submitted: 203
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489512960 unmapped: 30261248 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489635840 unmapped: 30138368 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 375 heartbeat osd_stat(store_statfs(0x199a22000/0x0/0x1bfc00000, data 0x80c88f2/0x82bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,7,4])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490102784 unmapped: 29671424 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5682669 data_alloc: 268435456 data_used: 66539520
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490110976 unmapped: 29663232 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 375 handle_osd_map epochs [375,376], i have 375, src has [1,376]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 29597696 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 375 ms_handle_reset con 0x559435133800 session 0x559437b0a1e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 375 handle_osd_map epochs [376,376], i have 376, src has [1,376]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 375 handle_osd_map epochs [376,376], i have 376, src has [1,376]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x199a1e000/0x0/0x1bfc00000, data 0x80ca479/0x82bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,2,0,0,0,0,1,3])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490225664 unmapped: 29548544 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490242048 unmapped: 29532160 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490242048 unmapped: 29532160 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5688781 data_alloc: 268435456 data_used: 66609152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594350c8800 session 0x559437bcaf00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436ea2400 session 0x559436ec70e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594384a4c00 session 0x559437cb0000
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x55943cdd3c00 session 0x559434c754a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490250240 unmapped: 29523968 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490258432 unmapped: 29515776 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x199a1d000/0x0/0x1bfc00000, data 0x80ca489/0x82c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,0,3,0,0,2])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490299392 unmapped: 29474816 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 0.000000000s of 10.280552864s, submitted: 113
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490323968 unmapped: 29450240 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436c5c800 session 0x559435f65860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490323968 unmapped: 29450240 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5691198 data_alloc: 268435456 data_used: 66609152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x199a1e000/0x0/0x1bfc00000, data 0x80ca489/0x82c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1,1])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490332160 unmapped: 29442048 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 496320512 unmapped: 23453696 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436ea1c00 session 0x5594370f4000
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436f94400 session 0x5594370ede00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 496320512 unmapped: 23453696 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 500097024 unmapped: 19677184 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594358bf400 session 0x559435143c20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436ea2400 session 0x559437c565a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594384a4c00 session 0x559436fa3c20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19893f000/0x0/0x1bfc00000, data 0x91a9489/0x939f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,1,0,0,1])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 495886336 unmapped: 23887872 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5828710 data_alloc: 268435456 data_used: 66772992
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594358bf400 session 0x559434c62f00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436ea1c00 session 0x559436f7f680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491405312 unmapped: 28368896 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436ea2400 session 0x5594351d50e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436f94400 session 0x559437d08960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491405312 unmapped: 28368896 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594350c8800 session 0x559435ddc780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594350c8800 session 0x559437b0bc20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x199681000/0x0/0x1bfc00000, data 0x7e7f427/0x8074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491405312 unmapped: 28368896 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 0.534853995s of 10.019935608s, submitted: 142
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594358bf400 session 0x559436f7fe00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491339776 unmapped: 28434432 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491339776 unmapped: 28434432 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5603559 data_alloc: 251658240 data_used: 54120448
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491208704 unmapped: 28565504 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x55943b0e8800 session 0x559437bcbe00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559434792c00 session 0x559436f6b4a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491208704 unmapped: 28565504 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491151360 unmapped: 28622848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x199c5f000/0x0/0x1bfc00000, data 0x722c44a/0x7422000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,1,0,0,0,5])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491151360 unmapped: 28622848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559435975000 session 0x559436c4d860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436c5dc00 session 0x559436f7f2c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491151360 unmapped: 28622848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5505739 data_alloc: 268435456 data_used: 55472128
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491151360 unmapped: 28622848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19a931000/0x0/0x1bfc00000, data 0x71b744a/0x73ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,1,1,0,1])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436f94400 session 0x559434c752c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491159552 unmapped: 28614656 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491159552 unmapped: 28614656 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 3.935825348s of 10.140642166s, submitted: 62
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491159552 unmapped: 28614656 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491159552 unmapped: 28614656 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19b7b4000/0x0/0x1bfc00000, data 0x633444a/0x652a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5375410 data_alloc: 251658240 data_used: 52854784
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491159552 unmapped: 28614656 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559435975000 session 0x559434ff0b40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559434792c00 session 0x559435ddc000
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 492339200 unmapped: 27435008 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 492363776 unmapped: 27410432 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494755840 unmapped: 25018368 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19b042000/0x0/0x1bfc00000, data 0x6a9e44a/0x6c94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,0,1,0,1])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493895680 unmapped: 25878528 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5436136 data_alloc: 251658240 data_used: 53837824
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594350c8800 session 0x559437cb05a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493961216 unmapped: 25812992 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559435a4f800 session 0x559435de10e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559437503400 session 0x559437bcab40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436ea0400 session 0x559436ddf0e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559434fee000 session 0x559437d09e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493158400 unmapped: 26615808 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493182976 unmapped: 26591232 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 2.901371241s of 10.082528114s, submitted: 220
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493182976 unmapped: 26591232 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478044160 unmapped: 41730048 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5058678 data_alloc: 234881024 data_used: 32534528
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19cd0c000/0x0/0x1bfc00000, data 0x4ddd43a/0x4fd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 41721856 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594350c8800 session 0x559437d083c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559434792c00 session 0x559434c8c1e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 41721856 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 41721856 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 41721856 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 41721856 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5058129 data_alloc: 234881024 data_used: 32534528
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19cd0d000/0x0/0x1bfc00000, data 0x4ddd42a/0x4fd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 41721856 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436c3e000 session 0x559435f67c20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478060544 unmapped: 41713664 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436f95400 session 0x559437e56b40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478068736 unmapped: 41705472 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.358567715s of 10.010686874s, submitted: 74
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475004928 unmapped: 44769280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475004928 unmapped: 44769280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559434fee000 session 0x5594351d50e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4976417 data_alloc: 234881024 data_used: 31948800
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19d76c000/0x0/0x1bfc00000, data 0x437e42a/0x4572000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475004928 unmapped: 44769280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475004928 unmapped: 44769280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19d76c000/0x0/0x1bfc00000, data 0x437e407/0x4571000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475013120 unmapped: 44761088 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475013120 unmapped: 44761088 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475013120 unmapped: 44761088 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4976417 data_alloc: 234881024 data_used: 31948800
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475013120 unmapped: 44761088 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x55943beb5800 session 0x559435f43e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436e3bc00 session 0x559437cb1e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475013120 unmapped: 44761088 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559434792c00 session 0x559436dde5a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19d76c000/0x0/0x1bfc00000, data 0x437e407/0x4571000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475029504 unmapped: 44744704 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.540034294s of 10.271340370s, submitted: 46
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475029504 unmapped: 44744704 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475029504 unmapped: 44744704 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4975096 data_alloc: 234881024 data_used: 31956992
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 52682752 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19e34d000/0x0/0x1bfc00000, data 0x37a03d4/0x3991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436c3e000 session 0x559434ce0f00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467116032 unmapped: 52658176 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559434fee000 session 0x559434c621e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19e350000/0x0/0x1bfc00000, data 0x37985d4/0x398a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467116032 unmapped: 52658176 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 377 ms_handle_reset con 0x559436f95400 session 0x559437c57e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467132416 unmapped: 52641792 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467132416 unmapped: 52641792 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4827321 data_alloc: 234881024 data_used: 22282240
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467132416 unmapped: 52641792 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 377 ms_handle_reset con 0x559434792c00 session 0x559437b0a5a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 377 heartbeat osd_stat(store_statfs(0x19e34f000/0x0/0x1bfc00000, data 0x379a28f/0x398e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467132416 unmapped: 52641792 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 377 handle_osd_map epochs [377,378], i have 377, src has [1,378]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 377 handle_osd_map epochs [378,378], i have 378, src has [1,378]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 378 ms_handle_reset con 0x559434fee000 session 0x559436f0e960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 378 ms_handle_reset con 0x559436e3bc00 session 0x559435d8d680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 378 ms_handle_reset con 0x559436c3e000 session 0x559434c65e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467083264 unmapped: 52690944 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.728199959s of 10.013930321s, submitted: 174
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 378 ms_handle_reset con 0x55943beb5800 session 0x559435143c20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476651520 unmapped: 51101696 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 378 heartbeat osd_stat(store_statfs(0x19d379000/0x0/0x1bfc00000, data 0x476df5a/0x4965000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 378 handle_osd_map epochs [378,379], i have 378, src has [1,379]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 378 handle_osd_map epochs [379,379], i have 379, src has [1,379]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 379 ms_handle_reset con 0x559434792c00 session 0x559436ddeb40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476651520 unmapped: 51101696 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5007198 data_alloc: 234881024 data_used: 30478336
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469049344 unmapped: 58703872 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559434fee000 session 0x559434c62960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 heartbeat osd_stat(store_statfs(0x19d375000/0x0/0x1bfc00000, data 0x476fc07/0x4968000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469049344 unmapped: 58703872 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559436c3e000 session 0x559434c63c20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559436e3bc00 session 0x559435f643c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559435a4f800 session 0x559437d090e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469049344 unmapped: 58703872 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469049344 unmapped: 58703872 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559434fee000 session 0x559434c654a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559434792c00 session 0x559437d085a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559435a4f800 session 0x559435ddc780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 heartbeat osd_stat(store_statfs(0x19d372000/0x0/0x1bfc00000, data 0x47718de/0x496c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 58695680 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4990616 data_alloc: 234881024 data_used: 30486528
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 58695680 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 heartbeat osd_stat(store_statfs(0x19d372000/0x0/0x1bfc00000, data 0x47718de/0x496c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 58695680 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559436c3e000 session 0x559436ddfc20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559436e3bc00 session 0x559435de10e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559434792c00 session 0x559436c4d680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 58695680 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559434fee000 session 0x559435915860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.562785149s of 10.035156250s, submitted: 37
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559436c3e000 session 0x559435f67e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559435a4f800 session 0x559436ec72c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559436e3bc00 session 0x559436c510e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559434792c00 session 0x559435e4f0e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559434fee000 session 0x559435e4e3c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 58695680 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 58695680 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559435a4f800 session 0x559437b0ad20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4992074 data_alloc: 234881024 data_used: 30486528
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559436c3e000 session 0x559435e4fa40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 heartbeat osd_stat(store_statfs(0x19d372000/0x0/0x1bfc00000, data 0x47718de/0x496c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470441984 unmapped: 57311232 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559436e3bc00 session 0x559437e56f00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470441984 unmapped: 57311232 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559434792c00 session 0x559437bcbe00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559434fee000 session 0x559437cb05a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559435a4f800 session 0x559437b0a1e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470466560 unmapped: 57286656 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559436c3e000 session 0x559436c4d860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 57139200 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 heartbeat osd_stat(store_statfs(0x19d349000/0x0/0x1bfc00000, data 0x479743d/0x4995000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559435975000 session 0x559435ddd4a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 57139200 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4998146 data_alloc: 234881024 data_used: 30617600
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559434792c00 session 0x559437c42780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559435a4f800 session 0x559437b0a960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470032384 unmapped: 57720832 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559436c3e000 session 0x559437b0a3c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559436c5dc00 session 0x559437e57680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559436f94400 session 0x559434c62000
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559434fee000 session 0x559436f7eb40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481607680 unmapped: 46145536 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559435a4f800 session 0x559436c4a3c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559434792c00 session 0x559435915e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559436c3e000 session 0x559436ec72c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559436c5dc00 session 0x559435915860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 heartbeat osd_stat(store_statfs(0x19cb98000/0x0/0x1bfc00000, data 0x4f47466/0x5146000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559434792c00 session 0x559436c4d680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559434fee000 session 0x559436ddfc20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472104960 unmapped: 55648256 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472104960 unmapped: 55648256 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 heartbeat osd_stat(store_statfs(0x19c18f000/0x0/0x1bfc00000, data 0x595049f/0x5b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472104960 unmapped: 55648256 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5203289 data_alloc: 251658240 data_used: 39280640
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559436c5dc00 session 0x559437d085a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.986348152s of 12.396906853s, submitted: 71
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 382 ms_handle_reset con 0x5594358bf400 session 0x559437d090e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472113152 unmapped: 55640064 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 382 ms_handle_reset con 0x55943b0e8800 session 0x559435f643c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 382 ms_handle_reset con 0x559434792c00 session 0x559434c63c20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 382 handle_osd_map epochs [382,383], i have 382, src has [1,383]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 382 handle_osd_map epochs [383,383], i have 383, src has [1,383]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472113152 unmapped: 55640064 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 383 ms_handle_reset con 0x559436c3e000 session 0x559436fa30e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 383 handle_osd_map epochs [383,384], i have 383, src has [1,384]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 383 handle_osd_map epochs [384,384], i have 384, src has [1,384]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472121344 unmapped: 55631872 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 384 ms_handle_reset con 0x559436c5dc00 session 0x559437e56780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470114304 unmapped: 57638912 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 384 ms_handle_reset con 0x55943beb5400 session 0x559437e574a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 384 ms_handle_reset con 0x559435a4f800 session 0x559435ddc780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470245376 unmapped: 57507840 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 385 heartbeat osd_stat(store_statfs(0x19bc70000/0x0/0x1bfc00000, data 0x5e68a8f/0x606d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5335349 data_alloc: 251658240 data_used: 49635328
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470245376 unmapped: 57507840 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 385 ms_handle_reset con 0x559434792c00 session 0x559436f0e960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470269952 unmapped: 57483264 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 385 ms_handle_reset con 0x559436c5dc00 session 0x559437bca5a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470540288 unmapped: 57212928 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 385 ms_handle_reset con 0x559442cf2400 session 0x559436f0e780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 385 heartbeat osd_stat(store_statfs(0x19b10d000/0x0/0x1bfc00000, data 0x69c9776/0x6bd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 386 ms_handle_reset con 0x55943beb5400 session 0x559437bca3c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 386 ms_handle_reset con 0x5594372bb000 session 0x559434ce1860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471826432 unmapped: 55926784 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 386 ms_handle_reset con 0x559436c3e000 session 0x559437c57e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 386 heartbeat osd_stat(store_statfs(0x19b108000/0x0/0x1bfc00000, data 0x69cb44d/0x6bd5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471826432 unmapped: 55926784 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5456103 data_alloc: 251658240 data_used: 52891648
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471834624 unmapped: 55918592 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471834624 unmapped: 55918592 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471851008 unmapped: 55902208 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x19b104000/0x0/0x1bfc00000, data 0x69ccf8c/0x6bd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.126157761s of 13.135083199s, submitted: 81
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 56705024 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475226112 unmapped: 52527104 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5496307 data_alloc: 251658240 data_used: 52883456
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475226112 unmapped: 52527104 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476626944 unmapped: 51126272 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476626944 unmapped: 51126272 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x19ab9c000/0x0/0x1bfc00000, data 0x6f28f8c/0x7134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476626944 unmapped: 51126272 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476233728 unmapped: 51519488 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5504855 data_alloc: 251658240 data_used: 54112256
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477290496 unmapped: 50462720 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x19ab9b000/0x0/0x1bfc00000, data 0x6f35f8c/0x7141000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x5594372bb000 session 0x559437bcab40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477298688 unmapped: 50454528 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 480206848 unmapped: 47546368 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559434792c00 session 0x559434c75680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 480206848 unmapped: 47546368 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.762639046s of 11.009344101s, submitted: 103
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 480206848 unmapped: 47546368 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5538075 data_alloc: 251658240 data_used: 54456320
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477921280 unmapped: 49831936 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 49823744 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x19a965000/0x0/0x1bfc00000, data 0x716df8c/0x7379000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 49823744 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559436c5dc00 session 0x559434c8d680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x55943beb5400 session 0x559434c630e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 49815552 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477945856 unmapped: 49807360 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5533515 data_alloc: 268435456 data_used: 55549952
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481075200 unmapped: 46678016 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x19a965000/0x0/0x1bfc00000, data 0x716df8c/0x7379000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481697792 unmapped: 46055424 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481697792 unmapped: 46055424 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481714176 unmapped: 46039040 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x19a965000/0x0/0x1bfc00000, data 0x716df8c/0x7379000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481714176 unmapped: 46039040 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5596683 data_alloc: 268435456 data_used: 61296640
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.516175270s of 10.890303612s, submitted: 4
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481812480 unmapped: 45940736 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x19a965000/0x0/0x1bfc00000, data 0x716df8c/0x7379000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 45154304 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559436c5dc00 session 0x559436f0e960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 45154304 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x5594372bb000 session 0x559437b0b680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 45154304 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559436ea0400 session 0x559437bcad20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559437503400 session 0x559436ec6780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 45154304 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5609160 data_alloc: 268435456 data_used: 64102400
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 45154304 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x19a553000/0x0/0x1bfc00000, data 0x716dffe/0x737b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e32f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,2,1])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482607104 unmapped: 45146112 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491315200 unmapped: 36438016 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494116864 unmapped: 33636352 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x1999f8000/0x0/0x1bfc00000, data 0x825cffe/0x7ed6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e32f9c6), peers [1,2] op hist [0,0,0,0,0,0,16])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493355008 unmapped: 34398208 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5730330 data_alloc: 268435456 data_used: 63557632
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 2.637902498s of 10.000240326s, submitted: 133
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493117440 unmapped: 34635776 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493117440 unmapped: 34635776 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493142016 unmapped: 34611200 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493158400 unmapped: 34594816 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559442cf2400 session 0x559434c754a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493166592 unmapped: 34586624 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5744035 data_alloc: 268435456 data_used: 64389120
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x199a6b000/0x0/0x1bfc00000, data 0x82f6fee/0x7e61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e32f9c6), peers [1,2] op hist [0,0,0,0,0,2])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493273088 unmapped: 34480128 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559436c5dc00 session 0x559436f7f4a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494362624 unmapped: 36544512 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559436ea0400 session 0x559437c56f00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x5594372bb000 session 0x559436f0f2c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559437503400 session 0x559435152780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559442cf2400 session 0x5594351d5860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559436c5dc00 session 0x559436f0f860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494379008 unmapped: 36528128 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559436ea0400 session 0x559436f7f2c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494387200 unmapped: 36519936 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494395392 unmapped: 36511744 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 388 ms_handle_reset con 0x559437503400 session 0x559437e532c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5793908 data_alloc: 268435456 data_used: 64753664
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.007493496s of 10.075882912s, submitted: 128
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 388 heartbeat osd_stat(store_statfs(0x19952b000/0x0/0x1bfc00000, data 0x8835c9b/0x83a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e32f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494403584 unmapped: 36503552 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 388 ms_handle_reset con 0x5594372bb000 session 0x559437b0af00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 388 ms_handle_reset con 0x559436bd2400 session 0x5594370f45a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 388 ms_handle_reset con 0x559436c5dc00 session 0x559435e4fa40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494419968 unmapped: 36487168 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 388 ms_handle_reset con 0x559434792c00 session 0x559435de12c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 388 ms_handle_reset con 0x5594372bb000 session 0x559436c4d2c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 388 ms_handle_reset con 0x559436c3e000 session 0x559436c4dc20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 388 ms_handle_reset con 0x559437503400 session 0x559437cb0d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 388 ms_handle_reset con 0x559434792c00 session 0x559437e52960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494419968 unmapped: 36487168 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 388 handle_osd_map epochs [389,389], i have 389, src has [1,389]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 389 ms_handle_reset con 0x559436ea0400 session 0x559435f65680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494428160 unmapped: 36478976 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494747648 unmapped: 36159488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5643707 data_alloc: 268435456 data_used: 58109952
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494755840 unmapped: 36151296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 389 handle_osd_map epochs [390,390], i have 390, src has [1,390]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 390 heartbeat osd_stat(store_statfs(0x19a4f6000/0x0/0x1bfc00000, data 0x786a3fb/0x73d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e32f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494764032 unmapped: 36143104 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 390 ms_handle_reset con 0x559436ea1c00 session 0x559437d08780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 390 ms_handle_reset con 0x559436ea2400 session 0x5594351523c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494772224 unmapped: 36134912 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494788608 unmapped: 36118528 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 492118016 unmapped: 38789120 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5433933 data_alloc: 251658240 data_used: 44068864
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 392 ms_handle_reset con 0x55944816f000 session 0x559437c56d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 392 ms_handle_reset con 0x559436f94800 session 0x559437cb0d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.643511295s of 10.026472092s, submitted: 117
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 392 ms_handle_reset con 0x55943d785c00 session 0x559435914780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 492126208 unmapped: 38780928 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 393 ms_handle_reset con 0x559436ea0400 session 0x559436f7f2c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 393 ms_handle_reset con 0x559434792c00 session 0x559436fa3c20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 393 ms_handle_reset con 0x5594372bb000 session 0x559437105860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487759872 unmapped: 43147264 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 393 heartbeat osd_stat(store_statfs(0x19c876000/0x0/0x1bfc00000, data 0x631f810/0x6096000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487759872 unmapped: 43147264 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487759872 unmapped: 43147264 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 393 heartbeat osd_stat(store_statfs(0x19e69c000/0x0/0x1bfc00000, data 0x3e3878b/0x4043000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487759872 unmapped: 43147264 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 393 ms_handle_reset con 0x559434792c00 session 0x559437b0b680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5067049 data_alloc: 234881024 data_used: 32141312
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 394 ms_handle_reset con 0x559436ea0400 session 0x559434c8d680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487694336 unmapped: 43212800 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488497152 unmapped: 42409984 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 394 heartbeat osd_stat(store_statfs(0x19e567000/0x0/0x1bfc00000, data 0x41950d6/0x439f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488505344 unmapped: 42401792 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488505344 unmapped: 42401792 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488505344 unmapped: 42401792 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5097737 data_alloc: 234881024 data_used: 30162944
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.323350906s of 10.089933395s, submitted: 174
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487841792 unmapped: 43065344 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e569000/0x0/0x1bfc00000, data 0x4198c15/0x43a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487841792 unmapped: 43065344 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487841792 unmapped: 43065344 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436c3e000 session 0x559436ddeb40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487849984 unmapped: 43057152 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436c5dc00 session 0x559435d8d0e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487849984 unmapped: 43057152 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5097699 data_alloc: 234881024 data_used: 30175232
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19eff6000/0x0/0x1bfc00000, data 0x370cc15/0x3918000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483573760 unmapped: 47333376 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483581952 unmapped: 47325184 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19eff6000/0x0/0x1bfc00000, data 0x370cbb3/0x3917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483590144 unmapped: 47316992 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436f94800 session 0x559437c430e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483590144 unmapped: 47316992 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483590144 unmapped: 47316992 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4982284 data_alloc: 234881024 data_used: 24694784
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483590144 unmapped: 47316992 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483590144 unmapped: 47316992 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19eff7000/0x0/0x1bfc00000, data 0x370cba3/0x3916000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559436c4ab40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483590144 unmapped: 47316992 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.068715096s of 12.702533722s, submitted: 69
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559436c4a000
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559434c65e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483459072 unmapped: 47448064 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483459072 unmapped: 47448064 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4979220 data_alloc: 234881024 data_used: 24694784
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483459072 unmapped: 47448064 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19eff8000/0x0/0x1bfc00000, data 0x370cba3/0x3916000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,8])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559434ff0d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4783696 data_alloc: 218103808 data_used: 12791808
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 5400.1 total, 600.0 interval
    Cumulative writes: 66K writes, 260K keys, 66K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.05 MB/s
    Cumulative WAL: 66K writes, 24K syncs, 2.72 writes per sync, written: 0.25 GB, 0.05 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 8130 writes, 30K keys, 8130 commit groups, 1.0 writes per commit group, ingest: 33.16 MB, 0.06 MB/s
    Interval WAL: 8131 writes, 3070 syncs, 2.65 writes per sync, written: 0.03 GB, 0.06 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4783696 data_alloc: 218103808 data_used: 12791808
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436c3e000 session 0x559437e561e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559436f7e3c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559437bcb680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4783696 data_alloc: 218103808 data_used: 12791808
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559436ec6d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.497853279s of 17.155778885s, submitted: 27
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: mgrc ms_handle_reset ms_handle_reset con 0x5594350d6c00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3443433125
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3443433125,v1:192.168.122.100:6801/3443433125]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: mgrc handle_mgr_configure stats_period=5
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470188032 unmapped: 60719104 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436c3d800 session 0x559437c432c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479182848 unmapped: 55402496 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x55943a64d800 session 0x559436dde960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594370a1000 session 0x559437cb0780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559436c50f00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594370a1000 session 0x559436c51a40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559434c623c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x5594352863c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559434c6e960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 63987712 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f6a0000/0x0/0x1bfc00000, data 0x3064b50/0x326e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f6a0000/0x0/0x1bfc00000, data 0x3064b50/0x326e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 63987712 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 63987712 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4858586 data_alloc: 218103808 data_used: 12795904
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559436ddfc20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f6a0000/0x0/0x1bfc00000, data 0x3064b50/0x326e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f6a0000/0x0/0x1bfc00000, data 0x3064b50/0x326e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436ea0400 session 0x559434c63a40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 63987712 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x5594351530e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559434c65680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470532096 unmapped: 64053248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470540288 unmapped: 64045056 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918653 data_alloc: 234881024 data_used: 20762624
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918653 data_alloc: 234881024 data_used: 20762624
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.158052444s of 18.919492722s, submitted: 23
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473210880 unmapped: 61374464 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4955009 data_alloc: 234881024 data_used: 21942272
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473415680 unmapped: 61169664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e12d000/0x0/0x1bfc00000, data 0x342cb83/0x3638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4959757 data_alloc: 234881024 data_used: 21839872
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436f94800 session 0x559437bcab40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e136000/0x0/0x1bfc00000, data 0x342cb83/0x3638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953597 data_alloc: 234881024 data_used: 21839872
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e136000/0x0/0x1bfc00000, data 0x342cb83/0x3638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e136000/0x0/0x1bfc00000, data 0x342cb83/0x3638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559437c57e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559437bcb860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.585352898s of 14.913706779s, submitted: 79
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4794544 data_alloc: 218103808 data_used: 12791808
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559434c630e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd7000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd7000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4794544 data_alloc: 218103808 data_used: 12791808
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd7000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4794544 data_alloc: 218103808 data_used: 12791808
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd7000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.042637825s of 15.562180519s, submitted: 27
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468082688 unmapped: 66502656 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559437b0b680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559436ddfc20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559435e4f0e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4883222 data_alloc: 218103808 data_used: 12795904
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436f94800 session 0x559437b0a960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x5594351521e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467337216 unmapped: 67248128 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e333000/0x0/0x1bfc00000, data 0x3231ba3/0x343b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4883106 data_alloc: 218103808 data_used: 12791808
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559435f61e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467353600 unmapped: 67231744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467124224 unmapped: 67461120 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963611 data_alloc: 234881024 data_used: 23748608
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963611 data_alloc: 234881024 data_used: 23748608
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.326089859s of 19.245832443s, submitted: 49
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19dbdd000/0x0/0x1bfc00000, data 0x3980bc6/0x3b8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471351296 unmapped: 63234048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5037977 data_alloc: 234881024 data_used: 24559616
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db4a000/0x0/0x1bfc00000, data 0x3a13bc6/0x3c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5039897 data_alloc: 234881024 data_used: 24702976
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db2e000/0x0/0x1bfc00000, data 0x3a35bc6/0x3c40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x55943d785c00 session 0x559437e56f00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db2d000/0x0/0x1bfc00000, data 0x3a35bd5/0x3c41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5039685 data_alloc: 234881024 data_used: 24715264
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471932928 unmapped: 62652416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db2d000/0x0/0x1bfc00000, data 0x3a35bd5/0x3c41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471932928 unmapped: 62652416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436ea1c00 session 0x559435286d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471932928 unmapped: 62652416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db2d000/0x0/0x1bfc00000, data 0x3a35bd5/0x3c41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.604205132s of 15.311234474s, submitted: 113
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559434ce12c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559437cb0d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471932928 unmapped: 62652416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5039749 data_alloc: 234881024 data_used: 24715264
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db28000/0x0/0x1bfc00000, data 0x3a3abd5/0x3c46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559437e56b40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4809599 data_alloc: 218103808 data_used: 12791808
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd6000/0x0/0x1bfc00000, data 0x278db50/0x2997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559434ce0780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466731008 unmapped: 67854336 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436ea1c00 session 0x559437e565a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466755584 unmapped: 67829760 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd8000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x55943d785c00 session 0x5594351423c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd8000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466812928 unmapped: 67772416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.131870270s of 10.003231049s, submitted: 300
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466829312 unmapped: 67756032 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559437d09680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4809039 data_alloc: 218103808 data_used: 12795904
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559434ce0960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edda000/0x0/0x1bfc00000, data 0x278dacf/0x2994000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4807462 data_alloc: 218103808 data_used: 12791808
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edda000/0x0/0x1bfc00000, data 0x278dacf/0x2994000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edda000/0x0/0x1bfc00000, data 0x278dacf/0x2994000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.209687233s of 10.373024940s, submitted: 38
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436ea1c00 session 0x559435f643c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4809244 data_alloc: 218103808 data_used: 12791808
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd9000/0x0/0x1bfc00000, data 0x278db31/0x2995000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 396 ms_handle_reset con 0x5594386f9400 session 0x559436ec6b40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 396 ms_handle_reset con 0x559436ea2400 session 0x559437e52f00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 396 heartbeat osd_stat(store_statfs(0x19edd4000/0x0/0x1bfc00000, data 0x278f7ec/0x2999000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 397 ms_handle_reset con 0x559434792c00 session 0x559437105e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 397 ms_handle_reset con 0x559434fee000 session 0x559435152960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4816765 data_alloc: 218103808 data_used: 12808192
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 397 heartbeat osd_stat(store_statfs(0x19edd3000/0x0/0x1bfc00000, data 0x27913d5/0x299a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 397 heartbeat osd_stat(store_statfs(0x19edd3000/0x0/0x1bfc00000, data 0x27913d5/0x299a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.111035347s of 10.823334694s, submitted: 35
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4819739 data_alloc: 218103808 data_used: 12808192
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4819739 data_alloc: 218103808 data_used: 12808192
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4819739 data_alloc: 218103808 data_used: 12808192
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4819739 data_alloc: 218103808 data_used: 12808192
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.684392929s of 18.692880630s, submitted: 12
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559436ea1c00 session 0x559437b0ad20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4824530 data_alloc: 218103808 data_used: 12816384
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19edcd000/0x0/0x1bfc00000, data 0x2794b6d/0x29a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x5594386f9400 session 0x559437bcad20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x55944816f000 session 0x559435de12c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4823650 data_alloc: 218103808 data_used: 12816384
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559434792c00 session 0x559437bca960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19edce000/0x0/0x1bfc00000, data 0x2794b6d/0x29a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559434fee000 session 0x559436fa2780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559436ea1c00 session 0x559437d090e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x5594386f9400 session 0x559435d8d860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x55944816f000 session 0x559435142b40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559434792c00 session 0x559436c514a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4851916 data_alloc: 218103808 data_used: 12816384
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559434fee000 session 0x559436c4d860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559436ea1c00 session 0x559435152d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.884056091s of 13.213858604s, submitted: 20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x5594386f9400 session 0x559437bca780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467066880 unmapped: 67518464 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467075072 unmapped: 67510272 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4883747 data_alloc: 218103808 data_used: 16490496
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x55943cdd2800 session 0x5594351423c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467107840 unmapped: 67477504 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467107840 unmapped: 67477504 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4885237 data_alloc: 218103808 data_used: 16494592
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467116032 unmapped: 67469312 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.991956711s of 10.035635948s, submitted: 8
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467116032 unmapped: 67469312 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467116032 unmapped: 67469312 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19ea20000/0x0/0x1bfc00000, data 0x2b3d85b/0x2d4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 400 ms_handle_reset con 0x559434fee000 session 0x559437c572c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467124224 unmapped: 67461120 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #56. Immutable memtables: 12.
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468312064 unmapped: 66273280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4909236 data_alloc: 218103808 data_used: 16621568
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4913386 data_alloc: 218103808 data_used: 16621568
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4913386 data_alloc: 218103808 data_used: 16621568
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4913386 data_alloc: 218103808 data_used: 16621568
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.320636749s of 19.659267426s, submitted: 22
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 400 ms_handle_reset con 0x559436ea1c00 session 0x5594370ede00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [1])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469295104 unmapped: 65290240 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 400 handle_osd_map epochs [401,401], i have 401, src has [1,401]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x5594386f9400 session 0x559436c512c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 401 heartbeat osd_stat(store_statfs(0x19d729000/0x0/0x1bfc00000, data 0x2c967f9/0x2ea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469303296 unmapped: 65282048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469303296 unmapped: 65282048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 401 heartbeat osd_stat(store_statfs(0x19d725000/0x0/0x1bfc00000, data 0x2c984a6/0x2ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469303296 unmapped: 65282048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4914910 data_alloc: 218103808 data_used: 16629760
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 401 heartbeat osd_stat(store_statfs(0x19d725000/0x0/0x1bfc00000, data 0x2c984a6/0x2ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 401 heartbeat osd_stat(store_statfs(0x19d725000/0x0/0x1bfc00000, data 0x2c984a6/0x2ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x559436cb0c00 session 0x559436fa2000
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x559436c5a800 session 0x559436c4a000
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x559434792c00 session 0x559434ce0780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469278720 unmapped: 65306624 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x559436c5a800 session 0x559434c63a40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469278720 unmapped: 65306624 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469278720 unmapped: 65306624 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4917912 data_alloc: 218103808 data_used: 16633856
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470343680 unmapped: 64241664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918364 data_alloc: 218103808 data_used: 16687104
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918364 data_alloc: 218103808 data_used: 16687104
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918844 data_alloc: 218103808 data_used: 16736256
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.274785995s of 26.369352341s, submitted: 37
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d71d000/0x0/0x1bfc00000, data 0x2c9ffe5/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4920738 data_alloc: 218103808 data_used: 16736256
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d71d000/0x0/0x1bfc00000, data 0x2c9ffe5/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4922338 data_alloc: 218103808 data_used: 17092608
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d30c000/0x0/0x1bfc00000, data 0x2ca0fe5/0x2eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d30c000/0x0/0x1bfc00000, data 0x2ca0fe5/0x2eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4922590 data_alloc: 218103808 data_used: 17092608
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.667269707s of 12.846582413s, submitted: 6
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x559436cb0c00 session 0x559435142780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x5594386f9400 session 0x559434c8c5a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x55943cdd2800 session 0x559434c63c20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x559434fee000 session 0x559435f65680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x559436ea1c00 session 0x559437cb12c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x55943cdd2800 session 0x5594370ecf00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d30c000/0x0/0x1bfc00000, data 0x2ca0fe5/0x2eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4843049 data_alloc: 218103808 data_used: 12832768
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d814000/0x0/0x1bfc00000, data 0x2799fb2/0x29a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4843097 data_alloc: 218103808 data_used: 12832768
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.570983887s of 10.014533043s, submitted: 55
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467828736 unmapped: 66756608 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 403 ms_handle_reset con 0x559434792c00 session 0x559436dde780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467828736 unmapped: 66756608 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 403 heartbeat osd_stat(store_statfs(0x19d811000/0x0/0x1bfc00000, data 0x279bc5f/0x29ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 404 ms_handle_reset con 0x559436c5a800 session 0x559437e565a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4849429 data_alloc: 218103808 data_used: 12840960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 404 heartbeat osd_stat(store_statfs(0x19d80e000/0x0/0x1bfc00000, data 0x279d8d4/0x29af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467845120 unmapped: 66740224 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467845120 unmapped: 66740224 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 404 heartbeat osd_stat(store_statfs(0x19d80e000/0x0/0x1bfc00000, data 0x279d8d4/0x29af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467845120 unmapped: 66740224 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4849429 data_alloc: 218103808 data_used: 12840960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434792c00 session 0x559437d09680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434fee000 session 0x559436c50f00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436ea1c00 session 0x559434c6e960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467861504 unmapped: 66723840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467861504 unmapped: 66723840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d80b000/0x0/0x1bfc00000, data 0x279f413/0x29b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4852403 data_alloc: 218103808 data_used: 12840960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d80b000/0x0/0x1bfc00000, data 0x279f413/0x29b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x55943cdd2800 session 0x559437b0b4a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d80b000/0x0/0x1bfc00000, data 0x279f413/0x29b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.887578964s of 17.205017090s, submitted: 46
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436cb0c00 session 0x559435de1e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434792c00 session 0x5594351d4d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434fee000 session 0x559437d09c20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467877888 unmapped: 66707456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436ea1c00 session 0x559437d08d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x55943cdd2800 session 0x5594377ae780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4888135 data_alloc: 218103808 data_used: 12840960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x5594386f9400 session 0x559437d081e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x2bf2413/0x2e05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467845120 unmapped: 66740224 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4920884 data_alloc: 218103808 data_used: 17362944
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.796065331s of 10.041505814s, submitted: 15
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x2bf2413/0x2e05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436ea1c00 session 0x559434c8c5a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x2bf2413/0x2e05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4920884 data_alloc: 218103808 data_used: 17362944
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x2bf2413/0x2e05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4926114 data_alloc: 218103808 data_used: 17371136
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472064000 unmapped: 62521344 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19ceb5000/0x0/0x1bfc00000, data 0x30f6413/0x3309000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,2])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468131840 unmapped: 66453504 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.967789173s of 10.081938744s, submitted: 32
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468131840 unmapped: 66453504 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468148224 unmapped: 66437120 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468099072 unmapped: 66486272 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4969600 data_alloc: 218103808 data_used: 17580032
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468099072 unmapped: 66486272 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cda9000/0x0/0x1bfc00000, data 0x3202413/0x3415000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cda9000/0x0/0x1bfc00000, data 0x3202413/0x3415000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,5])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cda1000/0x0/0x1bfc00000, data 0x3208413/0x341b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974878 data_alloc: 218103808 data_used: 17403904
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974878 data_alloc: 218103808 data_used: 17403904
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974878 data_alloc: 218103808 data_used: 17403904
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.758423805s of 18.601793289s, submitted: 23
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4977182 data_alloc: 218103808 data_used: 17395712
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468213760 unmapped: 66371584 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468213760 unmapped: 66371584 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468213760 unmapped: 66371584 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468221952 unmapped: 66363392 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4977182 data_alloc: 218103808 data_used: 17395712
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468221952 unmapped: 66363392 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468221952 unmapped: 66363392 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.873358727s of 11.896842003s, submitted: 17
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468230144 unmapped: 66355200 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468230144 unmapped: 66355200 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468230144 unmapped: 66355200 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4975934 data_alloc: 218103808 data_used: 17391616
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434792c00 session 0x559437d08d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434fee000 session 0x559435ddc1e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x55943cdd2800 session 0x559434c6e960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468230144 unmapped: 66355200 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468238336 unmapped: 66347008 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468238336 unmapped: 66347008 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974974 data_alloc: 218103808 data_used: 17502208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974974 data_alloc: 218103808 data_used: 17502208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468254720 unmapped: 66330624 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.424508095s of 15.441400528s, submitted: 12
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469303296 unmapped: 65282048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4983214 data_alloc: 218103808 data_used: 17915904
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4983214 data_alloc: 218103808 data_used: 17915904
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4983214 data_alloc: 218103808 data_used: 17915904
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.449235916s of 15.471648216s, submitted: 15
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436cb1c00 session 0x559437cb0d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4982142 data_alloc: 218103808 data_used: 17915904
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434792c00 session 0x5594371054a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559437ead000 session 0x559436c4ba40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559437438000 session 0x559435ddd680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434fee000 session 0x559437d09680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469319680 unmapped: 65265664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469319680 unmapped: 65265664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469319680 unmapped: 65265664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4861028 data_alloc: 218103808 data_used: 12840960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d80c000/0x0/0x1bfc00000, data 0x279f413/0x29b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 65257472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 65257472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 406 ms_handle_reset con 0x559436ea1c00 session 0x559437cb12c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 406 heartbeat osd_stat(store_statfs(0x19d808000/0x0/0x1bfc00000, data 0x27a10c0/0x29b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4865202 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 406 heartbeat osd_stat(store_statfs(0x19d808000/0x0/0x1bfc00000, data 0x27a10c0/0x29b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.777113914s of 15.972728729s, submitted: 61
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 65224704 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 65224704 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 65224704 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 65224704 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 65200128 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 65183744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 65183744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 65183744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 65183744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469417984 unmapped: 65167360 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469434368 unmapped: 65150976 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469434368 unmapped: 65150976 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 45.048465729s of 45.059295654s, submitted: 15
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x5594351d41e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x559436fa2960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559435286000
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559437438000 session 0x559437c425a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559437ead000 session 0x559435153860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469786624 unmapped: 73195520 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4943462 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469786624 unmapped: 73195520 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19ce1b000/0x0/0x1bfc00000, data 0x318dbff/0x33a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x559436ddeb40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x559435f65860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4943462 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559437b0b860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559437438000 session 0x559436c4a3c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 72630272 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 72040448 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19ce1a000/0x0/0x1bfc00000, data 0x318dc0f/0x33a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 72040448 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5020337 data_alloc: 234881024 data_used: 23162880
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 72040448 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19ce1a000/0x0/0x1bfc00000, data 0x318dc0f/0x33a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5020337 data_alloc: 234881024 data_used: 23162880
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.403457642s of 18.507741928s, submitted: 19
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c465000/0x0/0x1bfc00000, data 0x3b3cc0f/0x3d53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5109183 data_alloc: 234881024 data_used: 24010752
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c427000/0x0/0x1bfc00000, data 0x3b78c0f/0x3d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c427000/0x0/0x1bfc00000, data 0x3b78c0f/0x3d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5109183 data_alloc: 234881024 data_used: 24010752
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42c000/0x0/0x1bfc00000, data 0x3b7bc0f/0x3d92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42c000/0x0/0x1bfc00000, data 0x3b7bc0f/0x3d92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5102963 data_alloc: 234881024 data_used: 24010752
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.284965515s of 12.586762428s, submitted: 112
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42b000/0x0/0x1bfc00000, data 0x3b7cc0f/0x3d93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x55943cdd2800 session 0x559435ddc780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559435132800 session 0x559437c42780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42b000/0x0/0x1bfc00000, data 0x3b7cc0f/0x3d93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42b000/0x0/0x1bfc00000, data 0x3b7cc0f/0x3d93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,1])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x559434c63a40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 69246976 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 69246976 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 69246976 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x5594370ed4a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559434c65e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559437438000 session 0x559436ec6b40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x559435152780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 61.791675568s of 61.911861420s, submitted: 35
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x559436f0f4a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2c0f/0x29b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559435132800 session 0x559437e53c20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559437c42f00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473767936 unmapped: 69214208 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x5594350d7000 session 0x559437c565a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x559437b0bc20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473767936 unmapped: 69214208 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x559437e52960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473767936 unmapped: 69214208 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918731 data_alloc: 218103808 data_used: 12849152
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559435132800 session 0x559435915c20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f5000/0x0/0x1bfc00000, data 0x2cb2c0f/0x2ec9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473767936 unmapped: 69214208 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559436f6b680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x5594358acc00 session 0x559437c56b40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4959857 data_alloc: 218103808 data_used: 18165760
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4959857 data_alloc: 218103808 data_used: 18165760
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.014217377s of 17.533912659s, submitted: 26
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 474546176 unmapped: 68435968 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5014409 data_alloc: 218103808 data_used: 18165760
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 67592192 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 67592192 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 67592192 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c7e2000/0x0/0x1bfc00000, data 0x37c4c32/0x39dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5048133 data_alloc: 218103808 data_used: 18374656
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c7dc000/0x0/0x1bfc00000, data 0x37cac32/0x39e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5047641 data_alloc: 218103808 data_used: 18374656
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476446720 unmapped: 66535424 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559435132800 session 0x559436ec72c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.816077232s of 12.077077866s, submitted: 68
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476446720 unmapped: 66535424 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 408 handle_osd_map epochs [408,408], i have 408, src has [1,408]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 408 ms_handle_reset con 0x559436ea1c00 session 0x559435ddc000
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 408 heartbeat osd_stat(store_statfs(0x19c7d8000/0x0/0x1bfc00000, data 0x37cec32/0x39e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476463104 unmapped: 66519040 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 408 ms_handle_reset con 0x559434792800 session 0x559436ddf4a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 408 ms_handle_reset con 0x559436e8c800 session 0x559436fa34a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476594176 unmapped: 66387968 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487800832 unmapped: 61882368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5197727 data_alloc: 234881024 data_used: 29822976
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 408 ms_handle_reset con 0x559442cf9800 session 0x5594351d4f00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487817216 unmapped: 61865984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 409 ms_handle_reset con 0x559434792800 session 0x559434ce12c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487825408 unmapped: 61857792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487849984 unmapped: 61833216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 410 heartbeat osd_stat(store_statfs(0x19ba47000/0x0/0x1bfc00000, data 0x455a1ad/0x4775000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488038400 unmapped: 61644800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488046592 unmapped: 61636608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5214357 data_alloc: 234881024 data_used: 29831168
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488046592 unmapped: 61636608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488046592 unmapped: 61636608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488046592 unmapped: 61636608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 410 heartbeat osd_stat(store_statfs(0x19ba47000/0x0/0x1bfc00000, data 0x455a1ad/0x4775000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.516270638s of 12.276865005s, submitted: 37
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481460224 unmapped: 68222976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559436e8c800 session 0x559436c4d680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559435132800 session 0x559437bcb4a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481460224 unmapped: 68222976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559436ea1c00 session 0x559435f67c20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5201257 data_alloc: 234881024 data_used: 29831168
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x5594384a4000 session 0x559435e4e780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559434792800 session 0x559437c43a40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481460224 unmapped: 68222976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481468416 unmapped: 68214784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481476608 unmapped: 68206592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19ba44000/0x0/0x1bfc00000, data 0x455bd4e/0x4779000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481476608 unmapped: 68206592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481476608 unmapped: 68206592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5201257 data_alloc: 234881024 data_used: 29831168
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481476608 unmapped: 68206592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559435132800 session 0x559437c42b40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 68059136 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19ba20000/0x0/0x1bfc00000, data 0x457fd71/0x479e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481632256 unmapped: 68050944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481894400 unmapped: 67788800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5258226 data_alloc: 251658240 data_used: 37351424
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19ba20000/0x0/0x1bfc00000, data 0x457fd71/0x479e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5258226 data_alloc: 251658240 data_used: 37351424
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.335296631s of 16.486038208s, submitted: 17
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19ba20000/0x0/0x1bfc00000, data 0x457fd71/0x479e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482648064 unmapped: 67035136 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5280461 data_alloc: 251658240 data_used: 38633472
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5285421 data_alloc: 251658240 data_used: 39116800
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.030493736s of 10.215833664s, submitted: 10
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5286573 data_alloc: 251658240 data_used: 39518208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5287453 data_alloc: 251658240 data_used: 39518208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5287453 data_alloc: 251658240 data_used: 39518208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559436f95400 session 0x559436f0e960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5288733 data_alloc: 251658240 data_used: 39571456
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.819889069s of 19.845161438s, submitted: 9
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 412 ms_handle_reset con 0x559437eac800 session 0x559437c561e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 412 ms_handle_reset con 0x55943beb7000 session 0x559437c43e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 412 ms_handle_reset con 0x55943776fc00 session 0x559437bcaf00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 412 ms_handle_reset con 0x559434792800 session 0x559434ce0960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 412 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487571456 unmapped: 62111744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 413 ms_handle_reset con 0x559435132800 session 0x559435de12c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487579648 unmapped: 62103552 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 414 ms_handle_reset con 0x559436f95400 session 0x559434c62000
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487202816 unmapped: 62480384 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 414 ms_handle_reset con 0x559437eac800 session 0x5594377af680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 414 ms_handle_reset con 0x559437eac800 session 0x559435915e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 414 ms_handle_reset con 0x559434792800 session 0x5594359145a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487227392 unmapped: 62455808 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 414 heartbeat osd_stat(store_statfs(0x199ba7000/0x0/0x1bfc00000, data 0x63f235e/0x6616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487235584 unmapped: 62447616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5538217 data_alloc: 251658240 data_used: 44425216
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487251968 unmapped: 62431232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 415 ms_handle_reset con 0x559435132800 session 0x5594351d4d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487268352 unmapped: 62414848 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 415 ms_handle_reset con 0x559436e8c800 session 0x559434c8dc20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 415 ms_handle_reset con 0x559436ea1c00 session 0x559434ff1860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 415 heartbeat osd_stat(store_statfs(0x19b9cc000/0x0/0x1bfc00000, data 0x45ccfb5/0x47f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487268352 unmapped: 62414848 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487284736 unmapped: 62398464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 416 ms_handle_reset con 0x559434792800 session 0x559436f6a1e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487309312 unmapped: 62373888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5321486 data_alloc: 251658240 data_used: 44027904
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487309312 unmapped: 62373888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487309312 unmapped: 62373888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487309312 unmapped: 62373888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 416 heartbeat osd_stat(store_statfs(0x19b9ee000/0x0/0x1bfc00000, data 0x4564a8b/0x4787000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.964550972s of 13.325811386s, submitted: 218
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487325696 unmapped: 62357504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 418 ms_handle_reset con 0x559435132800 session 0x559434ff12c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483909632 unmapped: 65773568 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5137584 data_alloc: 234881024 data_used: 29847552
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 418 ms_handle_reset con 0x559434792c00 session 0x5594351d4780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 418 ms_handle_reset con 0x559434fee000 session 0x559436c512c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483917824 unmapped: 65765376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 75980800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 418 ms_handle_reset con 0x559436e8c800 session 0x559436f7f2c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 418 heartbeat osd_stat(store_statfs(0x19d7e4000/0x0/0x1bfc00000, data 0x27b6260/0x29d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4949106 data_alloc: 218103808 data_used: 12886016
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 418 heartbeat osd_stat(store_statfs(0x19d7e4000/0x0/0x1bfc00000, data 0x27b6260/0x29d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 418 heartbeat osd_stat(store_statfs(0x19d7e4000/0x0/0x1bfc00000, data 0x27b6260/0x29d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.211503029s of 10.501904488s, submitted: 95
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 69K writes, 270K keys, 69K commit groups, 1.0 writes per commit group, ingest: 0.26 GB, 0.04 MB/s
Cumulative WAL: 69K writes, 25K syncs, 2.70 writes per sync, written: 0.26 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2949 writes, 9921 keys, 2949 commit groups, 1.0 writes per commit group, ingest: 9.42 MB, 0.02 MB/s
Interval WAL: 2949 writes, 1228 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 75956224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 75956224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 75956224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.431304932s of 35.444664001s, submitted: 15
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954123 data_alloc: 218103808 data_used: 12894208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792800 session 0x559436f7f860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954051 data_alloc: 218103808 data_used: 12894208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792c00 session 0x5594351530e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954051 data_alloc: 218103808 data_used: 12894208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434fee000 session 0x559436c4d2c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559435132800 session 0x559437bca3c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.737661362s of 12.753558159s, submitted: 5
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559436ea1c00 session 0x559437bcab40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4955630 data_alloc: 218103808 data_used: 12898304
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792800 session 0x559436c4a5a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792c00 session 0x559436c51680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954907 data_alloc: 218103808 data_used: 12898304
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.274845123s of 10.169509888s, submitted: 30
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 75931648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 75931648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473759744 unmapped: 75923456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434fee000 session 0x559435d8d860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473759744 unmapped: 75923456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954212 data_alloc: 218103808 data_used: 12894208
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473759744 unmapped: 75923456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559435132800 session 0x559437b0be00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473759744 unmapped: 75923456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559436ea1c00 session 0x5594377afc20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792800 session 0x5594352874a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953898 data_alloc: 218103808 data_used: 16105472
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e01/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e01/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.661519051s of 10.000641823s, submitted: 19
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e01/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792c00 session 0x559435f67e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434fee000 session 0x559436c4dc20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5032045 data_alloc: 218103808 data_used: 16105472
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19cde8000/0x0/0x1bfc00000, data 0x31b1d9f/0x33d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559435132800 session 0x559434ff12c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5037331 data_alloc: 218103808 data_used: 16113664
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.522792816s of 12.859356880s, submitted: 17
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5037331 data_alloc: 218103808 data_used: 16113664
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559436f95400 session 0x559436dde1e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559437eac800 session 0x559437bcb860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792800 session 0x559435ddcd20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5037331 data_alloc: 218103808 data_used: 16113664
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792c00 session 0x559436f7f680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434fee000 session 0x559437e53c20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559435132800 session 0x559437b0a3c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cdc0000/0x0/0x1bfc00000, data 0x31d7a5a/0x33fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476905472 unmapped: 72777728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476905472 unmapped: 72777728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792800 session 0x559436f6be00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792c00 session 0x559436c510e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476905472 unmapped: 72777728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5113591 data_alloc: 234881024 data_used: 24510464
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.465334892s of 10.475492477s, submitted: 2
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434fee000 session 0x559435286d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476913664 unmapped: 72769536 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde4000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559437eac800 session 0x559435915860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x55943776fc00 session 0x559437c43860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 72785920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792800 session 0x559437b0ad20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 72785920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 421 ms_handle_reset con 0x559434792c00 session 0x559435152d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 72785920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 421 heartbeat osd_stat(store_statfs(0x19cde1000/0x0/0x1bfc00000, data 0x31b56a5/0x33dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,1])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472244224 unmapped: 77438976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4967251 data_alloc: 218103808 data_used: 15532032
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 421 ms_handle_reset con 0x559434fee000 session 0x559434c8d680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472244224 unmapped: 77438976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472244224 unmapped: 77438976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472244224 unmapped: 77438976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472285184 unmapped: 77398016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472227840 unmapped: 77455360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4967251 data_alloc: 218103808 data_used: 15532032
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.788832664s of 10.246772766s, submitted: 174
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 421 heartbeat osd_stat(store_statfs(0x19d7dc000/0x0/0x1bfc00000, data 0x27bb6a5/0x29e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 421 ms_handle_reset con 0x559437eac800 session 0x559437d08d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472227840 unmapped: 77455360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 421 ms_handle_reset con 0x559436f68800 session 0x559437d09c20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476839936 unmapped: 72843264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476856320 unmapped: 72826880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 421 handle_osd_map epochs [422,422], i have 422, src has [1,422]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434792800 session 0x5594351d5860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4973367 data_alloc: 218103808 data_used: 16130048
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d7d7000/0x0/0x1bfc00000, data 0x27bd246/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d7d7000/0x0/0x1bfc00000, data 0x27bd246/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 72785920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d7d8000/0x0/0x1bfc00000, data 0x27bd246/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476848128 unmapped: 72835072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4972486 data_alloc: 218103808 data_used: 16134144
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434792c00 session 0x5594351d4f00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.671635628s of 10.225932121s, submitted: 147
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478871552 unmapped: 70811648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d7d8000/0x0/0x1bfc00000, data 0x27bd20d/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434fee000 session 0x559436ec6d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477257728 unmapped: 72425472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559437eac800 session 0x559436ddeb40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5005971 data_alloc: 218103808 data_used: 16130048
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5005971 data_alloc: 218103808 data_used: 16130048
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5005971 data_alloc: 218103808 data_used: 16130048
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559435132000 session 0x559436c4c780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434792800 session 0x559437d08960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5005971 data_alloc: 218103808 data_used: 16130048
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434792c00 session 0x5594370f4f00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.234014511s of 20.326173782s, submitted: 33
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434fee000 session 0x559436f7ed20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5025344 data_alloc: 218103808 data_used: 18558976
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5025344 data_alloc: 218103808 data_used: 18558976
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.673128128s of 12.695754051s, submitted: 6
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483557376 unmapped: 66125824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19cd68000/0x0/0x1bfc00000, data 0x2e1d246/0x3046000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,5])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482705408 unmapped: 66977792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5106862 data_alloc: 234881024 data_used: 19513344
Oct  2 09:34:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482721792 unmapped: 66961408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c637000/0x0/0x1bfc00000, data 0x3548246/0x3771000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:18.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3565246/0x378e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5123676 data_alloc: 234881024 data_used: 20525056
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3565246/0x378e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3565246/0x378e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3565246/0x378e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.902449608s of 11.393979073s, submitted: 107
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484753408 unmapped: 64929792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5121647 data_alloc: 234881024 data_used: 20537344
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559442cf8c00 session 0x559437b0a3c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c61e000/0x0/0x1bfc00000, data 0x3565274/0x3790000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5127613 data_alloc: 234881024 data_used: 20541440
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c619000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c619000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c619000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5127613 data_alloc: 234881024 data_used: 20541440
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.034794807s of 11.101709366s, submitted: 10
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x55943a64d400 session 0x559435e4f0e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x5594358ad400 session 0x559437bcb860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c61a000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5126189 data_alloc: 234881024 data_used: 20541440
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792800 session 0x559434ff12c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792c00 session 0x559435f67e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c61a000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434fee000 session 0x5594352874a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559442cf8c00 session 0x5594377afc20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5ee000/0x0/0x1bfc00000, data 0x3591070/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5134778 data_alloc: 234881024 data_used: 20692992
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5ee000/0x0/0x1bfc00000, data 0x3591070/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.602036476s of 11.628366470s, submitted: 7
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5ee000/0x0/0x1bfc00000, data 0x3591070/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5157810 data_alloc: 234881024 data_used: 20692992
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5ec000/0x0/0x1bfc00000, data 0x389c070/0x37c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5157842 data_alloc: 234881024 data_used: 20692992
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484892672 unmapped: 64790528 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484253696 unmapped: 65429504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484261888 unmapped: 65421312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c4ea000/0x0/0x1bfc00000, data 0x399e070/0x38c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484270080 unmapped: 65413120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484270080 unmapped: 65413120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5190426 data_alloc: 234881024 data_used: 23044096
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484270080 unmapped: 65413120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484270080 unmapped: 65413120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.007882118s of 15.059731483s, submitted: 11
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c462000/0x0/0x1bfc00000, data 0x3a53070/0x394c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5219236 data_alloc: 234881024 data_used: 23040000
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c462000/0x0/0x1bfc00000, data 0x3a53070/0x394c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483991552 unmapped: 65691648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483991552 unmapped: 65691648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5213860 data_alloc: 234881024 data_used: 23044096
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c462000/0x0/0x1bfc00000, data 0x3a53070/0x394c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483991552 unmapped: 65691648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.056324005s of 11.241897583s, submitted: 37
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c457000/0x0/0x1bfc00000, data 0x3a5a070/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5214202 data_alloc: 234881024 data_used: 23044096
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c45b000/0x0/0x1bfc00000, data 0x3a5a070/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c45b000/0x0/0x1bfc00000, data 0x3a5a070/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5214202 data_alloc: 234881024 data_used: 23044096
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792800 session 0x559436c51680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792c00 session 0x559437cb1a40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c45b000/0x0/0x1bfc00000, data 0x3a5a070/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434fee000 session 0x559437d081e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c486000/0x0/0x1bfc00000, data 0x3a3003d/0x3927000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5204715 data_alloc: 234881024 data_used: 22904832
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x5594358ad400 session 0x559437cb1e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.740118027s of 13.175251007s, submitted: 17
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559442cf8c00 session 0x559437c57a40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5184485 data_alloc: 234881024 data_used: 22790144
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5fe000/0x0/0x1bfc00000, data 0x38b903d/0x37b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792800 session 0x559437d09860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5fe000/0x0/0x1bfc00000, data 0x38b903d/0x37b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 424 ms_handle_reset con 0x559437eac800 session 0x559435152d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 424 ms_handle_reset con 0x559436c5bc00 session 0x559436c51e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5172513 data_alloc: 234881024 data_used: 22794240
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 424 heartbeat osd_stat(store_statfs(0x19c616000/0x0/0x1bfc00000, data 0x3568cea/0x3797000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.032597542s of 10.602803230s, submitted: 41
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 424 ms_handle_reset con 0x559434792c00 session 0x559435d8d680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 424 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3568cea/0x3797000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 424 handle_osd_map epochs [425,425], i have 425, src has [1,425]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5175215 data_alloc: 234881024 data_used: 22802432
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 425 heartbeat osd_stat(store_statfs(0x19c613000/0x0/0x1bfc00000, data 0x356a829/0x379a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484024320 unmapped: 65658880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484024320 unmapped: 65658880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5006019 data_alloc: 218103808 data_used: 16154624
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 425 heartbeat osd_stat(store_statfs(0x19d012000/0x0/0x1bfc00000, data 0x27c2829/0x29f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 425 ms_handle_reset con 0x559434fee000 session 0x5594351430e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 425 heartbeat osd_stat(store_statfs(0x19d3bc000/0x0/0x1bfc00000, data 0x27c27c7/0x29f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.051541328s of 11.549398422s, submitted: 39
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5004722 data_alloc: 218103808 data_used: 16162816
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 426 handle_osd_map epochs [426,426], i have 426, src has [1,426]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 426 heartbeat osd_stat(store_statfs(0x19d3ba000/0x0/0x1bfc00000, data 0x27c4456/0x29f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482099200 unmapped: 67584000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 426 ms_handle_reset con 0x559434792800 session 0x559436eab2c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5007079 data_alloc: 218103808 data_used: 16162816
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 426 heartbeat osd_stat(store_statfs(0x19d3bc000/0x0/0x1bfc00000, data 0x27c4428/0x29f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 426 heartbeat osd_stat(store_statfs(0x19d3bc000/0x0/0x1bfc00000, data 0x27c4428/0x29f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 426 handle_osd_map epochs [427,427], i have 427, src has [1,427]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.208436966s of 10.811527252s, submitted: 28
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5010053 data_alloc: 218103808 data_used: 16162816
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x27c5f67/0x29f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x27c5f67/0x29f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5010053 data_alloc: 218103808 data_used: 16162816
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559434792c00 session 0x559435d8d0e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c5bc00 session 0x559435153860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x27c5f67/0x29f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559437eac800 session 0x559436c4b0e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5010053 data_alloc: 218103808 data_used: 16162816
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.400998116s of 10.534256935s, submitted: 15
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x5594358ad400 session 0x559436fa30e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19d3ba000/0x0/0x1bfc00000, data 0x27c5f67/0x29f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,1,0,1,4])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559434792800 session 0x559435de10e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485982208 unmapped: 63700992 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1af90/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559434792c00 session 0x559436fa30e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5056307 data_alloc: 218103808 data_used: 16162816
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1afc9/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1afc9/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5056307 data_alloc: 218103808 data_used: 16162816
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5056307 data_alloc: 218103808 data_used: 16162816
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1afc9/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485810176 unmapped: 63873024 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1afc9/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5056307 data_alloc: 218103808 data_used: 16162816
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.333019257s of 21.654922485s, submitted: 47
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce63000/0x0/0x1bfc00000, data 0x2d1afec/0x2f4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c5bc00 session 0x559437d09860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5058092 data_alloc: 218103808 data_used: 16162816
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce63000/0x0/0x1bfc00000, data 0x2d1afec/0x2f4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5095692 data_alloc: 234881024 data_used: 21016576
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce63000/0x0/0x1bfc00000, data 0x2d1afec/0x2f4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096012 data_alloc: 234881024 data_used: 21024768
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.898363113s of 14.440773964s, submitted: 5
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487833600 unmapped: 61849600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19cb00000/0x0/0x1bfc00000, data 0x307dfec/0x32ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487849984 unmapped: 61833216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca7a000/0x0/0x1bfc00000, data 0x3103fec/0x3334000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5140300 data_alloc: 234881024 data_used: 21843968
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca7a000/0x0/0x1bfc00000, data 0x3103fec/0x3334000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488087552 unmapped: 61595648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5138620 data_alloc: 234881024 data_used: 21848064
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559437eac800 session 0x559437d081e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436e8d000 session 0x559436f6b680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488087552 unmapped: 61595648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.954394341s of 10.175483704s, submitted: 76
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559434792800 session 0x559435f67e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c5dc00 session 0x559435d8c3c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x5594450ed400 session 0x559437d08000
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c6e800 session 0x559436fa3860
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5139012 data_alloc: 234881024 data_used: 21852160
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5139012 data_alloc: 234881024 data_used: 21852160
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5139012 data_alloc: 234881024 data_used: 21852160
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.853529930s of 15.902283669s, submitted: 13
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c6e800 session 0x559435286d20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca49000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5140531 data_alloc: 234881024 data_used: 21905408
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca49000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5140531 data_alloc: 234881024 data_used: 21905408
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca49000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.303579330s of 12.341936111s, submitted: 8
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5144999 data_alloc: 234881024 data_used: 22102016
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5155079 data_alloc: 234881024 data_used: 22765568
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488112128 unmapped: 61571072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5155079 data_alloc: 234881024 data_used: 22765568
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488112128 unmapped: 61571072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488112128 unmapped: 61571072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.465478897s of 13.490488052s, submitted: 6
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488112128 unmapped: 61571072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 handle_osd_map epochs [428,428], i have 428, src has [1,428]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x55943ae61000 session 0x559434c8d0e0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488128512 unmapped: 61554688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5177867 data_alloc: 234881024 data_used: 22937600
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca3c000/0x0/0x1bfc00000, data 0x3304c22/0x3372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559437438c00 session 0x5594377aed20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x55943725c800 session 0x559435d8c780
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5179373 data_alloc: 234881024 data_used: 22937600
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca3c000/0x0/0x1bfc00000, data 0x3304c22/0x3372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488144896 unmapped: 61538304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488144896 unmapped: 61538304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488153088 unmapped: 61530112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.730504036s of 11.801014900s, submitted: 24
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559434792800 session 0x559436f7fc20
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488456192 unmapped: 61227008 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5183366 data_alloc: 234881024 data_used: 22941696
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488464384 unmapped: 61218816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca17000/0x0/0x1bfc00000, data 0x3328c45/0x3397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca17000/0x0/0x1bfc00000, data 0x3328c45/0x3397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5183818 data_alloc: 234881024 data_used: 22949888
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca17000/0x0/0x1bfc00000, data 0x3328c45/0x3397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5183818 data_alloc: 234881024 data_used: 22949888
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.934249878s of 12.953603745s, submitted: 6
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c702000/0x0/0x1bfc00000, data 0x363dc45/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5225783 data_alloc: 234881024 data_used: 24535040
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c702000/0x0/0x1bfc00000, data 0x363dc45/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5229843 data_alloc: 234881024 data_used: 24678400
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c62f000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5229843 data_alloc: 234881024 data_used: 24678400
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.297815323s of 15.375535965s, submitted: 9
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c627000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5226883 data_alloc: 234881024 data_used: 24678400
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c627000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c627000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5226883 data_alloc: 234881024 data_used: 24678400
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c627000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559436c6e800 session 0x559437c56960
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.852976799s of 10.868772507s, submitted: 5
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559437438c00 session 0x559436f7eb40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x55943ae61000 session 0x559434c6f680
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5220223 data_alloc: 234881024 data_used: 24690688
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c653000/0x0/0x1bfc00000, data 0x36ecc22/0x375a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559437503400 session 0x559437b0a5a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559434792800 session 0x559437cb1e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486440960 unmapped: 63242240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559436c6e800 session 0x559435f61e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486457344 unmapped: 63225856 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5188901 data_alloc: 234881024 data_used: 24559616
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 429 handle_osd_map epochs [429,429], i have 429, src has [1,429]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 429 ms_handle_reset con 0x559437438c00 session 0x559437c565a0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 429 heartbeat osd_stat(store_statfs(0x19ca3d000/0x0/0x1bfc00000, data 0x313e8cf/0x3370000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486473728 unmapped: 63209472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486473728 unmapped: 63209472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.918631554s of 10.093006134s, submitted: 66
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 429 ms_handle_reset con 0x559436c3e400 session 0x559435f61a40
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 429 ms_handle_reset con 0x55943beb4400 session 0x559437105e00
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5182857 data_alloc: 234881024 data_used: 24567808
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 429 heartbeat osd_stat(store_statfs(0x19ca3e000/0x0/0x1bfc00000, data 0x313e8cf/0x3370000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 429 ms_handle_reset con 0x55943beb4400 session 0x559437c56000
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486498304 unmapped: 63184896 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19ca3a000/0x0/0x1bfc00000, data 0x314040e/0x3373000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486506496 unmapped: 63176704 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5186216 data_alloc: 234881024 data_used: 24576000
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 ms_handle_reset con 0x559434792800 session 0x559437e532c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 ms_handle_reset con 0x559436c3e400 session 0x559437c432c0
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486555648 unmapped: 63127552 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486604800 unmapped: 63078400 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486645760 unmapped: 63037440 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486645760 unmapped: 63037440 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486653952 unmapped: 63029248 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486670336 unmapped: 63012864 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486670336 unmapped: 63012864 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486686720 unmapped: 62996480 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486686720 unmapped: 62996480 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486686720 unmapped: 62996480 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486703104 unmapped: 62980096 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486703104 unmapped: 62980096 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486703104 unmapped: 62980096 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 62963712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 62963712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: do_command 'config diff' '{prefix=config diff}'
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: do_command 'config show' '{prefix=config show}'
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: do_command 'counter dump' '{prefix=counter dump}'
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: do_command 'counter schema' '{prefix=counter schema}'
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485924864 unmapped: 63758336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485990400 unmapped: 63692800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:34:18 np0005466030 ceph-osd[78262]: do_command 'log dump' '{prefix=log dump}'
Oct  2 09:34:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  2 09:34:18 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/874236552' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  2 09:34:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:18.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:18 np0005466030 nova_compute[230518]: 2025-10-02 13:34:18.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  2 09:34:18 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1184456124' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  2 09:34:18 np0005466030 podman[323114]: 2025-10-02 13:34:18.826080847 +0000 UTC m=+0.071417649 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct  2 09:34:18 np0005466030 podman[323112]: 2025-10-02 13:34:18.864033287 +0000 UTC m=+0.108818372 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct  2 09:34:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  2 09:34:19 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/584366054' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  2 09:34:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct  2 09:34:19 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2205226372' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct  2 09:34:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:34:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:20.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:34:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct  2 09:34:20 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3914800062' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct  2 09:34:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:34:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:20.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:34:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct  2 09:34:20 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/388357450' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct  2 09:34:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct  2 09:34:21 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1289503258' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct  2 09:34:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct  2 09:34:21 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2579968663' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct  2 09:34:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct  2 09:34:21 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1520222773' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct  2 09:34:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct  2 09:34:21 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2065690386' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct  2 09:34:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct  2 09:34:21 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1792309887' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct  2 09:34:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct  2 09:34:21 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1117553503' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct  2 09:34:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:22.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct  2 09:34:22 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3583909278' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct  2 09:34:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct  2 09:34:22 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2658690698' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct  2 09:34:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:34:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:22.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:34:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct  2 09:34:22 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2701917429' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct  2 09:34:22 np0005466030 nova_compute[230518]: 2025-10-02 13:34:22.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct  2 09:34:22 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2972436643' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct  2 09:34:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct  2 09:34:23 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2796163003' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  2 09:34:23 np0005466030 systemd[1]: Starting Hostname Service...
Oct  2 09:34:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct  2 09:34:23 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3813509163' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct  2 09:34:23 np0005466030 systemd[1]: Started Hostname Service.
Oct  2 09:34:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct  2 09:34:23 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3027924521' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct  2 09:34:23 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct  2 09:34:23 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3331320437' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct  2 09:34:23 np0005466030 nova_compute[230518]: 2025-10-02 13:34:23.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:34:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:24.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:34:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:24.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct  2 09:34:24 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2676800153' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  2 09:34:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct  2 09:34:25 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1049783261' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct  2 09:34:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct  2 09:34:25 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1875449801' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct  2 09:34:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:34:25.994 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:34:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:34:25.995 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:34:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:34:25.995 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:34:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct  2 09:34:26 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3910303546' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  2 09:34:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:34:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:26.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:34:26 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct  2 09:34:26 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1068626528' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct  2 09:34:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:34:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:26.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:34:26 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  2 09:34:26 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  2 09:34:27 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  2 09:34:27 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  2 09:34:27 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct  2 09:34:27 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4044084775' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct  2 09:34:27 np0005466030 podman[324562]: 2025-10-02 13:34:27.447069222 +0000 UTC m=+0.056198162 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  2 09:34:27 np0005466030 podman[324562]: 2025-10-02 13:34:27.55063554 +0000 UTC m=+0.159764460 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  2 09:34:27 np0005466030 nova_compute[230518]: 2025-10-02 13:34:27.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:28.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:28.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:28 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Oct  2 09:34:28 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1130845526' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct  2 09:34:28 np0005466030 nova_compute[230518]: 2025-10-02 13:34:28.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:28 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  2 09:34:28 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3337079194' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2241500397' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2024816665' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #184. Immutable memtables: 0.
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.484867) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 184
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069484898, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 853, "num_deletes": 251, "total_data_size": 1252296, "memory_usage": 1268768, "flush_reason": "Manual Compaction"}
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #185: started
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069490115, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 185, "file_size": 825939, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88953, "largest_seqno": 89801, "table_properties": {"data_size": 821354, "index_size": 1980, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 13647, "raw_average_key_size": 22, "raw_value_size": 811086, "raw_average_value_size": 1320, "num_data_blocks": 84, "num_entries": 614, "num_filter_entries": 614, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412039, "oldest_key_time": 1759412039, "file_creation_time": 1759412069, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 5276 microseconds, and 2327 cpu microseconds.
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.490145) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #185: 825939 bytes OK
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.490161) [db/memtable_list.cc:519] [default] Level-0 commit table #185 started
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.491146) [db/memtable_list.cc:722] [default] Level-0 commit table #185: memtable #1 done
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.491159) EVENT_LOG_v1 {"time_micros": 1759412069491155, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.491173) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 1247214, prev total WAL file size 1247214, number of live WAL files 2.
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000181.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.491803) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [185(806KB)], [183(11MB)]
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069491877, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [185], "files_L6": [183], "score": -1, "input_data_size": 13275762, "oldest_snapshot_seqno": -1}
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #186: 10981 keys, 11369014 bytes, temperature: kUnknown
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069567797, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 186, "file_size": 11369014, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11301792, "index_size": 38693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27461, "raw_key_size": 290853, "raw_average_key_size": 26, "raw_value_size": 11113650, "raw_average_value_size": 1012, "num_data_blocks": 1453, "num_entries": 10981, "num_filter_entries": 10981, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759412069, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 186, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.568194) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 11369014 bytes
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.569582) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.4 rd, 149.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 11.9 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(29.8) write-amplify(13.8) OK, records in: 11497, records dropped: 516 output_compression: NoCompression
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.569601) EVENT_LOG_v1 {"time_micros": 1759412069569592, "job": 118, "event": "compaction_finished", "compaction_time_micros": 76127, "compaction_time_cpu_micros": 33125, "output_level": 6, "num_output_files": 1, "total_output_size": 11369014, "num_input_records": 11497, "num_output_records": 10981, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069570249, "job": 118, "event": "table_file_deletion", "file_number": 185}
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000183.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069573055, "job": 118, "event": "table_file_deletion", "file_number": 183}
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.491677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.573225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.573231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.573232) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.573234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.573235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:34:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:30 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:34:30 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:34:30 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:34:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:30.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct  2 09:34:30 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2142265629' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct  2 09:34:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:30.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct  2 09:34:30 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4641624' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct  2 09:34:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct  2 09:34:30 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3245435999' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct  2 09:34:32 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct  2 09:34:32 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2703532931' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct  2 09:34:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:34:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:32.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:34:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:32.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:32 np0005466030 nova_compute[230518]: 2025-10-02 13:34:32.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct  2 09:34:33 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4169906255' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct  2 09:34:33 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct  2 09:34:33 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2495208596' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct  2 09:34:33 np0005466030 nova_compute[230518]: 2025-10-02 13:34:33.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:34.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:34.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct  2 09:34:35 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/159196570' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct  2 09:34:35 np0005466030 ovs-appctl[326500]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  2 09:34:35 np0005466030 ovs-appctl[326507]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  2 09:34:35 np0005466030 ovs-appctl[326519]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  2 09:34:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct  2 09:34:35 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2244708378' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct  2 09:34:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:36.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:36.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  2 09:34:36 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3348321435' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  2 09:34:36 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct  2 09:34:36 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2985896459' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct  2 09:34:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Oct  2 09:34:37 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1199591392' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct  2 09:34:37 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Oct  2 09:34:37 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2982575882' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct  2 09:34:37 np0005466030 nova_compute[230518]: 2025-10-02 13:34:37.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:37 np0005466030 podman[327236]: 2025-10-02 13:34:37.858381014 +0000 UTC m=+0.103217667 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 09:34:37 np0005466030 podman[327233]: 2025-10-02 13:34:37.886253318 +0000 UTC m=+0.130904365 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:34:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:38.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct  2 09:34:38 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3211568006' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  2 09:34:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:38.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:38 np0005466030 nova_compute[230518]: 2025-10-02 13:34:38.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:38 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Oct  2 09:34:38 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1232340834' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct  2 09:34:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Oct  2 09:34:39 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1855449270' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct  2 09:34:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:34:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:40.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:34:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:34:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:40.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:34:41 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Oct  2 09:34:41 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3643781993' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct  2 09:34:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:42.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Oct  2 09:34:42 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/753413889' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct  2 09:34:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:42.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:42 np0005466030 nova_compute[230518]: 2025-10-02 13:34:42.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:42 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Oct  2 09:34:42 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3820141928' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct  2 09:34:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:34:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:34:43 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Oct  2 09:34:43 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/957710283' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct  2 09:34:43 np0005466030 nova_compute[230518]: 2025-10-02 13:34:43.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:44.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:44.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Oct  2 09:34:44 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2314818960' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct  2 09:34:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:45 np0005466030 nova_compute[230518]: 2025-10-02 13:34:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Oct  2 09:34:45 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1588464893' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct  2 09:34:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:34:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:46.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:34:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct  2 09:34:46 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/10710595' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  2 09:34:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:34:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:46.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:34:46 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Oct  2 09:34:46 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3352078029' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct  2 09:34:46 np0005466030 virtqemud[230067]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  2 09:34:47 np0005466030 systemd[1]: Starting Time & Date Service...
Oct  2 09:34:47 np0005466030 systemd[1]: Started Time & Date Service.
Oct  2 09:34:47 np0005466030 nova_compute[230518]: 2025-10-02 13:34:47.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:47 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  2 09:34:47 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2260468465' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  2 09:34:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:48.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:48 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Oct  2 09:34:48 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1666422989' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct  2 09:34:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:34:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:48.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:34:48 np0005466030 nova_compute[230518]: 2025-10-02 13:34:48.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Oct  2 09:34:49 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/241776807' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct  2 09:34:49 np0005466030 podman[328432]: 2025-10-02 13:34:49.832961634 +0000 UTC m=+0.072340459 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:34:49 np0005466030 podman[328433]: 2025-10-02 13:34:49.851390742 +0000 UTC m=+0.094375510 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  2 09:34:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:50 np0005466030 nova_compute[230518]: 2025-10-02 13:34:50.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:50 np0005466030 nova_compute[230518]: 2025-10-02 13:34:50.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:34:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:50.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:34:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:50.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:34:51 np0005466030 nova_compute[230518]: 2025-10-02 13:34:51.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:51 np0005466030 nova_compute[230518]: 2025-10-02 13:34:51.146 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:34:51 np0005466030 nova_compute[230518]: 2025-10-02 13:34:51.146 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:34:51 np0005466030 nova_compute[230518]: 2025-10-02 13:34:51.147 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:34:51 np0005466030 nova_compute[230518]: 2025-10-02 13:34:51.147 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:34:51 np0005466030 nova_compute[230518]: 2025-10-02 13:34:51.148 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:34:51 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:34:51 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1433318012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:34:51 np0005466030 nova_compute[230518]: 2025-10-02 13:34:51.565 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:34:51 np0005466030 nova_compute[230518]: 2025-10-02 13:34:51.726 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:34:51 np0005466030 nova_compute[230518]: 2025-10-02 13:34:51.728 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3981MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:34:51 np0005466030 nova_compute[230518]: 2025-10-02 13:34:51.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:34:51 np0005466030 nova_compute[230518]: 2025-10-02 13:34:51.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:34:52 np0005466030 nova_compute[230518]: 2025-10-02 13:34:52.165 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:34:52 np0005466030 nova_compute[230518]: 2025-10-02 13:34:52.166 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:34:52 np0005466030 nova_compute[230518]: 2025-10-02 13:34:52.187 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:34:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:52.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:52.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:34:52 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3555859192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:34:52 np0005466030 nova_compute[230518]: 2025-10-02 13:34:52.636 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:34:52 np0005466030 nova_compute[230518]: 2025-10-02 13:34:52.641 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:34:52 np0005466030 nova_compute[230518]: 2025-10-02 13:34:52.673 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:34:52 np0005466030 nova_compute[230518]: 2025-10-02 13:34:52.674 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:34:52 np0005466030 nova_compute[230518]: 2025-10-02 13:34:52.674 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:34:52 np0005466030 nova_compute[230518]: 2025-10-02 13:34:52.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:53 np0005466030 nova_compute[230518]: 2025-10-02 13:34:53.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:34:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:54.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:34:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:54.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:34:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:56.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:56.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:57 np0005466030 nova_compute[230518]: 2025-10-02 13:34:57.675 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:57 np0005466030 nova_compute[230518]: 2025-10-02 13:34:57.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:58.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:34:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:34:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:58.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:34:58 np0005466030 nova_compute[230518]: 2025-10-02 13:34:58.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:34:59 np0005466030 nova_compute[230518]: 2025-10-02 13:34:59.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:34:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:00 np0005466030 nova_compute[230518]: 2025-10-02 13:35:00.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:35:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:35:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:00.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:35:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:00.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:01 np0005466030 nova_compute[230518]: 2025-10-02 13:35:01.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:35:02 np0005466030 nova_compute[230518]: 2025-10-02 13:35:02.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:35:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:02.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:02.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:02 np0005466030 nova_compute[230518]: 2025-10-02 13:35:02.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:03 np0005466030 nova_compute[230518]: 2025-10-02 13:35:03.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:04.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:04.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:06 np0005466030 nova_compute[230518]: 2025-10-02 13:35:06.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:35:06 np0005466030 nova_compute[230518]: 2025-10-02 13:35:06.071 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:35:06 np0005466030 nova_compute[230518]: 2025-10-02 13:35:06.071 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:35:06 np0005466030 nova_compute[230518]: 2025-10-02 13:35:06.072 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:35:06 np0005466030 nova_compute[230518]: 2025-10-02 13:35:06.089 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:35:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:06.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:35:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:06.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:35:07 np0005466030 nova_compute[230518]: 2025-10-02 13:35:07.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:08.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:08.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:08 np0005466030 podman[328517]: 2025-10-02 13:35:08.605165817 +0000 UTC m=+0.062709147 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:35:08 np0005466030 podman[328516]: 2025-10-02 13:35:08.630203992 +0000 UTC m=+0.097134326 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  2 09:35:08 np0005466030 nova_compute[230518]: 2025-10-02 13:35:08.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:10.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:10.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:12.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:12.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:12 np0005466030 nova_compute[230518]: 2025-10-02 13:35:12.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:13 np0005466030 nova_compute[230518]: 2025-10-02 13:35:13.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:14.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:14.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:16.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:35:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:16.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:35:17 np0005466030 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  2 09:35:17 np0005466030 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  2 09:35:17 np0005466030 nova_compute[230518]: 2025-10-02 13:35:17.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:18.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:18.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:18 np0005466030 nova_compute[230518]: 2025-10-02 13:35:18.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:20.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:20 np0005466030 podman[328565]: 2025-10-02 13:35:20.37093841 +0000 UTC m=+0.103261948 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  2 09:35:20 np0005466030 podman[328566]: 2025-10-02 13:35:20.387803579 +0000 UTC m=+0.120356895 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:35:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:20.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:22.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:22.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:22 np0005466030 nova_compute[230518]: 2025-10-02 13:35:22.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:23 np0005466030 nova_compute[230518]: 2025-10-02 13:35:23.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:24.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:24.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:35:25.995 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:35:25.995 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:35:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:35:25.995 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:35:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:26.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:26.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:27 np0005466030 nova_compute[230518]: 2025-10-02 13:35:27.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:28.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:28.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:28 np0005466030 nova_compute[230518]: 2025-10-02 13:35:28.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:29 np0005466030 systemd-logind[795]: Session 62 logged out. Waiting for processes to exit.
Oct  2 09:35:29 np0005466030 systemd[1]: session-62.scope: Deactivated successfully.
Oct  2 09:35:29 np0005466030 systemd[1]: session-62.scope: Consumed 2min 39.552s CPU time, 1.0G memory peak, read 451.8M from disk, written 296.8M to disk.
Oct  2 09:35:29 np0005466030 systemd-logind[795]: Removed session 62.
Oct  2 09:35:29 np0005466030 systemd-logind[795]: New session 63 of user zuul.
Oct  2 09:35:29 np0005466030 systemd[1]: Started Session 63 of User zuul.
Oct  2 09:35:29 np0005466030 systemd[1]: session-63.scope: Deactivated successfully.
Oct  2 09:35:29 np0005466030 systemd-logind[795]: Session 63 logged out. Waiting for processes to exit.
Oct  2 09:35:29 np0005466030 systemd-logind[795]: Removed session 63.
Oct  2 09:35:29 np0005466030 systemd-logind[795]: New session 64 of user zuul.
Oct  2 09:35:29 np0005466030 systemd[1]: Started Session 64 of User zuul.
Oct  2 09:35:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:29 np0005466030 systemd[1]: session-64.scope: Deactivated successfully.
Oct  2 09:35:29 np0005466030 systemd-logind[795]: Session 64 logged out. Waiting for processes to exit.
Oct  2 09:35:29 np0005466030 systemd-logind[795]: Removed session 64.
Oct  2 09:35:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:35:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:30.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:35:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:30.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:32.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:32.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:32 np0005466030 nova_compute[230518]: 2025-10-02 13:35:32.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:33 np0005466030 nova_compute[230518]: 2025-10-02 13:35:33.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:34.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:34.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:36.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:36.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:37 np0005466030 nova_compute[230518]: 2025-10-02 13:35:37.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:35:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:38.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:35:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:35:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:38.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:35:38 np0005466030 nova_compute[230518]: 2025-10-02 13:35:38.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:38 np0005466030 podman[328665]: 2025-10-02 13:35:38.805078334 +0000 UTC m=+0.056923286 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:35:38 np0005466030 podman[328664]: 2025-10-02 13:35:38.833069701 +0000 UTC m=+0.088407033 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:35:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:35:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 71K writes, 275K keys, 71K commit groups, 1.0 writes per commit group, ingest: 0.27 GB, 0.04 MB/s#012Cumulative WAL: 71K writes, 26K syncs, 2.69 writes per sync, written: 0.27 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1937 writes, 5766 keys, 1937 commit groups, 1.0 writes per commit group, ingest: 5.33 MB, 0.01 MB/s#012Interval WAL: 1937 writes, 854 syncs, 2.27 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:35:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:40.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:40.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:42.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:35:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:42.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:35:42 np0005466030 nova_compute[230518]: 2025-10-02 13:35:42.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:43 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:35:43 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:35:43 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:35:43 np0005466030 nova_compute[230518]: 2025-10-02 13:35:43.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:44.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:35:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:44.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:35:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:46 np0005466030 nova_compute[230518]: 2025-10-02 13:35:46.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:35:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:46.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:46.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:47 np0005466030 nova_compute[230518]: 2025-10-02 13:35:47.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:35:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:48.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:35:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:48.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:48 np0005466030 nova_compute[230518]: 2025-10-02 13:35:48.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:50 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:35:50 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:35:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:50.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:50.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:50 np0005466030 podman[328891]: 2025-10-02 13:35:50.80768189 +0000 UTC m=+0.058436782 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:35:50 np0005466030 podman[328892]: 2025-10-02 13:35:50.81086885 +0000 UTC m=+0.061629502 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 09:35:51 np0005466030 nova_compute[230518]: 2025-10-02 13:35:51.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:35:51 np0005466030 nova_compute[230518]: 2025-10-02 13:35:51.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:35:52 np0005466030 nova_compute[230518]: 2025-10-02 13:35:52.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:35:52 np0005466030 nova_compute[230518]: 2025-10-02 13:35:52.087 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:35:52 np0005466030 nova_compute[230518]: 2025-10-02 13:35:52.088 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:35:52 np0005466030 nova_compute[230518]: 2025-10-02 13:35:52.088 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:35:52 np0005466030 nova_compute[230518]: 2025-10-02 13:35:52.088 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:35:52 np0005466030 nova_compute[230518]: 2025-10-02 13:35:52.089 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:35:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:52.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:52 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:35:52 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1326945611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:35:52 np0005466030 nova_compute[230518]: 2025-10-02 13:35:52.525 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:35:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:35:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:52.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:35:52 np0005466030 nova_compute[230518]: 2025-10-02 13:35:52.677 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:35:52 np0005466030 nova_compute[230518]: 2025-10-02 13:35:52.678 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4136MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:35:52 np0005466030 nova_compute[230518]: 2025-10-02 13:35:52.679 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:35:52 np0005466030 nova_compute[230518]: 2025-10-02 13:35:52.679 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:35:52 np0005466030 nova_compute[230518]: 2025-10-02 13:35:52.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:52 np0005466030 nova_compute[230518]: 2025-10-02 13:35:52.980 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:35:52 np0005466030 nova_compute[230518]: 2025-10-02 13:35:52.981 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:35:53 np0005466030 nova_compute[230518]: 2025-10-02 13:35:53.018 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:35:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:35:53 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2167374719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:35:53 np0005466030 nova_compute[230518]: 2025-10-02 13:35:53.456 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:35:53 np0005466030 nova_compute[230518]: 2025-10-02 13:35:53.462 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:35:53 np0005466030 nova_compute[230518]: 2025-10-02 13:35:53.496 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:35:53 np0005466030 nova_compute[230518]: 2025-10-02 13:35:53.498 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:35:53 np0005466030 nova_compute[230518]: 2025-10-02 13:35:53.498 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:35:53 np0005466030 nova_compute[230518]: 2025-10-02 13:35:53.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:54.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:35:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:54.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:35:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:35:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:56.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:56.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:57 np0005466030 nova_compute[230518]: 2025-10-02 13:35:57.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:35:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:58.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:35:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:35:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:35:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:58.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:35:58 np0005466030 nova_compute[230518]: 2025-10-02 13:35:58.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:35:59 np0005466030 nova_compute[230518]: 2025-10-02 13:35:59.499 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:35:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:00.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:36:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:00.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:36:01 np0005466030 nova_compute[230518]: 2025-10-02 13:36:01.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:01 np0005466030 nova_compute[230518]: 2025-10-02 13:36:01.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:02 np0005466030 nova_compute[230518]: 2025-10-02 13:36:02.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:02.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:02.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:02 np0005466030 nova_compute[230518]: 2025-10-02 13:36:02.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:03 np0005466030 nova_compute[230518]: 2025-10-02 13:36:03.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:03 np0005466030 nova_compute[230518]: 2025-10-02 13:36:03.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:04.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:04.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:06 np0005466030 nova_compute[230518]: 2025-10-02 13:36:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:06 np0005466030 nova_compute[230518]: 2025-10-02 13:36:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:36:06 np0005466030 nova_compute[230518]: 2025-10-02 13:36:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:36:06 np0005466030 nova_compute[230518]: 2025-10-02 13:36:06.074 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:36:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:06.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:06.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:07 np0005466030 nova_compute[230518]: 2025-10-02 13:36:07.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:08.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:36:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:08.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:36:08 np0005466030 nova_compute[230518]: 2025-10-02 13:36:08.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:09 np0005466030 podman[328975]: 2025-10-02 13:36:09.825871137 +0000 UTC m=+0.075608111 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:36:09 np0005466030 podman[328974]: 2025-10-02 13:36:09.833491035 +0000 UTC m=+0.084853401 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Oct  2 09:36:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:36:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:10.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:36:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:36:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.0 total, 600.0 interval#012Cumulative writes: 17K writes, 90K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1516 writes, 7694 keys, 1516 commit groups, 1.0 writes per commit group, ingest: 16.13 MB, 0.03 MB/s#012Interval WAL: 1516 writes, 1516 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     58.6      1.91              0.33        59    0.032       0      0       0.0       0.0#012  L6      1/0   10.84 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.4    116.4     99.9      6.10              1.85        58    0.105    457K    31K       0.0       0.0#012 Sum      1/0   10.84 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.4     88.7     90.1      8.01              2.18       117    0.068    457K    31K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4     81.4     81.3      0.84              0.17        10    0.084     55K   2527       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    116.4     99.9      6.10              1.85        58    0.105    457K    31K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     58.7      1.91              0.33        58    0.033       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6600.0 total, 600.0 interval#012Flush(GB): cumulative 0.109, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.70 GB write, 0.11 MB/s write, 0.69 GB read, 0.11 MB/s read, 8.0 seconds#012Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 78.41 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000555 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4885,75.15 MB,24.7193%) FilterBlock(117,1.23 MB,0.404895%) IndexBlock(117,2.03 MB,0.66694%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  2 09:36:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:10.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:12.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:12.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:12 np0005466030 nova_compute[230518]: 2025-10-02 13:36:12.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:13 np0005466030 nova_compute[230518]: 2025-10-02 13:36:13.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:14.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:14.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:16.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:36:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:16.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:36:17 np0005466030 nova_compute[230518]: 2025-10-02 13:36:17.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:18.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:36:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:18.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:36:18 np0005466030 nova_compute[230518]: 2025-10-02 13:36:18.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:20.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:20.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:21 np0005466030 podman[329020]: 2025-10-02 13:36:21.822303425 +0000 UTC m=+0.064894566 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:36:21 np0005466030 podman[329019]: 2025-10-02 13:36:21.841989142 +0000 UTC m=+0.097631892 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:36:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:36:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:22.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:36:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:22.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:22 np0005466030 nova_compute[230518]: 2025-10-02 13:36:22.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:23 np0005466030 nova_compute[230518]: 2025-10-02 13:36:23.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:24.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:24.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:36:25.996 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:36:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:36:25.997 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:36:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:36:25.997 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:36:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:26.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:26.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:27 np0005466030 nova_compute[230518]: 2025-10-02 13:36:27.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:28.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:28.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:28 np0005466030 nova_compute[230518]: 2025-10-02 13:36:28.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:36:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:30.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:36:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:30.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:32.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:32 np0005466030 nova_compute[230518]: 2025-10-02 13:36:32.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:32.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:33 np0005466030 nova_compute[230518]: 2025-10-02 13:36:33.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:34.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:34.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:36.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:36.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:37 np0005466030 nova_compute[230518]: 2025-10-02 13:36:37.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:38.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:36:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:38.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:36:38 np0005466030 nova_compute[230518]: 2025-10-02 13:36:38.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:40.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:40.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:40 np0005466030 podman[329061]: 2025-10-02 13:36:40.80004156 +0000 UTC m=+0.047673946 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:36:40 np0005466030 podman[329060]: 2025-10-02 13:36:40.822335059 +0000 UTC m=+0.076566521 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct  2 09:36:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:36:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:42.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:36:42 np0005466030 nova_compute[230518]: 2025-10-02 13:36:42.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:42.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:43 np0005466030 nova_compute[230518]: 2025-10-02 13:36:43.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:44.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:44.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:46.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:46.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:47 np0005466030 nova_compute[230518]: 2025-10-02 13:36:47.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:48 np0005466030 nova_compute[230518]: 2025-10-02 13:36:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:48.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:48.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:48 np0005466030 nova_compute[230518]: 2025-10-02 13:36:48.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:36:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:50.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:36:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:50.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:51 np0005466030 nova_compute[230518]: 2025-10-02 13:36:51.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:51 np0005466030 nova_compute[230518]: 2025-10-02 13:36:51.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:36:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:36:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:36:51 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:36:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:52.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:52 np0005466030 nova_compute[230518]: 2025-10-02 13:36:52.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:36:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:52.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:36:52 np0005466030 podman[329232]: 2025-10-02 13:36:52.797257529 +0000 UTC m=+0.050278167 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:36:52 np0005466030 podman[329233]: 2025-10-02 13:36:52.81321324 +0000 UTC m=+0.059151356 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 09:36:53 np0005466030 nova_compute[230518]: 2025-10-02 13:36:53.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:53 np0005466030 nova_compute[230518]: 2025-10-02 13:36:53.085 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:36:53 np0005466030 nova_compute[230518]: 2025-10-02 13:36:53.085 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:36:53 np0005466030 nova_compute[230518]: 2025-10-02 13:36:53.085 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:36:53 np0005466030 nova_compute[230518]: 2025-10-02 13:36:53.086 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:36:53 np0005466030 nova_compute[230518]: 2025-10-02 13:36:53.086 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:36:53 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:36:53 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2432501427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:36:53 np0005466030 nova_compute[230518]: 2025-10-02 13:36:53.490 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:36:53 np0005466030 nova_compute[230518]: 2025-10-02 13:36:53.628 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:36:53 np0005466030 nova_compute[230518]: 2025-10-02 13:36:53.629 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4151MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:36:53 np0005466030 nova_compute[230518]: 2025-10-02 13:36:53.629 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:36:53 np0005466030 nova_compute[230518]: 2025-10-02 13:36:53.629 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:36:53 np0005466030 nova_compute[230518]: 2025-10-02 13:36:53.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:53 np0005466030 nova_compute[230518]: 2025-10-02 13:36:53.924 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:36:53 np0005466030 nova_compute[230518]: 2025-10-02 13:36:53.925 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:36:54 np0005466030 nova_compute[230518]: 2025-10-02 13:36:54.154 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:36:54 np0005466030 nova_compute[230518]: 2025-10-02 13:36:54.178 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:36:54 np0005466030 nova_compute[230518]: 2025-10-02 13:36:54.178 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:36:54 np0005466030 nova_compute[230518]: 2025-10-02 13:36:54.213 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:36:54 np0005466030 nova_compute[230518]: 2025-10-02 13:36:54.248 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:36:54 np0005466030 nova_compute[230518]: 2025-10-02 13:36:54.274 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:36:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:54.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:36:54 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1900774806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:36:54 np0005466030 nova_compute[230518]: 2025-10-02 13:36:54.687 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:36:54 np0005466030 nova_compute[230518]: 2025-10-02 13:36:54.692 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:36:54 np0005466030 nova_compute[230518]: 2025-10-02 13:36:54.747 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:36:54 np0005466030 nova_compute[230518]: 2025-10-02 13:36:54.748 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:36:54 np0005466030 nova_compute[230518]: 2025-10-02 13:36:54.748 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:36:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:54.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:36:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:56.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:56.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:57 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:36:57 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:36:57 np0005466030 nova_compute[230518]: 2025-10-02 13:36:57.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:36:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:58.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:36:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:36:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:36:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:58.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:36:58 np0005466030 nova_compute[230518]: 2025-10-02 13:36:58.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:36:59 np0005466030 nova_compute[230518]: 2025-10-02 13:36:59.748 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:36:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:00.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:00.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:02 np0005466030 nova_compute[230518]: 2025-10-02 13:37:02.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:37:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:02.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:37:02 np0005466030 nova_compute[230518]: 2025-10-02 13:37:02.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:02.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:03 np0005466030 nova_compute[230518]: 2025-10-02 13:37:03.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:03 np0005466030 nova_compute[230518]: 2025-10-02 13:37:03.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:03 np0005466030 nova_compute[230518]: 2025-10-02 13:37:03.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:03 np0005466030 nova_compute[230518]: 2025-10-02 13:37:03.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:04.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:04 np0005466030 nova_compute[230518]: 2025-10-02 13:37:04.721 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:04.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:37:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1174177996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:37:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:37:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1174177996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:37:06 np0005466030 nova_compute[230518]: 2025-10-02 13:37:06.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:06 np0005466030 nova_compute[230518]: 2025-10-02 13:37:06.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:37:06 np0005466030 nova_compute[230518]: 2025-10-02 13:37:06.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:37:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:06.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:06.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:06 np0005466030 nova_compute[230518]: 2025-10-02 13:37:06.909 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:37:07 np0005466030 nova_compute[230518]: 2025-10-02 13:37:07.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:08.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:08.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:08 np0005466030 nova_compute[230518]: 2025-10-02 13:37:08.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:08 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #187. Immutable memtables: 0.
Oct  2 09:37:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:08.949831) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:37:08 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 187
Oct  2 09:37:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412228949861, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1902, "num_deletes": 256, "total_data_size": 4369363, "memory_usage": 4436848, "flush_reason": "Manual Compaction"}
Oct  2 09:37:08 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #188: started
Oct  2 09:37:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412228966600, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 188, "file_size": 2863530, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89806, "largest_seqno": 91703, "table_properties": {"data_size": 2855298, "index_size": 4917, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18571, "raw_average_key_size": 20, "raw_value_size": 2838286, "raw_average_value_size": 3174, "num_data_blocks": 216, "num_entries": 894, "num_filter_entries": 894, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412069, "oldest_key_time": 1759412069, "file_creation_time": 1759412228, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:37:08 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 16815 microseconds, and 7726 cpu microseconds.
Oct  2 09:37:08 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:37:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:08.966642) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #188: 2863530 bytes OK
Oct  2 09:37:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:08.966662) [db/memtable_list.cc:519] [default] Level-0 commit table #188 started
Oct  2 09:37:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:08.968641) [db/memtable_list.cc:722] [default] Level-0 commit table #188: memtable #1 done
Oct  2 09:37:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:08.968658) EVENT_LOG_v1 {"time_micros": 1759412228968652, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:37:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:08.968676) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:37:08 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 4360377, prev total WAL file size 4360377, number of live WAL files 2.
Oct  2 09:37:08 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000184.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:37:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:08.969944) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353230' seq:72057594037927935, type:22 .. '6C6F676D0033373733' seq:0, type:0; will stop at (end)
Oct  2 09:37:08 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:37:08 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [188(2796KB)], [186(10MB)]
Oct  2 09:37:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412228969986, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [188], "files_L6": [186], "score": -1, "input_data_size": 14232544, "oldest_snapshot_seqno": -1}
Oct  2 09:37:09 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #189: 11350 keys, 14109957 bytes, temperature: kUnknown
Oct  2 09:37:09 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412229054325, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 189, "file_size": 14109957, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14037321, "index_size": 43147, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28421, "raw_key_size": 300255, "raw_average_key_size": 26, "raw_value_size": 13839797, "raw_average_value_size": 1219, "num_data_blocks": 1641, "num_entries": 11350, "num_filter_entries": 11350, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759412228, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 189, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:37:09 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:37:09 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:09.054752) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 14109957 bytes
Oct  2 09:37:09 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:09.057735) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.4 rd, 167.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 10.8 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(9.9) write-amplify(4.9) OK, records in: 11875, records dropped: 525 output_compression: NoCompression
Oct  2 09:37:09 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:09.057765) EVENT_LOG_v1 {"time_micros": 1759412229057751, "job": 120, "event": "compaction_finished", "compaction_time_micros": 84512, "compaction_time_cpu_micros": 44404, "output_level": 6, "num_output_files": 1, "total_output_size": 14109957, "num_input_records": 11875, "num_output_records": 11350, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:37:09 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:37:09 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412229059163, "job": 120, "event": "table_file_deletion", "file_number": 188}
Oct  2 09:37:09 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000186.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:37:09 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412229063335, "job": 120, "event": "table_file_deletion", "file_number": 186}
Oct  2 09:37:09 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:08.969799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:37:09 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:09.063526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:37:09 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:09.063533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:37:09 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:09.063536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:37:09 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:09.063539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:37:09 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:09.063542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:37:09 np0005466030 nova_compute[230518]: 2025-10-02 13:37:09.903 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:10.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:10.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:11 np0005466030 podman[329365]: 2025-10-02 13:37:11.800592577 +0000 UTC m=+0.051402783 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  2 09:37:11 np0005466030 podman[329364]: 2025-10-02 13:37:11.814533663 +0000 UTC m=+0.070842662 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:37:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:12.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:12 np0005466030 nova_compute[230518]: 2025-10-02 13:37:12.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:12.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:13 np0005466030 nova_compute[230518]: 2025-10-02 13:37:13.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:13 np0005466030 nova_compute[230518]: 2025-10-02 13:37:13.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:37:13 np0005466030 nova_compute[230518]: 2025-10-02 13:37:13.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:14 np0005466030 nova_compute[230518]: 2025-10-02 13:37:14.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:37:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:14.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:37:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:14.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:16.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:16.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:17 np0005466030 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct  2 09:37:17 np0005466030 nova_compute[230518]: 2025-10-02 13:37:17.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:18.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:18 np0005466030 nova_compute[230518]: 2025-10-02 13:37:18.783 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:18 np0005466030 nova_compute[230518]: 2025-10-02 13:37:18.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:37:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:18.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:37:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:20.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:37:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:20.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:37:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:37:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:22.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:37:22 np0005466030 nova_compute[230518]: 2025-10-02 13:37:22.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:22.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:23 np0005466030 podman[329409]: 2025-10-02 13:37:23.801070106 +0000 UTC m=+0.055663517 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 09:37:23 np0005466030 nova_compute[230518]: 2025-10-02 13:37:23.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:23 np0005466030 podman[329408]: 2025-10-02 13:37:23.852061704 +0000 UTC m=+0.101246375 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, container_name=iscsid)
Oct  2 09:37:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:37:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:24.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:37:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:24.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:37:25.997 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:37:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:37:25.997 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:37:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:37:25.997 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:37:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:26.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:37:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:26.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:37:27 np0005466030 nova_compute[230518]: 2025-10-02 13:37:27.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:37:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:28.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:37:28 np0005466030 nova_compute[230518]: 2025-10-02 13:37:28.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:37:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:28.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:37:29 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:30.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:37:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:30.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:37:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:32.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:32 np0005466030 nova_compute[230518]: 2025-10-02 13:37:32.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:32.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:33 np0005466030 nova_compute[230518]: 2025-10-02 13:37:33.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:34.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:34.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:34 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:36.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:36.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:37 np0005466030 nova_compute[230518]: 2025-10-02 13:37:37.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:38.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:38 np0005466030 nova_compute[230518]: 2025-10-02 13:37:38.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:37:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:38.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:37:39 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:40.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:40.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:37:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:42.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:37:42 np0005466030 nova_compute[230518]: 2025-10-02 13:37:42.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:42 np0005466030 podman[329448]: 2025-10-02 13:37:42.807269232 +0000 UTC m=+0.054902513 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:37:42 np0005466030 podman[329447]: 2025-10-02 13:37:42.839510412 +0000 UTC m=+0.085685987 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 09:37:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:37:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:42.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:37:43 np0005466030 nova_compute[230518]: 2025-10-02 13:37:43.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:37:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:44.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:37:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:37:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:44.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:37:44 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:46.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:46.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:47 np0005466030 nova_compute[230518]: 2025-10-02 13:37:47.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:48 np0005466030 nova_compute[230518]: 2025-10-02 13:37:48.090 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:37:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:48.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:37:48 np0005466030 nova_compute[230518]: 2025-10-02 13:37:48.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:37:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:48.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:37:49 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:37:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:50.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:37:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:37:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:50.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:37:51 np0005466030 nova_compute[230518]: 2025-10-02 13:37:51.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:51 np0005466030 nova_compute[230518]: 2025-10-02 13:37:51.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:37:51 np0005466030 nova_compute[230518]: 2025-10-02 13:37:51.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:51 np0005466030 nova_compute[230518]: 2025-10-02 13:37:51.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:37:51 np0005466030 nova_compute[230518]: 2025-10-02 13:37:51.088 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:37:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:52.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:52 np0005466030 nova_compute[230518]: 2025-10-02 13:37:52.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:52.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:53 np0005466030 nova_compute[230518]: 2025-10-02 13:37:53.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:37:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:54.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:37:54 np0005466030 podman[329490]: 2025-10-02 13:37:54.801341073 +0000 UTC m=+0.050892797 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:37:54 np0005466030 podman[329491]: 2025-10-02 13:37:54.85614436 +0000 UTC m=+0.089647991 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 09:37:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:54.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:54 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:37:55 np0005466030 nova_compute[230518]: 2025-10-02 13:37:55.087 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:55 np0005466030 nova_compute[230518]: 2025-10-02 13:37:55.122 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:37:55 np0005466030 nova_compute[230518]: 2025-10-02 13:37:55.122 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:37:55 np0005466030 nova_compute[230518]: 2025-10-02 13:37:55.123 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:37:55 np0005466030 nova_compute[230518]: 2025-10-02 13:37:55.123 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:37:55 np0005466030 nova_compute[230518]: 2025-10-02 13:37:55.123 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:37:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:37:55 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/439213855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:37:55 np0005466030 nova_compute[230518]: 2025-10-02 13:37:55.590 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:37:55 np0005466030 nova_compute[230518]: 2025-10-02 13:37:55.813 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:37:55 np0005466030 nova_compute[230518]: 2025-10-02 13:37:55.816 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4131MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:37:55 np0005466030 nova_compute[230518]: 2025-10-02 13:37:55.816 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:37:55 np0005466030 nova_compute[230518]: 2025-10-02 13:37:55.817 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:37:55 np0005466030 nova_compute[230518]: 2025-10-02 13:37:55.906 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:37:55 np0005466030 nova_compute[230518]: 2025-10-02 13:37:55.907 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:37:55 np0005466030 nova_compute[230518]: 2025-10-02 13:37:55.941 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:37:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:37:56 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3149066955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:37:56 np0005466030 nova_compute[230518]: 2025-10-02 13:37:56.416 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:37:56 np0005466030 nova_compute[230518]: 2025-10-02 13:37:56.421 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:37:56 np0005466030 nova_compute[230518]: 2025-10-02 13:37:56.447 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:37:56 np0005466030 nova_compute[230518]: 2025-10-02 13:37:56.449 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:37:56 np0005466030 nova_compute[230518]: 2025-10-02 13:37:56.449 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:37:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:56.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:56.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:57 np0005466030 nova_compute[230518]: 2025-10-02 13:37:57.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:58.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:58 np0005466030 nova_compute[230518]: 2025-10-02 13:37:58.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:37:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:37:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:37:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:58.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:37:59 np0005466030 nova_compute[230518]: 2025-10-02 13:37:59.414 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:37:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:00 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:38:00 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:38:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:00.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:00.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:38:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:38:01 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:38:02 np0005466030 nova_compute[230518]: 2025-10-02 13:38:02.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:38:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:02.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:02 np0005466030 nova_compute[230518]: 2025-10-02 13:38:02.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:02.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:03 np0005466030 nova_compute[230518]: 2025-10-02 13:38:03.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:04 np0005466030 nova_compute[230518]: 2025-10-02 13:38:04.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:38:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:04.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:04.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:04 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:05 np0005466030 nova_compute[230518]: 2025-10-02 13:38:05.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:38:05 np0005466030 nova_compute[230518]: 2025-10-02 13:38:05.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:38:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:38:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:06.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:38:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:38:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:06.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:38:07 np0005466030 nova_compute[230518]: 2025-10-02 13:38:07.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:38:07 np0005466030 nova_compute[230518]: 2025-10-02 13:38:07.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:38:07 np0005466030 nova_compute[230518]: 2025-10-02 13:38:07.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:38:07 np0005466030 nova_compute[230518]: 2025-10-02 13:38:07.123 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:38:07 np0005466030 nova_compute[230518]: 2025-10-02 13:38:07.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:08.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #190. Immutable memtables: 0.
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.582010) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 190
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288582095, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 814, "num_deletes": 251, "total_data_size": 1571607, "memory_usage": 1591816, "flush_reason": "Manual Compaction"}
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #191: started
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288590344, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 191, "file_size": 1037761, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 91708, "largest_seqno": 92517, "table_properties": {"data_size": 1033885, "index_size": 1655, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8768, "raw_average_key_size": 19, "raw_value_size": 1026112, "raw_average_value_size": 2295, "num_data_blocks": 73, "num_entries": 447, "num_filter_entries": 447, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412229, "oldest_key_time": 1759412229, "file_creation_time": 1759412288, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 8350 microseconds, and 3897 cpu microseconds.
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.590381) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #191: 1037761 bytes OK
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.590402) [db/memtable_list.cc:519] [default] Level-0 commit table #191 started
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.591540) [db/memtable_list.cc:722] [default] Level-0 commit table #191: memtable #1 done
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.591553) EVENT_LOG_v1 {"time_micros": 1759412288591549, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.591572) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 1567362, prev total WAL file size 1567362, number of live WAL files 2.
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000187.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.592237) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [191(1013KB)], [189(13MB)]
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288592374, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [191], "files_L6": [189], "score": -1, "input_data_size": 15147718, "oldest_snapshot_seqno": -1}
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #192: 11281 keys, 13208412 bytes, temperature: kUnknown
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288700824, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 192, "file_size": 13208412, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13137075, "index_size": 42040, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28229, "raw_key_size": 299502, "raw_average_key_size": 26, "raw_value_size": 12941483, "raw_average_value_size": 1147, "num_data_blocks": 1589, "num_entries": 11281, "num_filter_entries": 11281, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759412288, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 192, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.701107) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 13208412 bytes
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.702955) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.6 rd, 121.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.5 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(27.3) write-amplify(12.7) OK, records in: 11797, records dropped: 516 output_compression: NoCompression
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.702971) EVENT_LOG_v1 {"time_micros": 1759412288702963, "job": 122, "event": "compaction_finished", "compaction_time_micros": 108522, "compaction_time_cpu_micros": 61472, "output_level": 6, "num_output_files": 1, "total_output_size": 13208412, "num_input_records": 11797, "num_output_records": 11281, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288703294, "job": 122, "event": "table_file_deletion", "file_number": 191}
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000189.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288705797, "job": 122, "event": "table_file_deletion", "file_number": 189}
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.592089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.705941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.705946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.705948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.705950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:38:08 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.705952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:38:08 np0005466030 nova_compute[230518]: 2025-10-02 13:38:08.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:38:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:08.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:38:09 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:38:09 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:38:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:38:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:10.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:38:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:38:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:10.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:38:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:12.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:12 np0005466030 nova_compute[230518]: 2025-10-02 13:38:12.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:12.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:13 np0005466030 podman[329754]: 2025-10-02 13:38:13.796582408 +0000 UTC m=+0.050017810 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  2 09:38:13 np0005466030 nova_compute[230518]: 2025-10-02 13:38:13.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:13 np0005466030 podman[329753]: 2025-10-02 13:38:13.833692711 +0000 UTC m=+0.089728885 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:38:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:38:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:14.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:38:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:14.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:16.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:16.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:17 np0005466030 nova_compute[230518]: 2025-10-02 13:38:17.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:18.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:18 np0005466030 nova_compute[230518]: 2025-10-02 13:38:18.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:18.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:20.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:20.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:22.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:22 np0005466030 nova_compute[230518]: 2025-10-02 13:38:22.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:22.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:23 np0005466030 nova_compute[230518]: 2025-10-02 13:38:23.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:24.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:24.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:24 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:25 np0005466030 podman[329798]: 2025-10-02 13:38:25.829093123 +0000 UTC m=+0.078467901 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 09:38:25 np0005466030 podman[329799]: 2025-10-02 13:38:25.847070606 +0000 UTC m=+0.075798367 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd)
Oct  2 09:38:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:38:25.998 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:38:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:38:25.998 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:38:25 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:38:25.998 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:38:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:38:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:26.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:38:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:26.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:27 np0005466030 nova_compute[230518]: 2025-10-02 13:38:27.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:28.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:28 np0005466030 nova_compute[230518]: 2025-10-02 13:38:28.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:38:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:28.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:38:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:38:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:30.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:38:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:30.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:38:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:32.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:38:32 np0005466030 nova_compute[230518]: 2025-10-02 13:38:32.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:32.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:33 np0005466030 nova_compute[230518]: 2025-10-02 13:38:33.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:34.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:34.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:38:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:36.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:38:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:38:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:36.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:38:37 np0005466030 nova_compute[230518]: 2025-10-02 13:38:37.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:38:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:38.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:38:38 np0005466030 nova_compute[230518]: 2025-10-02 13:38:38.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:38:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:38.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:38:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:40.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:40.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:42.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:42 np0005466030 nova_compute[230518]: 2025-10-02 13:38:42.725 2 DEBUG oslo_concurrency.processutils [None req-f37077f4-0dab-40b0-9f70-91a74b690f75 c004f5628e4845ada3addf46ef5dfd33 c3a6b94d2b4945a487dafe07f533efd6 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:38:42 np0005466030 nova_compute[230518]: 2025-10-02 13:38:42.769 2 DEBUG oslo_concurrency.processutils [None req-f37077f4-0dab-40b0-9f70-91a74b690f75 c004f5628e4845ada3addf46ef5dfd33 c3a6b94d2b4945a487dafe07f533efd6 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:38:42 np0005466030 nova_compute[230518]: 2025-10-02 13:38:42.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:42.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:43 np0005466030 nova_compute[230518]: 2025-10-02 13:38:43.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:44.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:44 np0005466030 podman[329842]: 2025-10-02 13:38:44.797333111 +0000 UTC m=+0.044957060 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  2 09:38:44 np0005466030 podman[329841]: 2025-10-02 13:38:44.826045961 +0000 UTC m=+0.075908341 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible)
Oct  2 09:38:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:38:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:44.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:38:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:46.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:46.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:47 np0005466030 nova_compute[230518]: 2025-10-02 13:38:47.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:48 np0005466030 nova_compute[230518]: 2025-10-02 13:38:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:38:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:48.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:48 np0005466030 nova_compute[230518]: 2025-10-02 13:38:48.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:38:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:48.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:38:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:38:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:50.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:38:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:38:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:50.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:38:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:38:51.510 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=88, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=87) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  2 09:38:51 np0005466030 nova_compute[230518]: 2025-10-02 13:38:51.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:38:51.511 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  2 09:38:51 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:38:51.512 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '88'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  2 09:38:52 np0005466030 nova_compute[230518]: 2025-10-02 13:38:52.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:38:52 np0005466030 nova_compute[230518]: 2025-10-02 13:38:52.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:38:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:52.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:52 np0005466030 nova_compute[230518]: 2025-10-02 13:38:52.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:52.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:53 np0005466030 nova_compute[230518]: 2025-10-02 13:38:53.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:54.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:54.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:38:55 np0005466030 nova_compute[230518]: 2025-10-02 13:38:55.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:38:55 np0005466030 nova_compute[230518]: 2025-10-02 13:38:55.460 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:38:55 np0005466030 nova_compute[230518]: 2025-10-02 13:38:55.461 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:38:55 np0005466030 nova_compute[230518]: 2025-10-02 13:38:55.461 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:38:55 np0005466030 nova_compute[230518]: 2025-10-02 13:38:55.461 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:38:55 np0005466030 nova_compute[230518]: 2025-10-02 13:38:55.461 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:38:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:38:55 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2831501368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:38:55 np0005466030 nova_compute[230518]: 2025-10-02 13:38:55.871 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:38:56 np0005466030 nova_compute[230518]: 2025-10-02 13:38:56.012 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:38:56 np0005466030 nova_compute[230518]: 2025-10-02 13:38:56.013 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4136MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:38:56 np0005466030 nova_compute[230518]: 2025-10-02 13:38:56.013 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:38:56 np0005466030 nova_compute[230518]: 2025-10-02 13:38:56.014 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:38:56 np0005466030 nova_compute[230518]: 2025-10-02 13:38:56.114 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:38:56 np0005466030 nova_compute[230518]: 2025-10-02 13:38:56.114 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:38:56 np0005466030 nova_compute[230518]: 2025-10-02 13:38:56.134 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:38:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:38:56 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3151770201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:38:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:56.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:56 np0005466030 nova_compute[230518]: 2025-10-02 13:38:56.555 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:38:56 np0005466030 nova_compute[230518]: 2025-10-02 13:38:56.560 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:38:56 np0005466030 nova_compute[230518]: 2025-10-02 13:38:56.586 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:38:56 np0005466030 nova_compute[230518]: 2025-10-02 13:38:56.588 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:38:56 np0005466030 nova_compute[230518]: 2025-10-02 13:38:56.589 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:38:56 np0005466030 podman[329933]: 2025-10-02 13:38:56.802982601 +0000 UTC m=+0.055337565 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  2 09:38:56 np0005466030 podman[329932]: 2025-10-02 13:38:56.814909805 +0000 UTC m=+0.068403705 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 09:38:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:56.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:57 np0005466030 nova_compute[230518]: 2025-10-02 13:38:57.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:58.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:38:58 np0005466030 nova_compute[230518]: 2025-10-02 13:38:58.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:38:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:38:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:38:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:58.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:00.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:00.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:01 np0005466030 nova_compute[230518]: 2025-10-02 13:39:01.589 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:02.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:02 np0005466030 nova_compute[230518]: 2025-10-02 13:39:02.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:39:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:02.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:39:03 np0005466030 nova_compute[230518]: 2025-10-02 13:39:03.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:03 np0005466030 nova_compute[230518]: 2025-10-02 13:39:03.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:04.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:04.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:05 np0005466030 nova_compute[230518]: 2025-10-02 13:39:05.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:05 np0005466030 nova_compute[230518]: 2025-10-02 13:39:05.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:39:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1700533051' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:39:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:39:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1700533051' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:39:06 np0005466030 nova_compute[230518]: 2025-10-02 13:39:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:06.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:06.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:07 np0005466030 nova_compute[230518]: 2025-10-02 13:39:07.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:08.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:08 np0005466030 nova_compute[230518]: 2025-10-02 13:39:08.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:08.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:09 np0005466030 nova_compute[230518]: 2025-10-02 13:39:09.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:09 np0005466030 nova_compute[230518]: 2025-10-02 13:39:09.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:39:09 np0005466030 nova_compute[230518]: 2025-10-02 13:39:09.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:39:09 np0005466030 nova_compute[230518]: 2025-10-02 13:39:09.077 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:39:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:10 np0005466030 nova_compute[230518]: 2025-10-02 13:39:10.071 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:39:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:39:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:39:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:10.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:10.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:12.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:12 np0005466030 nova_compute[230518]: 2025-10-02 13:39:12.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:39:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:12.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:39:13 np0005466030 nova_compute[230518]: 2025-10-02 13:39:13.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:14.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:39:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:14.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:39:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:15 np0005466030 podman[330106]: 2025-10-02 13:39:15.790091032 +0000 UTC m=+0.042759632 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:39:15 np0005466030 podman[330105]: 2025-10-02 13:39:15.822223829 +0000 UTC m=+0.077866723 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  2 09:39:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:16.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:16.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:17 np0005466030 nova_compute[230518]: 2025-10-02 13:39:17.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:39:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:39:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:39:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:18.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:39:18 np0005466030 nova_compute[230518]: 2025-10-02 13:39:18.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:18.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:20.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:20.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:39:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:22.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:39:22 np0005466030 nova_compute[230518]: 2025-10-02 13:39:22.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:39:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:22.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:39:23 np0005466030 nova_compute[230518]: 2025-10-02 13:39:23.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:24.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:25.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:39:26.000 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:39:26.001 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:39:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:39:26.001 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:39:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:26.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:39:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:27.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:39:27 np0005466030 podman[330199]: 2025-10-02 13:39:27.792181224 +0000 UTC m=+0.045309822 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  2 09:39:27 np0005466030 nova_compute[230518]: 2025-10-02 13:39:27.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:27 np0005466030 podman[330198]: 2025-10-02 13:39:27.820078519 +0000 UTC m=+0.076301944 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  2 09:39:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:28.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:28 np0005466030 nova_compute[230518]: 2025-10-02 13:39:28.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:39:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:29.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:39:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:30.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:39:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:31.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:39:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:32.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:32 np0005466030 nova_compute[230518]: 2025-10-02 13:39:32.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:33.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:33 np0005466030 nova_compute[230518]: 2025-10-02 13:39:33.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:34.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:35.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:36.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:37.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:37 np0005466030 nova_compute[230518]: 2025-10-02 13:39:37.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:38.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:38 np0005466030 nova_compute[230518]: 2025-10-02 13:39:38.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:39.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:40.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:39:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:41.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:39:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:42.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:42 np0005466030 nova_compute[230518]: 2025-10-02 13:39:42.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:43.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:43 np0005466030 nova_compute[230518]: 2025-10-02 13:39:43.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:44.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:45.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:39:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:46.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:39:46 np0005466030 podman[330240]: 2025-10-02 13:39:46.793076095 +0000 UTC m=+0.046286562 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:39:46 np0005466030 podman[330239]: 2025-10-02 13:39:46.813729463 +0000 UTC m=+0.070639366 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  2 09:39:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:47.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:47 np0005466030 nova_compute[230518]: 2025-10-02 13:39:47.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:48 np0005466030 nova_compute[230518]: 2025-10-02 13:39:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:39:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:48.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:39:48 np0005466030 nova_compute[230518]: 2025-10-02 13:39:48.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:39:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:49.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:39:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:50.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:39:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:51.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:39:52 np0005466030 nova_compute[230518]: 2025-10-02 13:39:52.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:52 np0005466030 nova_compute[230518]: 2025-10-02 13:39:52.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:39:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:52.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:52 np0005466030 nova_compute[230518]: 2025-10-02 13:39:52.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:53.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:53 np0005466030 nova_compute[230518]: 2025-10-02 13:39:53.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:39:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:54.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:39:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:39:55 np0005466030 nova_compute[230518]: 2025-10-02 13:39:55.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:39:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:39:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:55.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:39:55 np0005466030 nova_compute[230518]: 2025-10-02 13:39:55.622 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:39:55 np0005466030 nova_compute[230518]: 2025-10-02 13:39:55.622 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:39:55 np0005466030 nova_compute[230518]: 2025-10-02 13:39:55.622 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:39:55 np0005466030 nova_compute[230518]: 2025-10-02 13:39:55.622 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:39:55 np0005466030 nova_compute[230518]: 2025-10-02 13:39:55.622 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:39:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:39:56 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2950983091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:39:56 np0005466030 nova_compute[230518]: 2025-10-02 13:39:56.056 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:39:56 np0005466030 nova_compute[230518]: 2025-10-02 13:39:56.210 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:39:56 np0005466030 nova_compute[230518]: 2025-10-02 13:39:56.211 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4130MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:39:56 np0005466030 nova_compute[230518]: 2025-10-02 13:39:56.212 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:39:56 np0005466030 nova_compute[230518]: 2025-10-02 13:39:56.212 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:39:56 np0005466030 nova_compute[230518]: 2025-10-02 13:39:56.368 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:39:56 np0005466030 nova_compute[230518]: 2025-10-02 13:39:56.369 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:39:56 np0005466030 nova_compute[230518]: 2025-10-02 13:39:56.391 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:39:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:56.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:39:56 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3887751941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:39:56 np0005466030 nova_compute[230518]: 2025-10-02 13:39:56.811 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:39:56 np0005466030 nova_compute[230518]: 2025-10-02 13:39:56.816 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:39:56 np0005466030 nova_compute[230518]: 2025-10-02 13:39:56.838 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:39:56 np0005466030 nova_compute[230518]: 2025-10-02 13:39:56.839 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:39:56 np0005466030 nova_compute[230518]: 2025-10-02 13:39:56.840 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:39:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:39:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:57.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:39:57 np0005466030 nova_compute[230518]: 2025-10-02 13:39:57.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:39:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:58.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:39:58 np0005466030 podman[330330]: 2025-10-02 13:39:58.79906943 +0000 UTC m=+0.053427836 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:39:58 np0005466030 podman[330329]: 2025-10-02 13:39:58.823043501 +0000 UTC m=+0.080703620 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0)
Oct  2 09:39:58 np0005466030 nova_compute[230518]: 2025-10-02 13:39:58.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:39:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:39:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:39:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:59.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:40:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:40:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:00.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:40:00 np0005466030 ceph-mon[80926]: overall HEALTH_OK
Oct  2 09:40:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:01.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:02.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:02 np0005466030 nova_compute[230518]: 2025-10-02 13:40:02.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:02 np0005466030 nova_compute[230518]: 2025-10-02 13:40:02.842 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:40:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:03.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:03 np0005466030 nova_compute[230518]: 2025-10-02 13:40:03.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:04.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:05 np0005466030 nova_compute[230518]: 2025-10-02 13:40:05.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:40:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:40:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:05.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:40:06 np0005466030 nova_compute[230518]: 2025-10-02 13:40:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:40:06 np0005466030 nova_compute[230518]: 2025-10-02 13:40:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:40:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:06.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:07 np0005466030 nova_compute[230518]: 2025-10-02 13:40:07.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:40:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:07.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:07 np0005466030 nova_compute[230518]: 2025-10-02 13:40:07.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:40:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:08.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:40:08 np0005466030 nova_compute[230518]: 2025-10-02 13:40:08.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:09 np0005466030 nova_compute[230518]: 2025-10-02 13:40:09.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:40:09 np0005466030 nova_compute[230518]: 2025-10-02 13:40:09.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:40:09 np0005466030 nova_compute[230518]: 2025-10-02 13:40:09.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:40:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:09.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:09 np0005466030 nova_compute[230518]: 2025-10-02 13:40:09.075 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:40:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:40:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:10.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:40:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:11.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:12.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:12 np0005466030 nova_compute[230518]: 2025-10-02 13:40:12.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:13.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:13 np0005466030 nova_compute[230518]: 2025-10-02 13:40:13.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:40:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:14.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:40:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:15.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:16.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:17.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:17 np0005466030 podman[330392]: 2025-10-02 13:40:17.713209682 +0000 UTC m=+0.048465080 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 09:40:17 np0005466030 podman[330391]: 2025-10-02 13:40:17.736953136 +0000 UTC m=+0.075627941 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  2 09:40:17 np0005466030 nova_compute[230518]: 2025-10-02 13:40:17.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:18.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:18 np0005466030 nova_compute[230518]: 2025-10-02 13:40:18.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:19.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:40:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:40:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:40:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:20.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:21.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:40:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:22.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:40:22 np0005466030 nova_compute[230518]: 2025-10-02 13:40:22.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:23.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:23 np0005466030 nova_compute[230518]: 2025-10-02 13:40:23.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:24.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:25.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:40:26.002 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:40:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:40:26.003 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:40:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:40:26.003 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:40:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:40:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:26.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:40:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:27.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:27 np0005466030 nova_compute[230518]: 2025-10-02 13:40:27.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:28.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:28 np0005466030 nova_compute[230518]: 2025-10-02 13:40:28.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:40:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:29.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:40:29 np0005466030 podman[330571]: 2025-10-02 13:40:29.168481371 +0000 UTC m=+0.081475645 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct  2 09:40:29 np0005466030 podman[330570]: 2025-10-02 13:40:29.173388554 +0000 UTC m=+0.082317961 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:40:29 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:40:29 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:40:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:30.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:31.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:40:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:32.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:40:32 np0005466030 nova_compute[230518]: 2025-10-02 13:40:32.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:40:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:33.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:40:33 np0005466030 nova_compute[230518]: 2025-10-02 13:40:33.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:34.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:35.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:36.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:37.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:37 np0005466030 nova_compute[230518]: 2025-10-02 13:40:37.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:38.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:38 np0005466030 nova_compute[230518]: 2025-10-02 13:40:38.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:39.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:40:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:40.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:40:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:41.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:42.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:42 np0005466030 nova_compute[230518]: 2025-10-02 13:40:42.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:40:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:43.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:40:43 np0005466030 nova_compute[230518]: 2025-10-02 13:40:43.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:44.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:45.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:46.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:40:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:47.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:40:47 np0005466030 podman[330634]: 2025-10-02 13:40:47.790138933 +0000 UTC m=+0.045355243 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 09:40:47 np0005466030 nova_compute[230518]: 2025-10-02 13:40:47.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:47 np0005466030 podman[330653]: 2025-10-02 13:40:47.890942483 +0000 UTC m=+0.074670132 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 09:40:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:48.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:48 np0005466030 nova_compute[230518]: 2025-10-02 13:40:48.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:49.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:50 np0005466030 nova_compute[230518]: 2025-10-02 13:40:50.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:40:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:50.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:51.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:52 np0005466030 nova_compute[230518]: 2025-10-02 13:40:52.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:40:52 np0005466030 nova_compute[230518]: 2025-10-02 13:40:52.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:40:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:40:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:52.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:40:52 np0005466030 nova_compute[230518]: 2025-10-02 13:40:52.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:53.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:53 np0005466030 nova_compute[230518]: 2025-10-02 13:40:53.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:54.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:40:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:40:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:55.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:40:56 np0005466030 nova_compute[230518]: 2025-10-02 13:40:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:40:56 np0005466030 nova_compute[230518]: 2025-10-02 13:40:56.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:40:56 np0005466030 nova_compute[230518]: 2025-10-02 13:40:56.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:40:56 np0005466030 nova_compute[230518]: 2025-10-02 13:40:56.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:40:56 np0005466030 nova_compute[230518]: 2025-10-02 13:40:56.082 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:40:56 np0005466030 nova_compute[230518]: 2025-10-02 13:40:56.083 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:40:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:40:56 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/630243333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:40:56 np0005466030 nova_compute[230518]: 2025-10-02 13:40:56.514 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:40:56 np0005466030 nova_compute[230518]: 2025-10-02 13:40:56.656 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:40:56 np0005466030 nova_compute[230518]: 2025-10-02 13:40:56.657 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4148MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:40:56 np0005466030 nova_compute[230518]: 2025-10-02 13:40:56.657 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:40:56 np0005466030 nova_compute[230518]: 2025-10-02 13:40:56.657 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:40:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:56.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:56 np0005466030 nova_compute[230518]: 2025-10-02 13:40:56.817 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:40:56 np0005466030 nova_compute[230518]: 2025-10-02 13:40:56.817 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:40:56 np0005466030 nova_compute[230518]: 2025-10-02 13:40:56.880 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:40:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:57.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:40:57 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3725250433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:40:57 np0005466030 nova_compute[230518]: 2025-10-02 13:40:57.358 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:40:57 np0005466030 nova_compute[230518]: 2025-10-02 13:40:57.364 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:40:57 np0005466030 nova_compute[230518]: 2025-10-02 13:40:57.384 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:40:57 np0005466030 nova_compute[230518]: 2025-10-02 13:40:57.385 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:40:57 np0005466030 nova_compute[230518]: 2025-10-02 13:40:57.386 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:40:57 np0005466030 nova_compute[230518]: 2025-10-02 13:40:57.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:40:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:58.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:40:58 np0005466030 nova_compute[230518]: 2025-10-02 13:40:58.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:40:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:40:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:40:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:59.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:40:59 np0005466030 podman[330723]: 2025-10-02 13:40:59.800065329 +0000 UTC m=+0.053373944 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  2 09:40:59 np0005466030 podman[330724]: 2025-10-02 13:40:59.806895594 +0000 UTC m=+0.052677693 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=multipathd)
Oct  2 09:41:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:00.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:01.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:02.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:02 np0005466030 nova_compute[230518]: 2025-10-02 13:41:02.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:41:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:03.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:41:03 np0005466030 nova_compute[230518]: 2025-10-02 13:41:03.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:04 np0005466030 nova_compute[230518]: 2025-10-02 13:41:04.386 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:04.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:05.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:41:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2544127254' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:41:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:41:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2544127254' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:41:06 np0005466030 nova_compute[230518]: 2025-10-02 13:41:06.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:06.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:07 np0005466030 nova_compute[230518]: 2025-10-02 13:41:07.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:07 np0005466030 nova_compute[230518]: 2025-10-02 13:41:07.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:07.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:07 np0005466030 nova_compute[230518]: 2025-10-02 13:41:07.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:08 np0005466030 nova_compute[230518]: 2025-10-02 13:41:08.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:41:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:08.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:41:08 np0005466030 nova_compute[230518]: 2025-10-02 13:41:08.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:41:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:09.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:41:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:10 np0005466030 nova_compute[230518]: 2025-10-02 13:41:10.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:10 np0005466030 nova_compute[230518]: 2025-10-02 13:41:10.062 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:10 np0005466030 nova_compute[230518]: 2025-10-02 13:41:10.062 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:41:10 np0005466030 nova_compute[230518]: 2025-10-02 13:41:10.062 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:41:10 np0005466030 nova_compute[230518]: 2025-10-02 13:41:10.078 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:41:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:41:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:10.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:41:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:11.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:41:12 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 09:41:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:12.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:41:12 np0005466030 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  2 09:41:12 np0005466030 nova_compute[230518]: 2025-10-02 13:41:12.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:13.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:13 np0005466030 nova_compute[230518]: 2025-10-02 13:41:13.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:14.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:15.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:16.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:17.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:17 np0005466030 nova_compute[230518]: 2025-10-02 13:41:17.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:18.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:18 np0005466030 podman[330768]: 2025-10-02 13:41:18.831967406 +0000 UTC m=+0.089634972 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 09:41:18 np0005466030 podman[330769]: 2025-10-02 13:41:18.833688589 +0000 UTC m=+0.074192037 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  2 09:41:18 np0005466030 nova_compute[230518]: 2025-10-02 13:41:18.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:19.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:20.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:21.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:22.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:22 np0005466030 nova_compute[230518]: 2025-10-02 13:41:22.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:23.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:23 np0005466030 nova_compute[230518]: 2025-10-02 13:41:23.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:41:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:24.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:41:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:25.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:41:26.004 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:41:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:41:26.004 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:41:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:41:26.004 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:41:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:41:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:26.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:41:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:41:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:27.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:41:27 np0005466030 nova_compute[230518]: 2025-10-02 13:41:27.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:41:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:28.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:41:28 np0005466030 nova_compute[230518]: 2025-10-02 13:41:28.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:29.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:30 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:41:30 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:41:30 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:41:30 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:41:30 np0005466030 podman[330958]: 2025-10-02 13:41:30.283552539 +0000 UTC m=+0.053662214 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:41:30 np0005466030 podman[330957]: 2025-10-02 13:41:30.309239565 +0000 UTC m=+0.081356092 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:41:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:30.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:31.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:41:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:41:31 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:41:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:32.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:32 np0005466030 nova_compute[230518]: 2025-10-02 13:41:32.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:33.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:33 np0005466030 nova_compute[230518]: 2025-10-02 13:41:33.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:41:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:34.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:41:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:35.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:41:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:36.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:41:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:37.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:37 np0005466030 nova_compute[230518]: 2025-10-02 13:41:37.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:37 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:41:37 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:41:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:38.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:38 np0005466030 nova_compute[230518]: 2025-10-02 13:41:38.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:39.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:40.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:41.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:41:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:42.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:41:42 np0005466030 nova_compute[230518]: 2025-10-02 13:41:42.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:41:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:43.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:41:43 np0005466030 nova_compute[230518]: 2025-10-02 13:41:43.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:41:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:44.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:41:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:41:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:45.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:41:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:46.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:41:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:47.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:41:47 np0005466030 nova_compute[230518]: 2025-10-02 13:41:47.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:41:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:48.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:41:48 np0005466030 nova_compute[230518]: 2025-10-02 13:41:48.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:49.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:49 np0005466030 podman[331154]: 2025-10-02 13:41:49.818162086 +0000 UTC m=+0.061136818 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 09:41:49 np0005466030 podman[331153]: 2025-10-02 13:41:49.86808442 +0000 UTC m=+0.119454626 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  2 09:41:50 np0005466030 nova_compute[230518]: 2025-10-02 13:41:50.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:41:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:50.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:41:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:51.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:52.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:52 np0005466030 nova_compute[230518]: 2025-10-02 13:41:52.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:41:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:53.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:41:53 np0005466030 nova_compute[230518]: 2025-10-02 13:41:53.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:54 np0005466030 nova_compute[230518]: 2025-10-02 13:41:54.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:54 np0005466030 nova_compute[230518]: 2025-10-02 13:41:54.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:41:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:54.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #193. Immutable memtables: 0.
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.102495) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 193
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515102573, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2331, "num_deletes": 251, "total_data_size": 5938842, "memory_usage": 6022112, "flush_reason": "Manual Compaction"}
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #194: started
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515158923, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 194, "file_size": 3880001, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92522, "largest_seqno": 94848, "table_properties": {"data_size": 3870469, "index_size": 6089, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18979, "raw_average_key_size": 20, "raw_value_size": 3851624, "raw_average_value_size": 4101, "num_data_blocks": 267, "num_entries": 939, "num_filter_entries": 939, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412288, "oldest_key_time": 1759412288, "file_creation_time": 1759412515, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 56477 microseconds, and 8051 cpu microseconds.
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:41:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:55.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.158979) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #194: 3880001 bytes OK
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.159001) [db/memtable_list.cc:519] [default] Level-0 commit table #194 started
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.229436) [db/memtable_list.cc:722] [default] Level-0 commit table #194: memtable #1 done
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.229482) EVENT_LOG_v1 {"time_micros": 1759412515229472, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.229505) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 5928574, prev total WAL file size 5928574, number of live WAL files 2.
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000190.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.231414) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [194(3789KB)], [192(12MB)]
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515231451, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [194], "files_L6": [192], "score": -1, "input_data_size": 17088413, "oldest_snapshot_seqno": -1}
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #195: 11703 keys, 15047767 bytes, temperature: kUnknown
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515370391, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 195, "file_size": 15047767, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14972204, "index_size": 45178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29317, "raw_key_size": 308911, "raw_average_key_size": 26, "raw_value_size": 14767755, "raw_average_value_size": 1261, "num_data_blocks": 1719, "num_entries": 11703, "num_filter_entries": 11703, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759412515, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 195, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.370821) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 15047767 bytes
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.415167) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.9 rd, 108.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 12.6 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(8.3) write-amplify(3.9) OK, records in: 12220, records dropped: 517 output_compression: NoCompression
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.415230) EVENT_LOG_v1 {"time_micros": 1759412515415207, "job": 124, "event": "compaction_finished", "compaction_time_micros": 139094, "compaction_time_cpu_micros": 33699, "output_level": 6, "num_output_files": 1, "total_output_size": 15047767, "num_input_records": 12220, "num_output_records": 11703, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515416991, "job": 124, "event": "table_file_deletion", "file_number": 194}
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000192.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515419509, "job": 124, "event": "table_file_deletion", "file_number": 192}
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.231330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.419621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.419628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.419630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.419637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:41:55 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.419639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:41:56 np0005466030 nova_compute[230518]: 2025-10-02 13:41:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:41:56 np0005466030 nova_compute[230518]: 2025-10-02 13:41:56.086 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:41:56 np0005466030 nova_compute[230518]: 2025-10-02 13:41:56.086 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:41:56 np0005466030 nova_compute[230518]: 2025-10-02 13:41:56.086 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:41:56 np0005466030 nova_compute[230518]: 2025-10-02 13:41:56.087 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:41:56 np0005466030 nova_compute[230518]: 2025-10-02 13:41:56.087 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:41:56 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:41:56 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/96427201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:41:56 np0005466030 nova_compute[230518]: 2025-10-02 13:41:56.557 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:41:56 np0005466030 nova_compute[230518]: 2025-10-02 13:41:56.693 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:41:56 np0005466030 nova_compute[230518]: 2025-10-02 13:41:56.694 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4153MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:41:56 np0005466030 nova_compute[230518]: 2025-10-02 13:41:56.695 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:41:56 np0005466030 nova_compute[230518]: 2025-10-02 13:41:56.695 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:41:56 np0005466030 nova_compute[230518]: 2025-10-02 13:41:56.773 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:41:56 np0005466030 nova_compute[230518]: 2025-10-02 13:41:56.774 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:41:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:41:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:56.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:41:56 np0005466030 nova_compute[230518]: 2025-10-02 13:41:56.846 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:41:56 np0005466030 nova_compute[230518]: 2025-10-02 13:41:56.865 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:41:56 np0005466030 nova_compute[230518]: 2025-10-02 13:41:56.866 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:41:56 np0005466030 nova_compute[230518]: 2025-10-02 13:41:56.882 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:41:56 np0005466030 nova_compute[230518]: 2025-10-02 13:41:56.923 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:41:56 np0005466030 nova_compute[230518]: 2025-10-02 13:41:56.941 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:41:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:57.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:41:57 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2919220754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:41:57 np0005466030 nova_compute[230518]: 2025-10-02 13:41:57.356 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:41:57 np0005466030 nova_compute[230518]: 2025-10-02 13:41:57.361 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:41:57 np0005466030 nova_compute[230518]: 2025-10-02 13:41:57.379 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:41:57 np0005466030 nova_compute[230518]: 2025-10-02 13:41:57.381 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:41:57 np0005466030 nova_compute[230518]: 2025-10-02 13:41:57.381 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:41:57 np0005466030 nova_compute[230518]: 2025-10-02 13:41:57.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:58.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:41:58 np0005466030 nova_compute[230518]: 2025-10-02 13:41:58.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:41:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:41:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:41:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:59.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:00 np0005466030 podman[331242]: 2025-10-02 13:42:00.802755839 +0000 UTC m=+0.055047887 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  2 09:42:00 np0005466030 podman[331243]: 2025-10-02 13:42:00.803608126 +0000 UTC m=+0.053931432 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  2 09:42:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:42:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:00.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:42:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:01.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:42:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:02.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:42:02 np0005466030 nova_compute[230518]: 2025-10-02 13:42:02.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:03.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:03 np0005466030 nova_compute[230518]: 2025-10-02 13:42:03.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:04 np0005466030 nova_compute[230518]: 2025-10-02 13:42:04.383 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:04.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:05.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:42:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4074320952' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:42:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:42:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4074320952' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:42:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:06.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:07.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:07 np0005466030 nova_compute[230518]: 2025-10-02 13:42:07.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:08 np0005466030 nova_compute[230518]: 2025-10-02 13:42:08.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:08 np0005466030 nova_compute[230518]: 2025-10-02 13:42:08.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:08 np0005466030 nova_compute[230518]: 2025-10-02 13:42:08.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:08.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:08 np0005466030 nova_compute[230518]: 2025-10-02 13:42:08.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:09 np0005466030 nova_compute[230518]: 2025-10-02 13:42:09.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:09.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:10.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:11 np0005466030 nova_compute[230518]: 2025-10-02 13:42:11.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:11 np0005466030 nova_compute[230518]: 2025-10-02 13:42:11.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:42:11 np0005466030 nova_compute[230518]: 2025-10-02 13:42:11.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:42:11 np0005466030 nova_compute[230518]: 2025-10-02 13:42:11.065 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:42:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:11.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:12.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:12 np0005466030 nova_compute[230518]: 2025-10-02 13:42:12.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:13.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:13 np0005466030 nova_compute[230518]: 2025-10-02 13:42:13.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:14 np0005466030 nova_compute[230518]: 2025-10-02 13:42:14.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:14 np0005466030 nova_compute[230518]: 2025-10-02 13:42:14.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:42:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:14.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:15.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:16.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:17 np0005466030 nova_compute[230518]: 2025-10-02 13:42:17.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:17.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:17 np0005466030 nova_compute[230518]: 2025-10-02 13:42:17.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:18.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:18 np0005466030 nova_compute[230518]: 2025-10-02 13:42:18.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #196. Immutable memtables: 0.
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.110661) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 196
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539110728, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 458, "num_deletes": 250, "total_data_size": 613358, "memory_usage": 622592, "flush_reason": "Manual Compaction"}
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #197: started
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539187014, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 197, "file_size": 312887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 94853, "largest_seqno": 95306, "table_properties": {"data_size": 310473, "index_size": 513, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6550, "raw_average_key_size": 20, "raw_value_size": 305594, "raw_average_value_size": 952, "num_data_blocks": 23, "num_entries": 321, "num_filter_entries": 321, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412516, "oldest_key_time": 1759412516, "file_creation_time": 1759412539, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 76388 microseconds, and 1801 cpu microseconds.
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.187059) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #197: 312887 bytes OK
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.187076) [db/memtable_list.cc:519] [default] Level-0 commit table #197 started
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.200628) [db/memtable_list.cc:722] [default] Level-0 commit table #197: memtable #1 done
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.200676) EVENT_LOG_v1 {"time_micros": 1759412539200667, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.200698) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 610526, prev total WAL file size 610526, number of live WAL files 2.
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000193.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.201340) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323539' seq:72057594037927935, type:22 .. '6D6772737461740033353130' seq:0, type:0; will stop at (end)
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [197(305KB)], [195(14MB)]
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539201468, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [197], "files_L6": [195], "score": -1, "input_data_size": 15360654, "oldest_snapshot_seqno": -1}
Oct  2 09:42:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:19.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #198: 11521 keys, 11628843 bytes, temperature: kUnknown
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539310782, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 198, "file_size": 11628843, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11559027, "index_size": 39879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28869, "raw_key_size": 305337, "raw_average_key_size": 26, "raw_value_size": 11362244, "raw_average_value_size": 986, "num_data_blocks": 1497, "num_entries": 11521, "num_filter_entries": 11521, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759412539, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 198, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.311052) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 11628843 bytes
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.354106) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.4 rd, 106.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 14.4 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(86.3) write-amplify(37.2) OK, records in: 12024, records dropped: 503 output_compression: NoCompression
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.354140) EVENT_LOG_v1 {"time_micros": 1759412539354128, "job": 126, "event": "compaction_finished", "compaction_time_micros": 109372, "compaction_time_cpu_micros": 30154, "output_level": 6, "num_output_files": 1, "total_output_size": 11628843, "num_input_records": 12024, "num_output_records": 11521, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539354380, "job": 126, "event": "table_file_deletion", "file_number": 197}
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000195.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539357171, "job": 126, "event": "table_file_deletion", "file_number": 195}
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.201166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.357197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.357201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.357202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.357204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:42:19 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.357205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:42:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:20 np0005466030 podman[331281]: 2025-10-02 13:42:20.823257905 +0000 UTC m=+0.081404913 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:42:20 np0005466030 podman[331282]: 2025-10-02 13:42:20.823528673 +0000 UTC m=+0.078083659 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  2 09:42:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:20.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:21.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:22.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:22 np0005466030 nova_compute[230518]: 2025-10-02 13:42:22.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:23.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:23 np0005466030 nova_compute[230518]: 2025-10-02 13:42:23.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:24.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:25.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:42:26.005 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:42:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:42:26.006 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:42:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:42:26.006 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:42:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:42:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:26.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:42:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:42:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:27.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:42:27 np0005466030 nova_compute[230518]: 2025-10-02 13:42:27.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:42:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:28.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:42:28 np0005466030 nova_compute[230518]: 2025-10-02 13:42:28.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:29.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:30.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:31.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:31 np0005466030 podman[331327]: 2025-10-02 13:42:31.798094223 +0000 UTC m=+0.048055547 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 09:42:31 np0005466030 podman[331326]: 2025-10-02 13:42:31.798580828 +0000 UTC m=+0.053760436 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:42:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:32.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:32 np0005466030 nova_compute[230518]: 2025-10-02 13:42:32.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:33.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:33 np0005466030 nova_compute[230518]: 2025-10-02 13:42:33.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:34.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:42:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:35.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:42:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:36.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:37.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:37 np0005466030 nova_compute[230518]: 2025-10-02 13:42:37.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:38.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:38 np0005466030 nova_compute[230518]: 2025-10-02 13:42:38.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:39.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  2 09:42:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  2 09:42:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:42:39 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:42:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:40.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  2 09:42:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:42:41 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:42:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:41.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:42 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:42:42 np0005466030 nova_compute[230518]: 2025-10-02 13:42:42.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:42:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:42.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:42:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:42:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:43.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:42:43 np0005466030 nova_compute[230518]: 2025-10-02 13:42:43.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:44.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:45.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:46.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:47.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:47 np0005466030 nova_compute[230518]: 2025-10-02 13:42:47.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:48 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:48 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:48 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:48.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:48 np0005466030 nova_compute[230518]: 2025-10-02 13:42:48.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:49.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:50 np0005466030 nova_compute[230518]: 2025-10-02 13:42:50.072 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:50 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:50 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:50 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:50.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:51.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:51 np0005466030 podman[331498]: 2025-10-02 13:42:51.819302644 +0000 UTC m=+0.062985916 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct  2 09:42:51 np0005466030 podman[331497]: 2025-10-02 13:42:51.891974672 +0000 UTC m=+0.137895584 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Oct  2 09:42:52 np0005466030 nova_compute[230518]: 2025-10-02 13:42:52.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:52 np0005466030 nova_compute[230518]: 2025-10-02 13:42:52.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  2 09:42:52 np0005466030 nova_compute[230518]: 2025-10-02 13:42:52.177 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  2 09:42:52 np0005466030 nova_compute[230518]: 2025-10-02 13:42:52.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:52 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:52 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:52 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:52.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:53.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:53 np0005466030 nova_compute[230518]: 2025-10-02 13:42:53.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:54 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:54 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:42:54 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:54.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:42:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:42:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:42:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:42:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:55.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:56 np0005466030 nova_compute[230518]: 2025-10-02 13:42:56.176 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:56 np0005466030 nova_compute[230518]: 2025-10-02 13:42:56.177 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:42:56 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:56 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:42:56 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:56.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:42:57 np0005466030 nova_compute[230518]: 2025-10-02 13:42:57.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:42:57 np0005466030 nova_compute[230518]: 2025-10-02 13:42:57.090 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:42:57 np0005466030 nova_compute[230518]: 2025-10-02 13:42:57.091 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:42:57 np0005466030 nova_compute[230518]: 2025-10-02 13:42:57.092 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:42:57 np0005466030 nova_compute[230518]: 2025-10-02 13:42:57.092 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:42:57 np0005466030 nova_compute[230518]: 2025-10-02 13:42:57.092 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:42:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:42:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:57.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:42:57 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:42:57 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3426009518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:42:57 np0005466030 nova_compute[230518]: 2025-10-02 13:42:57.523 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:42:57 np0005466030 nova_compute[230518]: 2025-10-02 13:42:57.664 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:42:57 np0005466030 nova_compute[230518]: 2025-10-02 13:42:57.665 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4141MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:42:57 np0005466030 nova_compute[230518]: 2025-10-02 13:42:57.665 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:42:57 np0005466030 nova_compute[230518]: 2025-10-02 13:42:57.665 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:42:57 np0005466030 nova_compute[230518]: 2025-10-02 13:42:57.815 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:42:57 np0005466030 nova_compute[230518]: 2025-10-02 13:42:57.816 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:42:57 np0005466030 nova_compute[230518]: 2025-10-02 13:42:57.844 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:42:57 np0005466030 nova_compute[230518]: 2025-10-02 13:42:57.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:58 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:42:58 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2860197508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:42:58 np0005466030 nova_compute[230518]: 2025-10-02 13:42:58.328 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:42:58 np0005466030 nova_compute[230518]: 2025-10-02 13:42:58.334 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:42:58 np0005466030 nova_compute[230518]: 2025-10-02 13:42:58.433 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:42:58 np0005466030 nova_compute[230518]: 2025-10-02 13:42:58.434 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:42:58 np0005466030 nova_compute[230518]: 2025-10-02 13:42:58.435 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:42:58 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:58 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:58 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:58.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:42:58 np0005466030 nova_compute[230518]: 2025-10-02 13:42:58.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:42:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:42:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:42:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:59.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:00 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:00 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:00 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:00.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:01.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:02 np0005466030 podman[331635]: 2025-10-02 13:43:02.7938444 +0000 UTC m=+0.046177589 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct  2 09:43:02 np0005466030 podman[331636]: 2025-10-02 13:43:02.798480635 +0000 UTC m=+0.047423708 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 09:43:02 np0005466030 nova_compute[230518]: 2025-10-02 13:43:02.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:02 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:02 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:02 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:02.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:03.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:03 np0005466030 nova_compute[230518]: 2025-10-02 13:43:03.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:04 np0005466030 nova_compute[230518]: 2025-10-02 13:43:04.435 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:43:04 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:04 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:04 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:04.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:05.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:43:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1628467526' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:43:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:43:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1628467526' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:43:06 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:06 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:06 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:06.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:07.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:07 np0005466030 nova_compute[230518]: 2025-10-02 13:43:07.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:08 np0005466030 nova_compute[230518]: 2025-10-02 13:43:08.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:43:08 np0005466030 nova_compute[230518]: 2025-10-02 13:43:08.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:43:08 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:08 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:08 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:08.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:08 np0005466030 nova_compute[230518]: 2025-10-02 13:43:08.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:09.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:10 np0005466030 nova_compute[230518]: 2025-10-02 13:43:10.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:43:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:10 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:10 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:10 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:10.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:11 np0005466030 nova_compute[230518]: 2025-10-02 13:43:11.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:43:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:11.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:12 np0005466030 nova_compute[230518]: 2025-10-02 13:43:12.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:43:12 np0005466030 nova_compute[230518]: 2025-10-02 13:43:12.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:43:12 np0005466030 nova_compute[230518]: 2025-10-02 13:43:12.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:43:12 np0005466030 nova_compute[230518]: 2025-10-02 13:43:12.067 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:43:12 np0005466030 nova_compute[230518]: 2025-10-02 13:43:12.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:12 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:12 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:12 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:12.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:13 np0005466030 nova_compute[230518]: 2025-10-02 13:43:13.062 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:43:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:13.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:13 np0005466030 nova_compute[230518]: 2025-10-02 13:43:13.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:14 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:14 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:14 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:14.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:15.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:16 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:16 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:16 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:16.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:17.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:17 np0005466030 nova_compute[230518]: 2025-10-02 13:43:17.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:18 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:18 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:18 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:18.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:18 np0005466030 nova_compute[230518]: 2025-10-02 13:43:18.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:19.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:20 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:20 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:20 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:20.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:21.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:22 np0005466030 podman[331675]: 2025-10-02 13:43:22.798822866 +0000 UTC m=+0.049396119 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct  2 09:43:22 np0005466030 podman[331674]: 2025-10-02 13:43:22.847971547 +0000 UTC m=+0.091770278 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  2 09:43:22 np0005466030 nova_compute[230518]: 2025-10-02 13:43:22.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:22 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:22 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:22 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:22.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:23.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:23 np0005466030 nova_compute[230518]: 2025-10-02 13:43:23.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:24 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:24 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:24 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:24.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:25.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:43:26.007 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:43:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:43:26.007 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:43:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:43:26.007 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:43:26 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:26 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:26 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:26.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:27.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:27 np0005466030 nova_compute[230518]: 2025-10-02 13:43:27.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:28 np0005466030 nova_compute[230518]: 2025-10-02 13:43:28.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:28 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:28 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:28 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:28.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:29.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:30 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:30 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:30 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:30.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:31.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:32 np0005466030 nova_compute[230518]: 2025-10-02 13:43:32.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:32 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:32 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:32 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:32.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:33.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:33 np0005466030 podman[331723]: 2025-10-02 13:43:33.809609061 +0000 UTC m=+0.056181742 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:43:33 np0005466030 podman[331722]: 2025-10-02 13:43:33.80896179 +0000 UTC m=+0.059133375 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:43:33 np0005466030 nova_compute[230518]: 2025-10-02 13:43:33.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:34 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:34 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:34 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:34.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:35.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:36 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:36 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:36 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:36.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:37.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:37 np0005466030 nova_compute[230518]: 2025-10-02 13:43:37.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:38 np0005466030 nova_compute[230518]: 2025-10-02 13:43:38.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:38 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:38 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:38 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:38.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:39.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:40 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:40 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:40 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:40.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:41.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:42 np0005466030 nova_compute[230518]: 2025-10-02 13:43:42.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:42 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:42 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:42 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:42.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:43.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:43 np0005466030 nova_compute[230518]: 2025-10-02 13:43:43.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:44 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:44 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:44 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:44.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:45.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:46 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:46 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:46 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:46.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:47.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:47 np0005466030 nova_compute[230518]: 2025-10-02 13:43:47.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:48 np0005466030 nova_compute[230518]: 2025-10-02 13:43:48.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:48.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:49.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:51.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:51 np0005466030 nova_compute[230518]: 2025-10-02 13:43:51.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:43:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:51.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:52 np0005466030 nova_compute[230518]: 2025-10-02 13:43:52.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:53.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:53.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:53 np0005466030 podman[331763]: 2025-10-02 13:43:53.793943085 +0000 UTC m=+0.043930068 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  2 09:43:53 np0005466030 podman[331762]: 2025-10-02 13:43:53.834748355 +0000 UTC m=+0.087501924 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  2 09:43:53 np0005466030 nova_compute[230518]: 2025-10-02 13:43:53.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:55.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:43:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:55.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:55 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:43:56 np0005466030 nova_compute[230518]: 2025-10-02 13:43:56.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:43:56 np0005466030 nova_compute[230518]: 2025-10-02 13:43:56.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:43:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:57.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:57 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:43:57 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:43:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:57.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:57 np0005466030 nova_compute[230518]: 2025-10-02 13:43:57.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:58 np0005466030 nova_compute[230518]: 2025-10-02 13:43:58.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:43:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:43:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:59.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:43:59 np0005466030 nova_compute[230518]: 2025-10-02 13:43:59.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:43:59 np0005466030 nova_compute[230518]: 2025-10-02 13:43:59.089 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:43:59 np0005466030 nova_compute[230518]: 2025-10-02 13:43:59.090 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:43:59 np0005466030 nova_compute[230518]: 2025-10-02 13:43:59.090 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:43:59 np0005466030 nova_compute[230518]: 2025-10-02 13:43:59.090 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:43:59 np0005466030 nova_compute[230518]: 2025-10-02 13:43:59.090 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:43:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:43:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:43:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:59.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:43:59 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:43:59 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/955581800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:43:59 np0005466030 nova_compute[230518]: 2025-10-02 13:43:59.594 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:43:59 np0005466030 nova_compute[230518]: 2025-10-02 13:43:59.768 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:43:59 np0005466030 nova_compute[230518]: 2025-10-02 13:43:59.770 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4126MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:43:59 np0005466030 nova_compute[230518]: 2025-10-02 13:43:59.770 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:43:59 np0005466030 nova_compute[230518]: 2025-10-02 13:43:59.771 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:43:59 np0005466030 nova_compute[230518]: 2025-10-02 13:43:59.932 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:43:59 np0005466030 nova_compute[230518]: 2025-10-02 13:43:59.932 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:43:59 np0005466030 nova_compute[230518]: 2025-10-02 13:43:59.991 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:44:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:44:00 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1389360451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:44:00 np0005466030 nova_compute[230518]: 2025-10-02 13:44:00.450 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:44:00 np0005466030 nova_compute[230518]: 2025-10-02 13:44:00.455 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:44:00 np0005466030 nova_compute[230518]: 2025-10-02 13:44:00.488 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:44:00 np0005466030 nova_compute[230518]: 2025-10-02 13:44:00.489 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:44:00 np0005466030 nova_compute[230518]: 2025-10-02 13:44:00.490 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:44:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:01.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:44:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:01.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:44:02 np0005466030 nova_compute[230518]: 2025-10-02 13:44:02.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:03.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:03.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:03 np0005466030 nova_compute[230518]: 2025-10-02 13:44:03.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:04 np0005466030 nova_compute[230518]: 2025-10-02 13:44:04.489 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:44:04 np0005466030 podman[331979]: 2025-10-02 13:44:04.813161911 +0000 UTC m=+0.058179515 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct  2 09:44:04 np0005466030 podman[331978]: 2025-10-02 13:44:04.824857098 +0000 UTC m=+0.075286351 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:44:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:44:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:05.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:44:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:05.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:07.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:44:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:07.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:44:07 np0005466030 nova_compute[230518]: 2025-10-02 13:44:07.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:08 np0005466030 nova_compute[230518]: 2025-10-02 13:44:08.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:44:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:44:08 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:44:08 np0005466030 nova_compute[230518]: 2025-10-02 13:44:08.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:09.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:09 np0005466030 nova_compute[230518]: 2025-10-02 13:44:09.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:44:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:09.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:11.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:11 np0005466030 nova_compute[230518]: 2025-10-02 13:44:11.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:44:11 np0005466030 nova_compute[230518]: 2025-10-02 13:44:11.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:44:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:11.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:12 np0005466030 nova_compute[230518]: 2025-10-02 13:44:12.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:44:12 np0005466030 nova_compute[230518]: 2025-10-02 13:44:12.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:44:12 np0005466030 nova_compute[230518]: 2025-10-02 13:44:12.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:44:12 np0005466030 nova_compute[230518]: 2025-10-02 13:44:12.177 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:44:12 np0005466030 nova_compute[230518]: 2025-10-02 13:44:12.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:13.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:13.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:13 np0005466030 nova_compute[230518]: 2025-10-02 13:44:13.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:44:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:15.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:44:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:44:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:15.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:44:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:17.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:17.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:17 np0005466030 nova_compute[230518]: 2025-10-02 13:44:17.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:18 np0005466030 nova_compute[230518]: 2025-10-02 13:44:18.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:19.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:19.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:21.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:21.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:22 np0005466030 nova_compute[230518]: 2025-10-02 13:44:22.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:44:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:23.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:44:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:44:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:23.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:44:23 np0005466030 nova_compute[230518]: 2025-10-02 13:44:23.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:24 np0005466030 podman[332067]: 2025-10-02 13:44:24.821509789 +0000 UTC m=+0.073454205 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  2 09:44:24 np0005466030 podman[332068]: 2025-10-02 13:44:24.830345075 +0000 UTC m=+0.080006349 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  2 09:44:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:25.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:25.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:44:26.008 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:44:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:44:26.008 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:44:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:44:26.008 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:44:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:27.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:27.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:27 np0005466030 nova_compute[230518]: 2025-10-02 13:44:27.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:28 np0005466030 nova_compute[230518]: 2025-10-02 13:44:28.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:29.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:29.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:31.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:31.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:32 np0005466030 nova_compute[230518]: 2025-10-02 13:44:32.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:33.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:33.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:33 np0005466030 nova_compute[230518]: 2025-10-02 13:44:33.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:35.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:35.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:35 np0005466030 podman[332114]: 2025-10-02 13:44:35.81715817 +0000 UTC m=+0.057736382 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid)
Oct  2 09:44:35 np0005466030 podman[332115]: 2025-10-02 13:44:35.826814802 +0000 UTC m=+0.063977247 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  2 09:44:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:37.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:37.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:37 np0005466030 nova_compute[230518]: 2025-10-02 13:44:37.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:38 np0005466030 nova_compute[230518]: 2025-10-02 13:44:38.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:39.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:39.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:41.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:44:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:41.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:44:42 np0005466030 nova_compute[230518]: 2025-10-02 13:44:42.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:43.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:43.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:43 np0005466030 nova_compute[230518]: 2025-10-02 13:44:43.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:45.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:44:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:45.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:44:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:47.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:47.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:47 np0005466030 nova_compute[230518]: 2025-10-02 13:44:47.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:48 np0005466030 nova_compute[230518]: 2025-10-02 13:44:48.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:49.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #199. Immutable memtables: 0.
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.161177) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 199
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689161199, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 1634, "num_deletes": 255, "total_data_size": 3910360, "memory_usage": 3963376, "flush_reason": "Manual Compaction"}
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #200: started
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689207591, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 200, "file_size": 2572239, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 95311, "largest_seqno": 96940, "table_properties": {"data_size": 2565284, "index_size": 4025, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14215, "raw_average_key_size": 19, "raw_value_size": 2551404, "raw_average_value_size": 3543, "num_data_blocks": 177, "num_entries": 720, "num_filter_entries": 720, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412540, "oldest_key_time": 1759412540, "file_creation_time": 1759412689, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 46557 microseconds, and 6222 cpu microseconds.
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.207724) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #200: 2572239 bytes OK
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.207794) [db/memtable_list.cc:519] [default] Level-0 commit table #200 started
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.212085) [db/memtable_list.cc:722] [default] Level-0 commit table #200: memtable #1 done
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.212121) EVENT_LOG_v1 {"time_micros": 1759412689212111, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.212144) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 3902831, prev total WAL file size 3902831, number of live WAL files 2.
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000196.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.214173) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373732' seq:72057594037927935, type:22 .. '6C6F676D0034303233' seq:0, type:0; will stop at (end)
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [200(2511KB)], [198(11MB)]
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689214229, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [200], "files_L6": [198], "score": -1, "input_data_size": 14201082, "oldest_snapshot_seqno": -1}
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #201: 11712 keys, 14066784 bytes, temperature: kUnknown
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689343729, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 201, "file_size": 14066784, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13993043, "index_size": 43356, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29317, "raw_key_size": 310202, "raw_average_key_size": 26, "raw_value_size": 13790255, "raw_average_value_size": 1177, "num_data_blocks": 1645, "num_entries": 11712, "num_filter_entries": 11712, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759412689, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 201, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.344173) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 14066784 bytes
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.346381) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.5 rd, 108.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 11.1 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(11.0) write-amplify(5.5) OK, records in: 12241, records dropped: 529 output_compression: NoCompression
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.346405) EVENT_LOG_v1 {"time_micros": 1759412689346393, "job": 128, "event": "compaction_finished", "compaction_time_micros": 129689, "compaction_time_cpu_micros": 34390, "output_level": 6, "num_output_files": 1, "total_output_size": 14066784, "num_input_records": 12241, "num_output_records": 11712, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689347374, "job": 128, "event": "table_file_deletion", "file_number": 200}
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000198.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689350529, "job": 128, "event": "table_file_deletion", "file_number": 198}
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.214112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.350642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.350651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.350656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.350660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:44:49 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.350665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:44:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:49.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:44:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:51.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:44:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:51.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:52 np0005466030 nova_compute[230518]: 2025-10-02 13:44:52.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:44:52 np0005466030 nova_compute[230518]: 2025-10-02 13:44:52.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:44:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:53.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:44:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:53.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:53 np0005466030 nova_compute[230518]: 2025-10-02 13:44:53.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:55.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:44:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:55.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:55 np0005466030 podman[332158]: 2025-10-02 13:44:55.828934599 +0000 UTC m=+0.080018139 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  2 09:44:55 np0005466030 podman[332157]: 2025-10-02 13:44:55.854090259 +0000 UTC m=+0.107159921 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true)
Oct  2 09:44:56 np0005466030 nova_compute[230518]: 2025-10-02 13:44:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:44:56 np0005466030 nova_compute[230518]: 2025-10-02 13:44:56.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:44:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:57.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:57.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:57 np0005466030 nova_compute[230518]: 2025-10-02 13:44:57.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:58 np0005466030 nova_compute[230518]: 2025-10-02 13:44:58.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:44:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:44:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:59.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:44:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:44:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:44:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:59.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:01 np0005466030 nova_compute[230518]: 2025-10-02 13:45:01.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:01 np0005466030 nova_compute[230518]: 2025-10-02 13:45:01.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:45:01 np0005466030 nova_compute[230518]: 2025-10-02 13:45:01.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:45:01 np0005466030 nova_compute[230518]: 2025-10-02 13:45:01.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:45:01 np0005466030 nova_compute[230518]: 2025-10-02 13:45:01.083 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:45:01 np0005466030 nova_compute[230518]: 2025-10-02 13:45:01.083 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:45:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:01.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:45:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:01.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:45:01 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2734375241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:45:01 np0005466030 nova_compute[230518]: 2025-10-02 13:45:01.594 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:45:01 np0005466030 nova_compute[230518]: 2025-10-02 13:45:01.735 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:45:01 np0005466030 nova_compute[230518]: 2025-10-02 13:45:01.736 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4143MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:45:01 np0005466030 nova_compute[230518]: 2025-10-02 13:45:01.737 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:45:01 np0005466030 nova_compute[230518]: 2025-10-02 13:45:01.737 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:45:01 np0005466030 nova_compute[230518]: 2025-10-02 13:45:01.841 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:45:01 np0005466030 nova_compute[230518]: 2025-10-02 13:45:01.842 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:45:01 np0005466030 nova_compute[230518]: 2025-10-02 13:45:01.871 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:45:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:45:02 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2899976604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:45:02 np0005466030 nova_compute[230518]: 2025-10-02 13:45:02.309 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:45:02 np0005466030 nova_compute[230518]: 2025-10-02 13:45:02.315 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:45:02 np0005466030 nova_compute[230518]: 2025-10-02 13:45:02.338 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:45:02 np0005466030 nova_compute[230518]: 2025-10-02 13:45:02.339 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:45:02 np0005466030 nova_compute[230518]: 2025-10-02 13:45:02.340 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:45:02 np0005466030 nova_compute[230518]: 2025-10-02 13:45:02.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:03.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:03.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:03 np0005466030 nova_compute[230518]: 2025-10-02 13:45:03.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:04 np0005466030 nova_compute[230518]: 2025-10-02 13:45:04.340 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:05.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:45:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/500724514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:45:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:45:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/500724514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:45:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:05.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:06 np0005466030 podman[332247]: 2025-10-02 13:45:06.816832574 +0000 UTC m=+0.067124945 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  2 09:45:06 np0005466030 podman[332248]: 2025-10-02 13:45:06.82113396 +0000 UTC m=+0.069256442 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  2 09:45:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:07.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:07.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:07 np0005466030 nova_compute[230518]: 2025-10-02 13:45:07.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:08 np0005466030 podman[332458]: 2025-10-02 13:45:08.653094292 +0000 UTC m=+0.065601979 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  2 09:45:08 np0005466030 podman[332458]: 2025-10-02 13:45:08.776902822 +0000 UTC m=+0.189410489 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Oct  2 09:45:08 np0005466030 nova_compute[230518]: 2025-10-02 13:45:08.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:45:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:09.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:45:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:09.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:10 np0005466030 nova_compute[230518]: 2025-10-02 13:45:10.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:45:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:45:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:45:10 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:45:11 np0005466030 nova_compute[230518]: 2025-10-02 13:45:11.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:11 np0005466030 nova_compute[230518]: 2025-10-02 13:45:11.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:11.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:11 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:45:11 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:45:11 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:45:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:11.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:12 np0005466030 nova_compute[230518]: 2025-10-02 13:45:12.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:12 np0005466030 nova_compute[230518]: 2025-10-02 13:45:12.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:45:12 np0005466030 nova_compute[230518]: 2025-10-02 13:45:12.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:45:12 np0005466030 nova_compute[230518]: 2025-10-02 13:45:12.130 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:45:12 np0005466030 nova_compute[230518]: 2025-10-02 13:45:12.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:13 np0005466030 nova_compute[230518]: 2025-10-02 13:45:13.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:45:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:13.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:45:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:13.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:13 np0005466030 nova_compute[230518]: 2025-10-02 13:45:13.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:15.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:45:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:15.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:17.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:17.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:17 np0005466030 nova_compute[230518]: 2025-10-02 13:45:17.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:18 np0005466030 nova_compute[230518]: 2025-10-02 13:45:18.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:45:18 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:45:18 np0005466030 nova_compute[230518]: 2025-10-02 13:45:18.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:19.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:45:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:19.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:21.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:21.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:22 np0005466030 nova_compute[230518]: 2025-10-02 13:45:22.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:23.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:23.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:23 np0005466030 nova_compute[230518]: 2025-10-02 13:45:23.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:25.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:45:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:25.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:45:26.009 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:45:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:45:26.009 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:45:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:45:26.009 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:45:26 np0005466030 podman[332767]: 2025-10-02 13:45:26.807440775 +0000 UTC m=+0.058256817 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  2 09:45:26 np0005466030 podman[332766]: 2025-10-02 13:45:26.836586369 +0000 UTC m=+0.088583038 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  2 09:45:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:45:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:27.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:45:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:27.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:27 np0005466030 nova_compute[230518]: 2025-10-02 13:45:27.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:28 np0005466030 nova_compute[230518]: 2025-10-02 13:45:28.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:29.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:29.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:31.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:45:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:31.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:32 np0005466030 nova_compute[230518]: 2025-10-02 13:45:32.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:33.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:33.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:33 np0005466030 nova_compute[230518]: 2025-10-02 13:45:33.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:35.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:35.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #202. Immutable memtables: 0.
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.565569) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 202
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735565599, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 751, "num_deletes": 251, "total_data_size": 1400583, "memory_usage": 1414496, "flush_reason": "Manual Compaction"}
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #203: started
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735674779, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 203, "file_size": 913877, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 96945, "largest_seqno": 97691, "table_properties": {"data_size": 910235, "index_size": 1485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8415, "raw_average_key_size": 19, "raw_value_size": 902881, "raw_average_value_size": 2104, "num_data_blocks": 65, "num_entries": 429, "num_filter_entries": 429, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412690, "oldest_key_time": 1759412690, "file_creation_time": 1759412735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 109262 microseconds, and 3241 cpu microseconds.
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.674826) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #203: 913877 bytes OK
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.674847) [db/memtable_list.cc:519] [default] Level-0 commit table #203 started
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.676745) [db/memtable_list.cc:722] [default] Level-0 commit table #203: memtable #1 done
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.676760) EVENT_LOG_v1 {"time_micros": 1759412735676755, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.676778) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 1396575, prev total WAL file size 1396575, number of live WAL files 2.
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000199.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.677435) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [203(892KB)], [201(13MB)]
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735677469, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [203], "files_L6": [201], "score": -1, "input_data_size": 14980661, "oldest_snapshot_seqno": -1}
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #204: 11624 keys, 12971342 bytes, temperature: kUnknown
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735788536, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 204, "file_size": 12971342, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12899297, "index_size": 41852, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29125, "raw_key_size": 309047, "raw_average_key_size": 26, "raw_value_size": 12699070, "raw_average_value_size": 1092, "num_data_blocks": 1573, "num_entries": 11624, "num_filter_entries": 11624, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759412735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 204, "seqno_to_time_mapping": "N/A"}}
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.788832) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 12971342 bytes
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.790722) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.8 rd, 116.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.4 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(30.6) write-amplify(14.2) OK, records in: 12141, records dropped: 517 output_compression: NoCompression
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.790742) EVENT_LOG_v1 {"time_micros": 1759412735790732, "job": 130, "event": "compaction_finished", "compaction_time_micros": 111147, "compaction_time_cpu_micros": 29656, "output_level": 6, "num_output_files": 1, "total_output_size": 12971342, "num_input_records": 12141, "num_output_records": 11624, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735791114, "job": 130, "event": "table_file_deletion", "file_number": 203}
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000201.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735794507, "job": 130, "event": "table_file_deletion", "file_number": 201}
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.677361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.794544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.794548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.794550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.794552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:45:35 np0005466030 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.794554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  2 09:45:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:45:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:37.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:37.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:37 np0005466030 podman[332813]: 2025-10-02 13:45:37.802177298 +0000 UTC m=+0.054934583 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:45:37 np0005466030 podman[332814]: 2025-10-02 13:45:37.803060176 +0000 UTC m=+0.054307084 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  2 09:45:37 np0005466030 nova_compute[230518]: 2025-10-02 13:45:37.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:39 np0005466030 nova_compute[230518]: 2025-10-02 13:45:38.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:45:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:39.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:39.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:45:39 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 71K writes, 276K keys, 71K commit groups, 1.0 writes per commit group, ingest: 0.27 GB, 0.04 MB/s#012Cumulative WAL: 71K writes, 26K syncs, 2.68 writes per sync, written: 0.27 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 455 writes, 777 keys, 455 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s#012Interval WAL: 455 writes, 199 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:45:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct  2 09:45:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:41.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct  2 09:45:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:41.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:42 np0005466030 nova_compute[230518]: 2025-10-02 13:45:42.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:43.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:43.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:44 np0005466030 nova_compute[230518]: 2025-10-02 13:45:44.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:45.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:45:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:45.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:47.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:47.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:47 np0005466030 nova_compute[230518]: 2025-10-02 13:45:47.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:49 np0005466030 nova_compute[230518]: 2025-10-02 13:45:49.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:49.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:49.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:45:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:51.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:51.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:52 np0005466030 nova_compute[230518]: 2025-10-02 13:45:52.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:45:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:53.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:53.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:54 np0005466030 nova_compute[230518]: 2025-10-02 13:45:54.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:54 np0005466030 nova_compute[230518]: 2025-10-02 13:45:54.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:45:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:45:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:55.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:45:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:55.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:57 np0005466030 nova_compute[230518]: 2025-10-02 13:45:57.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:45:57 np0005466030 nova_compute[230518]: 2025-10-02 13:45:57.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:45:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:45:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:57.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:45:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:57.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:57 np0005466030 podman[332858]: 2025-10-02 13:45:57.812689841 +0000 UTC m=+0.060044253 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  2 09:45:57 np0005466030 podman[332857]: 2025-10-02 13:45:57.832366848 +0000 UTC m=+0.090203938 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct  2 09:45:57 np0005466030 nova_compute[230518]: 2025-10-02 13:45:57.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:59 np0005466030 nova_compute[230518]: 2025-10-02 13:45:59.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:45:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:59.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:45:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:45:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:45:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:59.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:01 np0005466030 nova_compute[230518]: 2025-10-02 13:46:01.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:46:01 np0005466030 nova_compute[230518]: 2025-10-02 13:46:01.161 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:46:01 np0005466030 nova_compute[230518]: 2025-10-02 13:46:01.162 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:46:01 np0005466030 nova_compute[230518]: 2025-10-02 13:46:01.162 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:46:01 np0005466030 nova_compute[230518]: 2025-10-02 13:46:01.162 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:46:01 np0005466030 nova_compute[230518]: 2025-10-02 13:46:01.162 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:46:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:01.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:46:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:01.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:46:01 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:46:01 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3288497061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:46:01 np0005466030 nova_compute[230518]: 2025-10-02 13:46:01.596 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:46:01 np0005466030 nova_compute[230518]: 2025-10-02 13:46:01.734 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:46:01 np0005466030 nova_compute[230518]: 2025-10-02 13:46:01.736 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4139MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:46:01 np0005466030 nova_compute[230518]: 2025-10-02 13:46:01.736 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:46:01 np0005466030 nova_compute[230518]: 2025-10-02 13:46:01.736 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:46:02 np0005466030 nova_compute[230518]: 2025-10-02 13:46:02.145 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:46:02 np0005466030 nova_compute[230518]: 2025-10-02 13:46:02.145 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:46:02 np0005466030 nova_compute[230518]: 2025-10-02 13:46:02.278 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:46:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:46:02 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3746623552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:46:02 np0005466030 nova_compute[230518]: 2025-10-02 13:46:02.706 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:46:02 np0005466030 nova_compute[230518]: 2025-10-02 13:46:02.711 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:46:02 np0005466030 nova_compute[230518]: 2025-10-02 13:46:02.768 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:46:02 np0005466030 nova_compute[230518]: 2025-10-02 13:46:02.769 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:46:02 np0005466030 nova_compute[230518]: 2025-10-02 13:46:02.769 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:46:02 np0005466030 nova_compute[230518]: 2025-10-02 13:46:02.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:03.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:46:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:03.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:46:04 np0005466030 nova_compute[230518]: 2025-10-02 13:46:04.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:46:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:05.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:46:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  2 09:46:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/920276162' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  2 09:46:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  2 09:46:05 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/920276162' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  2 09:46:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:46:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:05.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:46:05 np0005466030 nova_compute[230518]: 2025-10-02 13:46:05.770 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:46:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:07.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:07.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:07 np0005466030 nova_compute[230518]: 2025-10-02 13:46:07.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:08 np0005466030 podman[332951]: 2025-10-02 13:46:08.808203196 +0000 UTC m=+0.054834430 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  2 09:46:08 np0005466030 podman[332950]: 2025-10-02 13:46:08.821933217 +0000 UTC m=+0.068323013 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  2 09:46:09 np0005466030 nova_compute[230518]: 2025-10-02 13:46:09.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:09.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:09.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:46:10 np0005466030 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7200.0 total, 600.0 interval
Cumulative writes: 19K writes, 98K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s
Cumulative WAL: 19K writes, 19K syncs, 1.00 writes per sync, written: 0.20 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1410 writes, 7107 keys, 1410 commit groups, 1.0 writes per commit group, ingest: 15.01 MB, 0.03 MB/s
Interval WAL: 1410 writes, 1410 syncs, 1.00 writes per sync, written: 0.01 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     55.3      2.22              0.36        65    0.034       0      0       0.0       0.0
  L6      1/0   12.37 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.6    117.5    101.2      6.79              2.08        64    0.106    529K    34K       0.0       0.0
 Sum      1/0   12.37 MB   0.0      0.8     0.1      0.7       0.8      0.1       0.0   6.6     88.5     89.9      9.01              2.45       129    0.070    529K    34K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.0     87.1     88.7      1.00              0.26        12    0.083     72K   3107       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0    117.5    101.2      6.79              2.08        64    0.106    529K    34K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     55.3      2.22              0.36        64    0.035       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 7200.0 total, 600.0 interval
Flush(GB): cumulative 0.120, interval 0.011
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.79 GB write, 0.11 MB/s write, 0.78 GB read, 0.11 MB/s read, 9.0 seconds
Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.08 GB read, 0.14 MB/s read, 1.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 86.89 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000582 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(5373,83.15 MB,27.3505%) FilterBlock(129,1.43 MB,0.469905%) IndexBlock(129,2.32 MB,0.763231%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct  2 09:46:11 np0005466030 nova_compute[230518]: 2025-10-02 13:46:11.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:46:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:11.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:11.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:12 np0005466030 nova_compute[230518]: 2025-10-02 13:46:12.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:46:12 np0005466030 nova_compute[230518]: 2025-10-02 13:46:12.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:46:12 np0005466030 nova_compute[230518]: 2025-10-02 13:46:12.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:13.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:13.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:14 np0005466030 nova_compute[230518]: 2025-10-02 13:46:14.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:14 np0005466030 nova_compute[230518]: 2025-10-02 13:46:14.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:46:14 np0005466030 nova_compute[230518]: 2025-10-02 13:46:14.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:46:14 np0005466030 nova_compute[230518]: 2025-10-02 13:46:14.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:46:14 np0005466030 nova_compute[230518]: 2025-10-02 13:46:14.068 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:46:15 np0005466030 nova_compute[230518]: 2025-10-02 13:46:15.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:46:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:46:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:15.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:46:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:15.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:17.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:17.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:17 np0005466030 nova_compute[230518]: 2025-10-02 13:46:17.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:19 np0005466030 nova_compute[230518]: 2025-10-02 13:46:19.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:46:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:19.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:46:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  2 09:46:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:46:19 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  2 09:46:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:19.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:21.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:21.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:22 np0005466030 nova_compute[230518]: 2025-10-02 13:46:22.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:23.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:23 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:23 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:23 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:23.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:24 np0005466030 nova_compute[230518]: 2025-10-02 13:46:24.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:25 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:25.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:25 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:25 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:25 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:25.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:46:26.009 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:46:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:46:26.010 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:46:26 np0005466030 ovn_metadata_agent[138369]: 2025-10-02 13:46:26.010 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:46:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:46:26 np0005466030 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct  2 09:46:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:27.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:27 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:27 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:27 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:27.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:27 np0005466030 nova_compute[230518]: 2025-10-02 13:46:27.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:28 np0005466030 podman[333177]: 2025-10-02 13:46:28.83916015 +0000 UTC m=+0.087979029 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  2 09:46:28 np0005466030 podman[333176]: 2025-10-02 13:46:28.856071411 +0000 UTC m=+0.106479530 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  2 09:46:29 np0005466030 nova_compute[230518]: 2025-10-02 13:46:29.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:29.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:29 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:29 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:29 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:29.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:30 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:31.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:31 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:31 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:31 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:31.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:32 np0005466030 nova_compute[230518]: 2025-10-02 13:46:32.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:33.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:33 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:33 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:33 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:33.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:34 np0005466030 nova_compute[230518]: 2025-10-02 13:46:34.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:35 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:35.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:35 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:35 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:35 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:35.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:37.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:37 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:37 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:46:37 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:37.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:46:37 np0005466030 nova_compute[230518]: 2025-10-02 13:46:37.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:39 np0005466030 nova_compute[230518]: 2025-10-02 13:46:39.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:39.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:39 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:39 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:39 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:39.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:39 np0005466030 podman[333223]: 2025-10-02 13:46:39.799462492 +0000 UTC m=+0.054977933 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct  2 09:46:39 np0005466030 podman[333224]: 2025-10-02 13:46:39.819766959 +0000 UTC m=+0.073324539 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001)
Oct  2 09:46:40 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:41.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:41 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:41 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:46:41 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:41.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:46:42 np0005466030 nova_compute[230518]: 2025-10-02 13:46:42.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:43.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:43 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:43 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:43 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:43.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:44 np0005466030 nova_compute[230518]: 2025-10-02 13:46:44.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:45 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:45.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:45 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:45 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:45 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:45.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:46:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:47.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:46:47 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:47 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:46:47 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:47.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:46:47 np0005466030 nova_compute[230518]: 2025-10-02 13:46:47.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:49 np0005466030 nova_compute[230518]: 2025-10-02 13:46:49.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:49.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:49 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:49 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:49 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:49.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:50 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:51.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:51 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:51 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:51 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:51.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:52 np0005466030 nova_compute[230518]: 2025-10-02 13:46:52.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:46:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:53.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:46:53 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:53 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:53 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:53.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:54 np0005466030 nova_compute[230518]: 2025-10-02 13:46:54.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:54 np0005466030 nova_compute[230518]: 2025-10-02 13:46:54.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:46:55 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:46:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:55.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:55 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:55 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:55 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:55.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:57.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:57 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:57 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:57 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:57.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:57 np0005466030 nova_compute[230518]: 2025-10-02 13:46:57.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:58 np0005466030 nova_compute[230518]: 2025-10-02 13:46:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:46:58 np0005466030 nova_compute[230518]: 2025-10-02 13:46:58.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  2 09:46:58 np0005466030 systemd-logind[795]: New session 65 of user zuul.
Oct  2 09:46:58 np0005466030 systemd[1]: Started Session 65 of User zuul.
Oct  2 09:46:59 np0005466030 nova_compute[230518]: 2025-10-02 13:46:59.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:46:59 np0005466030 podman[333297]: 2025-10-02 13:46:59.079256098 +0000 UTC m=+0.096386592 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  2 09:46:59 np0005466030 podman[333296]: 2025-10-02 13:46:59.07932206 +0000 UTC m=+0.109471373 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct  2 09:46:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:59.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:46:59 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:46:59 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:46:59 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:59.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:00 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:47:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:01.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:01 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:01 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:01 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:01.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:02 np0005466030 nova_compute[230518]: 2025-10-02 13:47:02.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:47:02 np0005466030 nova_compute[230518]: 2025-10-02 13:47:02.122 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:47:02 np0005466030 nova_compute[230518]: 2025-10-02 13:47:02.123 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:47:02 np0005466030 nova_compute[230518]: 2025-10-02 13:47:02.123 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:47:02 np0005466030 nova_compute[230518]: 2025-10-02 13:47:02.124 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  2 09:47:02 np0005466030 nova_compute[230518]: 2025-10-02 13:47:02.124 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:47:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  2 09:47:02 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/473589523' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  2 09:47:02 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:47:02 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1498764034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:47:02 np0005466030 nova_compute[230518]: 2025-10-02 13:47:02.578 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:47:02 np0005466030 nova_compute[230518]: 2025-10-02 13:47:02.736 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  2 09:47:02 np0005466030 nova_compute[230518]: 2025-10-02 13:47:02.737 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4094MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  2 09:47:02 np0005466030 nova_compute[230518]: 2025-10-02 13:47:02.738 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  2 09:47:02 np0005466030 nova_compute[230518]: 2025-10-02 13:47:02.738 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  2 09:47:02 np0005466030 nova_compute[230518]: 2025-10-02 13:47:02.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:47:03 np0005466030 nova_compute[230518]: 2025-10-02 13:47:03.046 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  2 09:47:03 np0005466030 nova_compute[230518]: 2025-10-02 13:47:03.046 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  2 09:47:03 np0005466030 nova_compute[230518]: 2025-10-02 13:47:03.191 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  2 09:47:03 np0005466030 nova_compute[230518]: 2025-10-02 13:47:03.234 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  2 09:47:03 np0005466030 nova_compute[230518]: 2025-10-02 13:47:03.235 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  2 09:47:03 np0005466030 nova_compute[230518]: 2025-10-02 13:47:03.253 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  2 09:47:03 np0005466030 nova_compute[230518]: 2025-10-02 13:47:03.278 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  2 09:47:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:03.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:03 np0005466030 nova_compute[230518]: 2025-10-02 13:47:03.294 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  2 09:47:03 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:03 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:47:03 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:03.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:47:03 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  2 09:47:03 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/240607502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  2 09:47:03 np0005466030 nova_compute[230518]: 2025-10-02 13:47:03.745 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  2 09:47:03 np0005466030 nova_compute[230518]: 2025-10-02 13:47:03.751 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  2 09:47:03 np0005466030 nova_compute[230518]: 2025-10-02 13:47:03.825 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  2 09:47:03 np0005466030 nova_compute[230518]: 2025-10-02 13:47:03.827 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  2 09:47:03 np0005466030 nova_compute[230518]: 2025-10-02 13:47:03.827 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  2 09:47:04 np0005466030 nova_compute[230518]: 2025-10-02 13:47:04.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:47:05 np0005466030 ovs-vsctl[333638]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  2 09:47:05 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:47:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:05.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:05 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:05 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:05 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:05.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:05 np0005466030 virtqemud[230067]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  2 09:47:05 np0005466030 virtqemud[230067]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  2 09:47:06 np0005466030 virtqemud[230067]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  2 09:47:06 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: cache status {prefix=cache status} (starting...)
Oct  2 09:47:06 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:47:06 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: client ls {prefix=client ls} (starting...)
Oct  2 09:47:06 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:47:06 np0005466030 lvm[333984]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  2 09:47:06 np0005466030 lvm[333984]: VG ceph_vg0 finished
Oct  2 09:47:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:47:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:07.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:47:07 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: damage ls {prefix=damage ls} (starting...)
Oct  2 09:47:07 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:47:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct  2 09:47:07 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3622600288' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  2 09:47:07 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump loads {prefix=dump loads} (starting...)
Oct  2 09:47:07 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:47:07 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:07 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:07 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:07.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:07 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  2 09:47:07 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:47:07 np0005466030 nova_compute[230518]: 2025-10-02 13:47:07.829 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:47:07 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  2 09:47:07 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:47:07 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  2 09:47:07 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1778923128' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  2 09:47:07 np0005466030 nova_compute[230518]: 2025-10-02 13:47:07.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:47:07 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  2 09:47:07 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:47:08 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  2 09:47:08 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:47:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct  2 09:47:08 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/571185985' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  2 09:47:08 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  2 09:47:08 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:47:08 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  2 09:47:08 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:47:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct  2 09:47:08 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2798236164' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  2 09:47:08 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct  2 09:47:08 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3224226010' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  2 09:47:08 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: ops {prefix=ops} (starting...)
Oct  2 09:47:08 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:47:09 np0005466030 nova_compute[230518]: 2025-10-02 13:47:09.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:47:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  2 09:47:09 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1393102153' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  2 09:47:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:47:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:09.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:47:09 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: session ls {prefix=session ls} (starting...)
Oct  2 09:47:09 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct  2 09:47:09 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:09 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:09 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:09.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:09 np0005466030 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: status {prefix=status} (starting...)
Oct  2 09:47:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  2 09:47:09 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3569749802' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  2 09:47:09 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct  2 09:47:09 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2328061460' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  2 09:47:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  2 09:47:10 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1056629558' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  2 09:47:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:47:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct  2 09:47:10 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1861714374' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  2 09:47:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  2 09:47:10 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2714691879' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  2 09:47:10 np0005466030 podman[334526]: 2025-10-02 13:47:10.825089867 +0000 UTC m=+0.064894696 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  2 09:47:10 np0005466030 podman[334523]: 2025-10-02 13:47:10.844599458 +0000 UTC m=+0.086507933 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_managed=true)
Oct  2 09:47:10 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct  2 09:47:10 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2721180045' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  2 09:47:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct  2 09:47:11 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2241957471' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  2 09:47:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:47:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:11.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:47:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  2 09:47:11 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4215150955' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  2 09:47:11 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:11 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:47:11 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:11.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:47:11 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct  2 09:47:11 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1977807061' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  2 09:47:12 np0005466030 nova_compute[230518]: 2025-10-02 13:47:12.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:47:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  2 09:47:12 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1645271435' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  2 09:47:12 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  2 09:47:12 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/392775669' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  2 09:47:12 np0005466030 nova_compute[230518]: 2025-10-02 13:47:12.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:47:13 np0005466030 nova_compute[230518]: 2025-10-02 13:47:13.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:47:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  2 09:47:13 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3564654829' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483459072 unmapped: 47448064 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483459072 unmapped: 47448064 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4979220 data_alloc: 234881024 data_used: 24694784
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483459072 unmapped: 47448064 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19eff8000/0x0/0x1bfc00000, data 0x370cba3/0x3916000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,8])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559434ff0d20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4783696 data_alloc: 218103808 data_used: 12791808
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 66K writes, 260K keys, 66K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.05 MB/s#012Cumulative WAL: 66K writes, 24K syncs, 2.72 writes per sync, written: 0.25 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8130 writes, 30K keys, 8130 commit groups, 1.0 writes per commit group, ingest: 33.16 MB, 0.06 MB/s#012Interval WAL: 8131 writes, 3070 syncs, 2.65 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4783696 data_alloc: 218103808 data_used: 12791808
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436c3e000 session 0x559437e561e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559436f7e3c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559437bcb680
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4783696 data_alloc: 218103808 data_used: 12791808
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559436ec6d20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.497853279s of 17.155778885s, submitted: 27
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: mgrc ms_handle_reset ms_handle_reset con 0x5594350d6c00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3443433125
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3443433125,v1:192.168.122.100:6801/3443433125]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: mgrc handle_mgr_configure stats_period=5
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470188032 unmapped: 60719104 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436c3d800 session 0x559437c432c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479182848 unmapped: 55402496 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x55943a64d800 session 0x559436dde960
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594370a1000 session 0x559437cb0780
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559436c50f00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594370a1000 session 0x559436c51a40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559434c623c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x5594352863c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559434c6e960
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 63987712 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f6a0000/0x0/0x1bfc00000, data 0x3064b50/0x326e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f6a0000/0x0/0x1bfc00000, data 0x3064b50/0x326e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 63987712 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 63987712 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4858586 data_alloc: 218103808 data_used: 12795904
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559436ddfc20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f6a0000/0x0/0x1bfc00000, data 0x3064b50/0x326e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f6a0000/0x0/0x1bfc00000, data 0x3064b50/0x326e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436ea0400 session 0x559434c63a40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 63987712 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x5594351530e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559434c65680
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470532096 unmapped: 64053248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470540288 unmapped: 64045056 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918653 data_alloc: 234881024 data_used: 20762624
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918653 data_alloc: 234881024 data_used: 20762624
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.158052444s of 18.919492722s, submitted: 23
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473210880 unmapped: 61374464 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4955009 data_alloc: 234881024 data_used: 21942272
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473415680 unmapped: 61169664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e12d000/0x0/0x1bfc00000, data 0x342cb83/0x3638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4959757 data_alloc: 234881024 data_used: 21839872
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436f94800 session 0x559437bcab40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e136000/0x0/0x1bfc00000, data 0x342cb83/0x3638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953597 data_alloc: 234881024 data_used: 21839872
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e136000/0x0/0x1bfc00000, data 0x342cb83/0x3638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e136000/0x0/0x1bfc00000, data 0x342cb83/0x3638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559437c57e00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559437bcb860
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.585352898s of 14.913706779s, submitted: 79
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4794544 data_alloc: 218103808 data_used: 12791808
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559434c630e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd7000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd7000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4794544 data_alloc: 218103808 data_used: 12791808
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd7000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4794544 data_alloc: 218103808 data_used: 12791808
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd7000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.042637825s of 15.562180519s, submitted: 27
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468082688 unmapped: 66502656 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559437b0b680
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559436ddfc20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559435e4f0e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4883222 data_alloc: 218103808 data_used: 12795904
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436f94800 session 0x559437b0a960
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x5594351521e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467337216 unmapped: 67248128 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e333000/0x0/0x1bfc00000, data 0x3231ba3/0x343b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4883106 data_alloc: 218103808 data_used: 12791808
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559435f61e00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467353600 unmapped: 67231744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467124224 unmapped: 67461120 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963611 data_alloc: 234881024 data_used: 23748608
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963611 data_alloc: 234881024 data_used: 23748608
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.326089859s of 19.245832443s, submitted: 49
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19dbdd000/0x0/0x1bfc00000, data 0x3980bc6/0x3b8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471351296 unmapped: 63234048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5037977 data_alloc: 234881024 data_used: 24559616
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db4a000/0x0/0x1bfc00000, data 0x3a13bc6/0x3c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5039897 data_alloc: 234881024 data_used: 24702976
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db2e000/0x0/0x1bfc00000, data 0x3a35bc6/0x3c40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x55943d785c00 session 0x559437e56f00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db2d000/0x0/0x1bfc00000, data 0x3a35bd5/0x3c41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5039685 data_alloc: 234881024 data_used: 24715264
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471932928 unmapped: 62652416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db2d000/0x0/0x1bfc00000, data 0x3a35bd5/0x3c41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471932928 unmapped: 62652416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436ea1c00 session 0x559435286d20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471932928 unmapped: 62652416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db2d000/0x0/0x1bfc00000, data 0x3a35bd5/0x3c41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.604205132s of 15.311234474s, submitted: 113
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559434ce12c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559437cb0d20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471932928 unmapped: 62652416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5039749 data_alloc: 234881024 data_used: 24715264
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db28000/0x0/0x1bfc00000, data 0x3a3abd5/0x3c46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559437e56b40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4809599 data_alloc: 218103808 data_used: 12791808
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd6000/0x0/0x1bfc00000, data 0x278db50/0x2997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559434ce0780
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466731008 unmapped: 67854336 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436ea1c00 session 0x559437e565a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466755584 unmapped: 67829760 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd8000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x55943d785c00 session 0x5594351423c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd8000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466812928 unmapped: 67772416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.131870270s of 10.003231049s, submitted: 300
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466829312 unmapped: 67756032 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559437d09680
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4809039 data_alloc: 218103808 data_used: 12795904
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559434ce0960
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edda000/0x0/0x1bfc00000, data 0x278dacf/0x2994000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4807462 data_alloc: 218103808 data_used: 12791808
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edda000/0x0/0x1bfc00000, data 0x278dacf/0x2994000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edda000/0x0/0x1bfc00000, data 0x278dacf/0x2994000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.209687233s of 10.373024940s, submitted: 38
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436ea1c00 session 0x559435f643c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4809244 data_alloc: 218103808 data_used: 12791808
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd9000/0x0/0x1bfc00000, data 0x278db31/0x2995000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 396 ms_handle_reset con 0x5594386f9400 session 0x559436ec6b40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 396 ms_handle_reset con 0x559436ea2400 session 0x559437e52f00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 396 heartbeat osd_stat(store_statfs(0x19edd4000/0x0/0x1bfc00000, data 0x278f7ec/0x2999000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 397 ms_handle_reset con 0x559434792c00 session 0x559437105e00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 397 ms_handle_reset con 0x559434fee000 session 0x559435152960
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4816765 data_alloc: 218103808 data_used: 12808192
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 397 heartbeat osd_stat(store_statfs(0x19edd3000/0x0/0x1bfc00000, data 0x27913d5/0x299a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 397 heartbeat osd_stat(store_statfs(0x19edd3000/0x0/0x1bfc00000, data 0x27913d5/0x299a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.111035347s of 10.823334694s, submitted: 35
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4819739 data_alloc: 218103808 data_used: 12808192
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4819739 data_alloc: 218103808 data_used: 12808192
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4819739 data_alloc: 218103808 data_used: 12808192
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4819739 data_alloc: 218103808 data_used: 12808192
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.684392929s of 18.692880630s, submitted: 12
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559436ea1c00 session 0x559437b0ad20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4824530 data_alloc: 218103808 data_used: 12816384
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19edcd000/0x0/0x1bfc00000, data 0x2794b6d/0x29a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x5594386f9400 session 0x559437bcad20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x55944816f000 session 0x559435de12c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4823650 data_alloc: 218103808 data_used: 12816384
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559434792c00 session 0x559437bca960
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19edce000/0x0/0x1bfc00000, data 0x2794b6d/0x29a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559434fee000 session 0x559436fa2780
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559436ea1c00 session 0x559437d090e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x5594386f9400 session 0x559435d8d860
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x55944816f000 session 0x559435142b40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559434792c00 session 0x559436c514a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4851916 data_alloc: 218103808 data_used: 12816384
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559434fee000 session 0x559436c4d860
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559436ea1c00 session 0x559435152d20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.884056091s of 13.213858604s, submitted: 20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x5594386f9400 session 0x559437bca780
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467066880 unmapped: 67518464 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467075072 unmapped: 67510272 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4883747 data_alloc: 218103808 data_used: 16490496
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x55943cdd2800 session 0x5594351423c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467107840 unmapped: 67477504 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467107840 unmapped: 67477504 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4885237 data_alloc: 218103808 data_used: 16494592
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467116032 unmapped: 67469312 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.991956711s of 10.035635948s, submitted: 8
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467116032 unmapped: 67469312 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467116032 unmapped: 67469312 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19ea20000/0x0/0x1bfc00000, data 0x2b3d85b/0x2d4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 400 ms_handle_reset con 0x559434fee000 session 0x559437c572c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467124224 unmapped: 67461120 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #56. Immutable memtables: 12.
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468312064 unmapped: 66273280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4909236 data_alloc: 218103808 data_used: 16621568
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4913386 data_alloc: 218103808 data_used: 16621568
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4913386 data_alloc: 218103808 data_used: 16621568
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4913386 data_alloc: 218103808 data_used: 16621568
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.320636749s of 19.659267426s, submitted: 22
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 400 ms_handle_reset con 0x559436ea1c00 session 0x5594370ede00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [1])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469295104 unmapped: 65290240 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 400 handle_osd_map epochs [401,401], i have 401, src has [1,401]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x5594386f9400 session 0x559436c512c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 401 heartbeat osd_stat(store_statfs(0x19d729000/0x0/0x1bfc00000, data 0x2c967f9/0x2ea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469303296 unmapped: 65282048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469303296 unmapped: 65282048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 401 heartbeat osd_stat(store_statfs(0x19d725000/0x0/0x1bfc00000, data 0x2c984a6/0x2ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469303296 unmapped: 65282048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4914910 data_alloc: 218103808 data_used: 16629760
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 401 heartbeat osd_stat(store_statfs(0x19d725000/0x0/0x1bfc00000, data 0x2c984a6/0x2ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 401 heartbeat osd_stat(store_statfs(0x19d725000/0x0/0x1bfc00000, data 0x2c984a6/0x2ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x559436cb0c00 session 0x559436fa2000
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x559436c5a800 session 0x559436c4a000
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x559434792c00 session 0x559434ce0780
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469278720 unmapped: 65306624 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x559436c5a800 session 0x559434c63a40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469278720 unmapped: 65306624 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469278720 unmapped: 65306624 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4917912 data_alloc: 218103808 data_used: 16633856
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470343680 unmapped: 64241664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918364 data_alloc: 218103808 data_used: 16687104
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918364 data_alloc: 218103808 data_used: 16687104
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918844 data_alloc: 218103808 data_used: 16736256
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.274785995s of 26.369352341s, submitted: 37
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d71d000/0x0/0x1bfc00000, data 0x2c9ffe5/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4920738 data_alloc: 218103808 data_used: 16736256
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d71d000/0x0/0x1bfc00000, data 0x2c9ffe5/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4922338 data_alloc: 218103808 data_used: 17092608
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d30c000/0x0/0x1bfc00000, data 0x2ca0fe5/0x2eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d30c000/0x0/0x1bfc00000, data 0x2ca0fe5/0x2eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4922590 data_alloc: 218103808 data_used: 17092608
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.667269707s of 12.846582413s, submitted: 6
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x559436cb0c00 session 0x559435142780
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x5594386f9400 session 0x559434c8c5a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x55943cdd2800 session 0x559434c63c20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x559434fee000 session 0x559435f65680
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x559436ea1c00 session 0x559437cb12c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x55943cdd2800 session 0x5594370ecf00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d30c000/0x0/0x1bfc00000, data 0x2ca0fe5/0x2eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4843049 data_alloc: 218103808 data_used: 12832768
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d814000/0x0/0x1bfc00000, data 0x2799fb2/0x29a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4843097 data_alloc: 218103808 data_used: 12832768
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.570983887s of 10.014533043s, submitted: 55
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467828736 unmapped: 66756608 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 403 ms_handle_reset con 0x559434792c00 session 0x559436dde780
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467828736 unmapped: 66756608 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 403 heartbeat osd_stat(store_statfs(0x19d811000/0x0/0x1bfc00000, data 0x279bc5f/0x29ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 404 ms_handle_reset con 0x559436c5a800 session 0x559437e565a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4849429 data_alloc: 218103808 data_used: 12840960
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 404 heartbeat osd_stat(store_statfs(0x19d80e000/0x0/0x1bfc00000, data 0x279d8d4/0x29af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467845120 unmapped: 66740224 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467845120 unmapped: 66740224 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 404 heartbeat osd_stat(store_statfs(0x19d80e000/0x0/0x1bfc00000, data 0x279d8d4/0x29af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467845120 unmapped: 66740224 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4849429 data_alloc: 218103808 data_used: 12840960
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434792c00 session 0x559437d09680
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434fee000 session 0x559436c50f00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436ea1c00 session 0x559434c6e960
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467861504 unmapped: 66723840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467861504 unmapped: 66723840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d80b000/0x0/0x1bfc00000, data 0x279f413/0x29b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4852403 data_alloc: 218103808 data_used: 12840960
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d80b000/0x0/0x1bfc00000, data 0x279f413/0x29b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x55943cdd2800 session 0x559437b0b4a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d80b000/0x0/0x1bfc00000, data 0x279f413/0x29b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.887578964s of 17.205017090s, submitted: 46
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436cb0c00 session 0x559435de1e00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434792c00 session 0x5594351d4d20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434fee000 session 0x559437d09c20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467877888 unmapped: 66707456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436ea1c00 session 0x559437d08d20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x55943cdd2800 session 0x5594377ae780
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4888135 data_alloc: 218103808 data_used: 12840960
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x5594386f9400 session 0x559437d081e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x2bf2413/0x2e05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467845120 unmapped: 66740224 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4920884 data_alloc: 218103808 data_used: 17362944
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.796065331s of 10.041505814s, submitted: 15
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x2bf2413/0x2e05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436ea1c00 session 0x559434c8c5a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x2bf2413/0x2e05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4920884 data_alloc: 218103808 data_used: 17362944
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x2bf2413/0x2e05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4926114 data_alloc: 218103808 data_used: 17371136
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472064000 unmapped: 62521344 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19ceb5000/0x0/0x1bfc00000, data 0x30f6413/0x3309000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,2])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468131840 unmapped: 66453504 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.967789173s of 10.081938744s, submitted: 32
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468131840 unmapped: 66453504 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468148224 unmapped: 66437120 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468099072 unmapped: 66486272 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4969600 data_alloc: 218103808 data_used: 17580032
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468099072 unmapped: 66486272 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cda9000/0x0/0x1bfc00000, data 0x3202413/0x3415000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cda9000/0x0/0x1bfc00000, data 0x3202413/0x3415000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,5])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cda1000/0x0/0x1bfc00000, data 0x3208413/0x341b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974878 data_alloc: 218103808 data_used: 17403904
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974878 data_alloc: 218103808 data_used: 17403904
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974878 data_alloc: 218103808 data_used: 17403904
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.758423805s of 18.601793289s, submitted: 23
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4977182 data_alloc: 218103808 data_used: 17395712
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468213760 unmapped: 66371584 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468213760 unmapped: 66371584 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468213760 unmapped: 66371584 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468221952 unmapped: 66363392 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4977182 data_alloc: 218103808 data_used: 17395712
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468221952 unmapped: 66363392 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468221952 unmapped: 66363392 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.873358727s of 11.896842003s, submitted: 17
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468230144 unmapped: 66355200 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468230144 unmapped: 66355200 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468230144 unmapped: 66355200 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4975934 data_alloc: 218103808 data_used: 17391616
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434792c00 session 0x559437d08d20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434fee000 session 0x559435ddc1e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x55943cdd2800 session 0x559434c6e960
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468230144 unmapped: 66355200 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468238336 unmapped: 66347008 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468238336 unmapped: 66347008 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974974 data_alloc: 218103808 data_used: 17502208
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974974 data_alloc: 218103808 data_used: 17502208
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468254720 unmapped: 66330624 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.424508095s of 15.441400528s, submitted: 12
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469303296 unmapped: 65282048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4983214 data_alloc: 218103808 data_used: 17915904
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4983214 data_alloc: 218103808 data_used: 17915904
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4983214 data_alloc: 218103808 data_used: 17915904
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.449235916s of 15.471648216s, submitted: 15
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436cb1c00 session 0x559437cb0d20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4982142 data_alloc: 218103808 data_used: 17915904
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434792c00 session 0x5594371054a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559437ead000 session 0x559436c4ba40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559437438000 session 0x559435ddd680
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434fee000 session 0x559437d09680
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469319680 unmapped: 65265664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469319680 unmapped: 65265664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469319680 unmapped: 65265664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4861028 data_alloc: 218103808 data_used: 12840960
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d80c000/0x0/0x1bfc00000, data 0x279f413/0x29b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 65257472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 65257472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 406 ms_handle_reset con 0x559436ea1c00 session 0x559437cb12c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 406 heartbeat osd_stat(store_statfs(0x19d808000/0x0/0x1bfc00000, data 0x27a10c0/0x29b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4865202 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 406 heartbeat osd_stat(store_statfs(0x19d808000/0x0/0x1bfc00000, data 0x27a10c0/0x29b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.777113914s of 15.972728729s, submitted: 61
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 65224704 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 65224704 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 65224704 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 65224704 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 65200128 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 65183744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 65183744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 65183744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 65183744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469417984 unmapped: 65167360 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469434368 unmapped: 65150976 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469434368 unmapped: 65150976 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 45.048465729s of 45.059295654s, submitted: 15
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x5594351d41e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x559436fa2960
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559435286000
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559437438000 session 0x559437c425a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559437ead000 session 0x559435153860
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469786624 unmapped: 73195520 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4943462 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469786624 unmapped: 73195520 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19ce1b000/0x0/0x1bfc00000, data 0x318dbff/0x33a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x559436ddeb40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x559435f65860
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4943462 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559437b0b860
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559437438000 session 0x559436c4a3c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 72630272 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 72040448 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19ce1a000/0x0/0x1bfc00000, data 0x318dc0f/0x33a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 72040448 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5020337 data_alloc: 234881024 data_used: 23162880
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 72040448 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19ce1a000/0x0/0x1bfc00000, data 0x318dc0f/0x33a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5020337 data_alloc: 234881024 data_used: 23162880
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.403457642s of 18.507741928s, submitted: 19
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c465000/0x0/0x1bfc00000, data 0x3b3cc0f/0x3d53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5109183 data_alloc: 234881024 data_used: 24010752
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c427000/0x0/0x1bfc00000, data 0x3b78c0f/0x3d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c427000/0x0/0x1bfc00000, data 0x3b78c0f/0x3d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5109183 data_alloc: 234881024 data_used: 24010752
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42c000/0x0/0x1bfc00000, data 0x3b7bc0f/0x3d92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42c000/0x0/0x1bfc00000, data 0x3b7bc0f/0x3d92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5102963 data_alloc: 234881024 data_used: 24010752
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.284965515s of 12.586762428s, submitted: 112
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42b000/0x0/0x1bfc00000, data 0x3b7cc0f/0x3d93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x55943cdd2800 session 0x559435ddc780
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559435132800 session 0x559437c42780
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42b000/0x0/0x1bfc00000, data 0x3b7cc0f/0x3d93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42b000/0x0/0x1bfc00000, data 0x3b7cc0f/0x3d93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,1])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x559434c63a40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 69246976 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 69246976 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 69246976 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x5594370ed4a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559434c65e00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559437438000 session 0x559436ec6b40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x559435152780
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 61.791675568s of 61.911861420s, submitted: 35
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x559436f0f4a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2c0f/0x29b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559435132800 session 0x559437e53c20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559437c42f00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473767936 unmapped: 69214208 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x5594350d7000 session 0x559437c565a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x559437b0bc20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473767936 unmapped: 69214208 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x559437e52960
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473767936 unmapped: 69214208 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918731 data_alloc: 218103808 data_used: 12849152
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559435132800 session 0x559435915c20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f5000/0x0/0x1bfc00000, data 0x2cb2c0f/0x2ec9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473767936 unmapped: 69214208 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559436f6b680
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x5594358acc00 session 0x559437c56b40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4959857 data_alloc: 218103808 data_used: 18165760
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4959857 data_alloc: 218103808 data_used: 18165760
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.014217377s of 17.533912659s, submitted: 26
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 474546176 unmapped: 68435968 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5014409 data_alloc: 218103808 data_used: 18165760
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 67592192 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 67592192 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 67592192 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c7e2000/0x0/0x1bfc00000, data 0x37c4c32/0x39dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5048133 data_alloc: 218103808 data_used: 18374656
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c7dc000/0x0/0x1bfc00000, data 0x37cac32/0x39e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5047641 data_alloc: 218103808 data_used: 18374656
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476446720 unmapped: 66535424 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559435132800 session 0x559436ec72c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.816077232s of 12.077077866s, submitted: 68
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476446720 unmapped: 66535424 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 408 handle_osd_map epochs [408,408], i have 408, src has [1,408]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 408 ms_handle_reset con 0x559436ea1c00 session 0x559435ddc000
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 408 heartbeat osd_stat(store_statfs(0x19c7d8000/0x0/0x1bfc00000, data 0x37cec32/0x39e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476463104 unmapped: 66519040 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 408 ms_handle_reset con 0x559434792800 session 0x559436ddf4a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 408 ms_handle_reset con 0x559436e8c800 session 0x559436fa34a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476594176 unmapped: 66387968 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487800832 unmapped: 61882368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5197727 data_alloc: 234881024 data_used: 29822976
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 408 ms_handle_reset con 0x559442cf9800 session 0x5594351d4f00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487817216 unmapped: 61865984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 409 ms_handle_reset con 0x559434792800 session 0x559434ce12c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487825408 unmapped: 61857792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487849984 unmapped: 61833216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 410 heartbeat osd_stat(store_statfs(0x19ba47000/0x0/0x1bfc00000, data 0x455a1ad/0x4775000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488038400 unmapped: 61644800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488046592 unmapped: 61636608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5214357 data_alloc: 234881024 data_used: 29831168
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488046592 unmapped: 61636608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488046592 unmapped: 61636608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488046592 unmapped: 61636608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 410 heartbeat osd_stat(store_statfs(0x19ba47000/0x0/0x1bfc00000, data 0x455a1ad/0x4775000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.516270638s of 12.276865005s, submitted: 37
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481460224 unmapped: 68222976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559436e8c800 session 0x559436c4d680
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559435132800 session 0x559437bcb4a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481460224 unmapped: 68222976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559436ea1c00 session 0x559435f67c20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5201257 data_alloc: 234881024 data_used: 29831168
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x5594384a4000 session 0x559435e4e780
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559434792800 session 0x559437c43a40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481460224 unmapped: 68222976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481468416 unmapped: 68214784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481476608 unmapped: 68206592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19ba44000/0x0/0x1bfc00000, data 0x455bd4e/0x4779000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481476608 unmapped: 68206592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481476608 unmapped: 68206592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5201257 data_alloc: 234881024 data_used: 29831168
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481476608 unmapped: 68206592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559435132800 session 0x559437c42b40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 68059136 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19ba20000/0x0/0x1bfc00000, data 0x457fd71/0x479e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481632256 unmapped: 68050944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481894400 unmapped: 67788800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5258226 data_alloc: 251658240 data_used: 37351424
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19ba20000/0x0/0x1bfc00000, data 0x457fd71/0x479e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5258226 data_alloc: 251658240 data_used: 37351424
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.335296631s of 16.486038208s, submitted: 17
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19ba20000/0x0/0x1bfc00000, data 0x457fd71/0x479e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482648064 unmapped: 67035136 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5280461 data_alloc: 251658240 data_used: 38633472
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5285421 data_alloc: 251658240 data_used: 39116800
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.030493736s of 10.215833664s, submitted: 10
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5286573 data_alloc: 251658240 data_used: 39518208
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5287453 data_alloc: 251658240 data_used: 39518208
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5287453 data_alloc: 251658240 data_used: 39518208
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559436f95400 session 0x559436f0e960
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5288733 data_alloc: 251658240 data_used: 39571456
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.819889069s of 19.845161438s, submitted: 9
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 412 ms_handle_reset con 0x559437eac800 session 0x559437c561e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 412 ms_handle_reset con 0x55943beb7000 session 0x559437c43e00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 412 ms_handle_reset con 0x55943776fc00 session 0x559437bcaf00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 412 ms_handle_reset con 0x559434792800 session 0x559434ce0960
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 412 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487571456 unmapped: 62111744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 413 ms_handle_reset con 0x559435132800 session 0x559435de12c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487579648 unmapped: 62103552 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 414 ms_handle_reset con 0x559436f95400 session 0x559434c62000
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487202816 unmapped: 62480384 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 414 ms_handle_reset con 0x559437eac800 session 0x5594377af680
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 414 ms_handle_reset con 0x559437eac800 session 0x559435915e00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 414 ms_handle_reset con 0x559434792800 session 0x5594359145a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487227392 unmapped: 62455808 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 414 heartbeat osd_stat(store_statfs(0x199ba7000/0x0/0x1bfc00000, data 0x63f235e/0x6616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487235584 unmapped: 62447616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5538217 data_alloc: 251658240 data_used: 44425216
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487251968 unmapped: 62431232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 415 ms_handle_reset con 0x559435132800 session 0x5594351d4d20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487268352 unmapped: 62414848 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 415 ms_handle_reset con 0x559436e8c800 session 0x559434c8dc20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 415 ms_handle_reset con 0x559436ea1c00 session 0x559434ff1860
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 415 heartbeat osd_stat(store_statfs(0x19b9cc000/0x0/0x1bfc00000, data 0x45ccfb5/0x47f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487268352 unmapped: 62414848 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487284736 unmapped: 62398464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 416 ms_handle_reset con 0x559434792800 session 0x559436f6a1e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487309312 unmapped: 62373888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5321486 data_alloc: 251658240 data_used: 44027904
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487309312 unmapped: 62373888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487309312 unmapped: 62373888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487309312 unmapped: 62373888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 416 heartbeat osd_stat(store_statfs(0x19b9ee000/0x0/0x1bfc00000, data 0x4564a8b/0x4787000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.964550972s of 13.325811386s, submitted: 218
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487325696 unmapped: 62357504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 418 ms_handle_reset con 0x559435132800 session 0x559434ff12c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483909632 unmapped: 65773568 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5137584 data_alloc: 234881024 data_used: 29847552
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 418 ms_handle_reset con 0x559434792c00 session 0x5594351d4780
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 418 ms_handle_reset con 0x559434fee000 session 0x559436c512c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483917824 unmapped: 65765376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 75980800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 418 ms_handle_reset con 0x559436e8c800 session 0x559436f7f2c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 418 heartbeat osd_stat(store_statfs(0x19d7e4000/0x0/0x1bfc00000, data 0x27b6260/0x29d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4949106 data_alloc: 218103808 data_used: 12886016
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 418 heartbeat osd_stat(store_statfs(0x19d7e4000/0x0/0x1bfc00000, data 0x27b6260/0x29d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 418 heartbeat osd_stat(store_statfs(0x19d7e4000/0x0/0x1bfc00000, data 0x27b6260/0x29d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.211503029s of 10.501904488s, submitted: 95
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 69K writes, 270K keys, 69K commit groups, 1.0 writes per commit group, ingest: 0.26 GB, 0.04 MB/s
Cumulative WAL: 69K writes, 25K syncs, 2.70 writes per sync, written: 0.26 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2949 writes, 9921 keys, 2949 commit groups, 1.0 writes per commit group, ingest: 9.42 MB, 0.02 MB/s
Interval WAL: 2949 writes, 1228 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 75956224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 75956224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 75956224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.431304932s of 35.444664001s, submitted: 15
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954123 data_alloc: 218103808 data_used: 12894208
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792800 session 0x559436f7f860
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954051 data_alloc: 218103808 data_used: 12894208
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792c00 session 0x5594351530e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954051 data_alloc: 218103808 data_used: 12894208
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434fee000 session 0x559436c4d2c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559435132800 session 0x559437bca3c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.737661362s of 12.753558159s, submitted: 5
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559436ea1c00 session 0x559437bcab40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4955630 data_alloc: 218103808 data_used: 12898304
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792800 session 0x559436c4a5a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792c00 session 0x559436c51680
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954907 data_alloc: 218103808 data_used: 12898304
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.274845123s of 10.169509888s, submitted: 30
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 75931648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 75931648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473759744 unmapped: 75923456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434fee000 session 0x559435d8d860
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473759744 unmapped: 75923456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954212 data_alloc: 218103808 data_used: 12894208
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473759744 unmapped: 75923456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559435132800 session 0x559437b0be00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473759744 unmapped: 75923456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559436ea1c00 session 0x5594377afc20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792800 session 0x5594352874a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953898 data_alloc: 218103808 data_used: 16105472
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e01/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e01/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.661519051s of 10.000641823s, submitted: 19
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e01/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792c00 session 0x559435f67e00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434fee000 session 0x559436c4dc20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5032045 data_alloc: 218103808 data_used: 16105472
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19cde8000/0x0/0x1bfc00000, data 0x31b1d9f/0x33d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559435132800 session 0x559434ff12c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5037331 data_alloc: 218103808 data_used: 16113664
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.522792816s of 12.859356880s, submitted: 17
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5037331 data_alloc: 218103808 data_used: 16113664
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559436f95400 session 0x559436dde1e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559437eac800 session 0x559437bcb860
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792800 session 0x559435ddcd20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5037331 data_alloc: 218103808 data_used: 16113664
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792c00 session 0x559436f7f680
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434fee000 session 0x559437e53c20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559435132800 session 0x559437b0a3c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cdc0000/0x0/0x1bfc00000, data 0x31d7a5a/0x33fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476905472 unmapped: 72777728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476905472 unmapped: 72777728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792800 session 0x559436f6be00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792c00 session 0x559436c510e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476905472 unmapped: 72777728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5113591 data_alloc: 234881024 data_used: 24510464
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.465334892s of 10.475492477s, submitted: 2
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434fee000 session 0x559435286d20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476913664 unmapped: 72769536 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde4000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559437eac800 session 0x559435915860
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x55943776fc00 session 0x559437c43860
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 72785920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792800 session 0x559437b0ad20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 72785920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 421 ms_handle_reset con 0x559434792c00 session 0x559435152d20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 72785920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 421 heartbeat osd_stat(store_statfs(0x19cde1000/0x0/0x1bfc00000, data 0x31b56a5/0x33dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,1])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472244224 unmapped: 77438976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4967251 data_alloc: 218103808 data_used: 15532032
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 421 ms_handle_reset con 0x559434fee000 session 0x559434c8d680
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472244224 unmapped: 77438976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472244224 unmapped: 77438976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472244224 unmapped: 77438976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472285184 unmapped: 77398016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472227840 unmapped: 77455360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4967251 data_alloc: 218103808 data_used: 15532032
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.788832664s of 10.246772766s, submitted: 174
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 421 heartbeat osd_stat(store_statfs(0x19d7dc000/0x0/0x1bfc00000, data 0x27bb6a5/0x29e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 421 ms_handle_reset con 0x559437eac800 session 0x559437d08d20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472227840 unmapped: 77455360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 421 ms_handle_reset con 0x559436f68800 session 0x559437d09c20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476839936 unmapped: 72843264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476856320 unmapped: 72826880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 421 handle_osd_map epochs [422,422], i have 422, src has [1,422]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434792800 session 0x5594351d5860
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4973367 data_alloc: 218103808 data_used: 16130048
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d7d7000/0x0/0x1bfc00000, data 0x27bd246/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d7d7000/0x0/0x1bfc00000, data 0x27bd246/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 72785920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d7d8000/0x0/0x1bfc00000, data 0x27bd246/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476848128 unmapped: 72835072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4972486 data_alloc: 218103808 data_used: 16134144
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434792c00 session 0x5594351d4f00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.671635628s of 10.225932121s, submitted: 147
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478871552 unmapped: 70811648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d7d8000/0x0/0x1bfc00000, data 0x27bd20d/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434fee000 session 0x559436ec6d20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477257728 unmapped: 72425472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559437eac800 session 0x559436ddeb40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5005971 data_alloc: 218103808 data_used: 16130048
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5005971 data_alloc: 218103808 data_used: 16130048
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5005971 data_alloc: 218103808 data_used: 16130048
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559435132000 session 0x559436c4c780
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434792800 session 0x559437d08960
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5005971 data_alloc: 218103808 data_used: 16130048
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434792c00 session 0x5594370f4f00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.234014511s of 20.326173782s, submitted: 33
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434fee000 session 0x559436f7ed20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5025344 data_alloc: 218103808 data_used: 18558976
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5025344 data_alloc: 218103808 data_used: 18558976
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.673128128s of 12.695754051s, submitted: 6
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483557376 unmapped: 66125824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19cd68000/0x0/0x1bfc00000, data 0x2e1d246/0x3046000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,5])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482705408 unmapped: 66977792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5106862 data_alloc: 234881024 data_used: 19513344
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482721792 unmapped: 66961408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c637000/0x0/0x1bfc00000, data 0x3548246/0x3771000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3565246/0x378e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5123676 data_alloc: 234881024 data_used: 20525056
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3565246/0x378e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3565246/0x378e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3565246/0x378e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.902449608s of 11.393979073s, submitted: 107
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484753408 unmapped: 64929792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5121647 data_alloc: 234881024 data_used: 20537344
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559442cf8c00 session 0x559437b0a3c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c61e000/0x0/0x1bfc00000, data 0x3565274/0x3790000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5127613 data_alloc: 234881024 data_used: 20541440
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c619000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c619000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c619000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5127613 data_alloc: 234881024 data_used: 20541440
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.034794807s of 11.101709366s, submitted: 10
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x55943a64d400 session 0x559435e4f0e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x5594358ad400 session 0x559437bcb860
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c61a000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5126189 data_alloc: 234881024 data_used: 20541440
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792800 session 0x559434ff12c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792c00 session 0x559435f67e00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c61a000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434fee000 session 0x5594352874a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559442cf8c00 session 0x5594377afc20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5ee000/0x0/0x1bfc00000, data 0x3591070/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5134778 data_alloc: 234881024 data_used: 20692992
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5ee000/0x0/0x1bfc00000, data 0x3591070/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.602036476s of 11.628366470s, submitted: 7
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5ee000/0x0/0x1bfc00000, data 0x3591070/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5157810 data_alloc: 234881024 data_used: 20692992
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5ec000/0x0/0x1bfc00000, data 0x389c070/0x37c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5157842 data_alloc: 234881024 data_used: 20692992
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484892672 unmapped: 64790528 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484253696 unmapped: 65429504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484261888 unmapped: 65421312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c4ea000/0x0/0x1bfc00000, data 0x399e070/0x38c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484270080 unmapped: 65413120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484270080 unmapped: 65413120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5190426 data_alloc: 234881024 data_used: 23044096
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484270080 unmapped: 65413120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484270080 unmapped: 65413120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.007882118s of 15.059731483s, submitted: 11
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c462000/0x0/0x1bfc00000, data 0x3a53070/0x394c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5219236 data_alloc: 234881024 data_used: 23040000
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c462000/0x0/0x1bfc00000, data 0x3a53070/0x394c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483991552 unmapped: 65691648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483991552 unmapped: 65691648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5213860 data_alloc: 234881024 data_used: 23044096
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c462000/0x0/0x1bfc00000, data 0x3a53070/0x394c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483991552 unmapped: 65691648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.056324005s of 11.241897583s, submitted: 37
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c457000/0x0/0x1bfc00000, data 0x3a5a070/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5214202 data_alloc: 234881024 data_used: 23044096
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c45b000/0x0/0x1bfc00000, data 0x3a5a070/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c45b000/0x0/0x1bfc00000, data 0x3a5a070/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5214202 data_alloc: 234881024 data_used: 23044096
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792800 session 0x559436c51680
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792c00 session 0x559437cb1a40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c45b000/0x0/0x1bfc00000, data 0x3a5a070/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434fee000 session 0x559437d081e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c486000/0x0/0x1bfc00000, data 0x3a3003d/0x3927000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5204715 data_alloc: 234881024 data_used: 22904832
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x5594358ad400 session 0x559437cb1e00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.740118027s of 13.175251007s, submitted: 17
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559442cf8c00 session 0x559437c57a40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5184485 data_alloc: 234881024 data_used: 22790144
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5fe000/0x0/0x1bfc00000, data 0x38b903d/0x37b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792800 session 0x559437d09860
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5fe000/0x0/0x1bfc00000, data 0x38b903d/0x37b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 424 ms_handle_reset con 0x559437eac800 session 0x559435152d20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 424 ms_handle_reset con 0x559436c5bc00 session 0x559436c51e00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5172513 data_alloc: 234881024 data_used: 22794240
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 424 heartbeat osd_stat(store_statfs(0x19c616000/0x0/0x1bfc00000, data 0x3568cea/0x3797000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.032597542s of 10.602803230s, submitted: 41
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 424 ms_handle_reset con 0x559434792c00 session 0x559435d8d680
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 424 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3568cea/0x3797000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 424 handle_osd_map epochs [425,425], i have 425, src has [1,425]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5175215 data_alloc: 234881024 data_used: 22802432
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 425 heartbeat osd_stat(store_statfs(0x19c613000/0x0/0x1bfc00000, data 0x356a829/0x379a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484024320 unmapped: 65658880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484024320 unmapped: 65658880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5006019 data_alloc: 218103808 data_used: 16154624
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 425 heartbeat osd_stat(store_statfs(0x19d012000/0x0/0x1bfc00000, data 0x27c2829/0x29f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 425 ms_handle_reset con 0x559434fee000 session 0x5594351430e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 425 heartbeat osd_stat(store_statfs(0x19d3bc000/0x0/0x1bfc00000, data 0x27c27c7/0x29f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.051541328s of 11.549398422s, submitted: 39
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5004722 data_alloc: 218103808 data_used: 16162816
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 426 handle_osd_map epochs [426,426], i have 426, src has [1,426]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 426 heartbeat osd_stat(store_statfs(0x19d3ba000/0x0/0x1bfc00000, data 0x27c4456/0x29f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482099200 unmapped: 67584000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 426 ms_handle_reset con 0x559434792800 session 0x559436eab2c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5007079 data_alloc: 218103808 data_used: 16162816
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 426 heartbeat osd_stat(store_statfs(0x19d3bc000/0x0/0x1bfc00000, data 0x27c4428/0x29f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 426 heartbeat osd_stat(store_statfs(0x19d3bc000/0x0/0x1bfc00000, data 0x27c4428/0x29f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 426 handle_osd_map epochs [427,427], i have 427, src has [1,427]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.208436966s of 10.811527252s, submitted: 28
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5010053 data_alloc: 218103808 data_used: 16162816
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x27c5f67/0x29f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x27c5f67/0x29f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5010053 data_alloc: 218103808 data_used: 16162816
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559434792c00 session 0x559435d8d0e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c5bc00 session 0x559435153860
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x27c5f67/0x29f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559437eac800 session 0x559436c4b0e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5010053 data_alloc: 218103808 data_used: 16162816
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.400998116s of 10.534256935s, submitted: 15
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x5594358ad400 session 0x559436fa30e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19d3ba000/0x0/0x1bfc00000, data 0x27c5f67/0x29f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,1,0,1,4])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559434792800 session 0x559435de10e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485982208 unmapped: 63700992 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1af90/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559434792c00 session 0x559436fa30e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5056307 data_alloc: 218103808 data_used: 16162816
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1afc9/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1afc9/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5056307 data_alloc: 218103808 data_used: 16162816
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5056307 data_alloc: 218103808 data_used: 16162816
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1afc9/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485810176 unmapped: 63873024 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1afc9/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5056307 data_alloc: 218103808 data_used: 16162816
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.333019257s of 21.654922485s, submitted: 47
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce63000/0x0/0x1bfc00000, data 0x2d1afec/0x2f4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c5bc00 session 0x559437d09860
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5058092 data_alloc: 218103808 data_used: 16162816
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce63000/0x0/0x1bfc00000, data 0x2d1afec/0x2f4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5095692 data_alloc: 234881024 data_used: 21016576
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce63000/0x0/0x1bfc00000, data 0x2d1afec/0x2f4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096012 data_alloc: 234881024 data_used: 21024768
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.898363113s of 14.440773964s, submitted: 5
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487833600 unmapped: 61849600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19cb00000/0x0/0x1bfc00000, data 0x307dfec/0x32ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487849984 unmapped: 61833216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca7a000/0x0/0x1bfc00000, data 0x3103fec/0x3334000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5140300 data_alloc: 234881024 data_used: 21843968
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca7a000/0x0/0x1bfc00000, data 0x3103fec/0x3334000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488087552 unmapped: 61595648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5138620 data_alloc: 234881024 data_used: 21848064
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559437eac800 session 0x559437d081e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436e8d000 session 0x559436f6b680
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488087552 unmapped: 61595648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.954394341s of 10.175483704s, submitted: 76
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559434792800 session 0x559435f67e00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c5dc00 session 0x559435d8c3c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x5594450ed400 session 0x559437d08000
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c6e800 session 0x559436fa3860
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5139012 data_alloc: 234881024 data_used: 21852160
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5139012 data_alloc: 234881024 data_used: 21852160
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5139012 data_alloc: 234881024 data_used: 21852160
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.853529930s of 15.902283669s, submitted: 13
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c6e800 session 0x559435286d20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca49000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5140531 data_alloc: 234881024 data_used: 21905408
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca49000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5140531 data_alloc: 234881024 data_used: 21905408
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca49000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.303579330s of 12.341936111s, submitted: 8
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5144999 data_alloc: 234881024 data_used: 22102016
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5155079 data_alloc: 234881024 data_used: 22765568
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488112128 unmapped: 61571072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5155079 data_alloc: 234881024 data_used: 22765568
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488112128 unmapped: 61571072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488112128 unmapped: 61571072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.465478897s of 13.490488052s, submitted: 6
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488112128 unmapped: 61571072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 handle_osd_map epochs [428,428], i have 428, src has [1,428]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x55943ae61000 session 0x559434c8d0e0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488128512 unmapped: 61554688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5177867 data_alloc: 234881024 data_used: 22937600
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca3c000/0x0/0x1bfc00000, data 0x3304c22/0x3372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559437438c00 session 0x5594377aed20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x55943725c800 session 0x559435d8c780
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5179373 data_alloc: 234881024 data_used: 22937600
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca3c000/0x0/0x1bfc00000, data 0x3304c22/0x3372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488144896 unmapped: 61538304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488144896 unmapped: 61538304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488153088 unmapped: 61530112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.730504036s of 11.801014900s, submitted: 24
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559434792800 session 0x559436f7fc20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488456192 unmapped: 61227008 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5183366 data_alloc: 234881024 data_used: 22941696
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488464384 unmapped: 61218816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca17000/0x0/0x1bfc00000, data 0x3328c45/0x3397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca17000/0x0/0x1bfc00000, data 0x3328c45/0x3397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5183818 data_alloc: 234881024 data_used: 22949888
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca17000/0x0/0x1bfc00000, data 0x3328c45/0x3397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5183818 data_alloc: 234881024 data_used: 22949888
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.934249878s of 12.953603745s, submitted: 6
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c702000/0x0/0x1bfc00000, data 0x363dc45/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5225783 data_alloc: 234881024 data_used: 24535040
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c702000/0x0/0x1bfc00000, data 0x363dc45/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5229843 data_alloc: 234881024 data_used: 24678400
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c62f000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5229843 data_alloc: 234881024 data_used: 24678400
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.297815323s of 15.375535965s, submitted: 9
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c627000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5226883 data_alloc: 234881024 data_used: 24678400
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c627000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c627000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5226883 data_alloc: 234881024 data_used: 24678400
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c627000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559436c6e800 session 0x559437c56960
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.852976799s of 10.868772507s, submitted: 5
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559437438c00 session 0x559436f7eb40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x55943ae61000 session 0x559434c6f680
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5220223 data_alloc: 234881024 data_used: 24690688
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c653000/0x0/0x1bfc00000, data 0x36ecc22/0x375a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559437503400 session 0x559437b0a5a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559434792800 session 0x559437cb1e00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486440960 unmapped: 63242240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559436c6e800 session 0x559435f61e00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486457344 unmapped: 63225856 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5188901 data_alloc: 234881024 data_used: 24559616
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 429 handle_osd_map epochs [429,429], i have 429, src has [1,429]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 429 ms_handle_reset con 0x559437438c00 session 0x559437c565a0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 429 heartbeat osd_stat(store_statfs(0x19ca3d000/0x0/0x1bfc00000, data 0x313e8cf/0x3370000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486473728 unmapped: 63209472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486473728 unmapped: 63209472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.918631554s of 10.093006134s, submitted: 66
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 429 ms_handle_reset con 0x559436c3e400 session 0x559435f61a40
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 429 ms_handle_reset con 0x55943beb4400 session 0x559437105e00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5182857 data_alloc: 234881024 data_used: 24567808
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 429 heartbeat osd_stat(store_statfs(0x19ca3e000/0x0/0x1bfc00000, data 0x313e8cf/0x3370000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 429 ms_handle_reset con 0x55943beb4400 session 0x559437c56000
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486498304 unmapped: 63184896 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19ca3a000/0x0/0x1bfc00000, data 0x314040e/0x3373000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486506496 unmapped: 63176704 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5186216 data_alloc: 234881024 data_used: 24576000
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 ms_handle_reset con 0x559434792800 session 0x559437e532c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 ms_handle_reset con 0x559436c3e400 session 0x559437c432c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486555648 unmapped: 63127552 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486604800 unmapped: 63078400 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486645760 unmapped: 63037440 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486645760 unmapped: 63037440 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486653952 unmapped: 63029248 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486670336 unmapped: 63012864 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486670336 unmapped: 63012864 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486686720 unmapped: 62996480 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486686720 unmapped: 62996480 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486686720 unmapped: 62996480 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486703104 unmapped: 62980096 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486703104 unmapped: 62980096 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486703104 unmapped: 62980096 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 62963712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 62963712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'config diff' '{prefix=config diff}'
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'config show' '{prefix=config show}'
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'counter dump' '{prefix=counter dump}'
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'counter schema' '{prefix=counter schema}'
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485924864 unmapped: 63758336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485990400 unmapped: 63692800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'log dump' '{prefix=log dump}'
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 497090560 unmapped: 52592640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'perf dump' '{prefix=perf dump}'
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'perf schema' '{prefix=perf schema}'
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485679104 unmapped: 64004096 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 63995904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 63995904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 63995904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 63995904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 63995904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 63995904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 63995904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 63995904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485695488 unmapped: 63987712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485695488 unmapped: 63987712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 63979520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 63979520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 63979520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 63979520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 63979520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 63979520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485711872 unmapped: 63971328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485711872 unmapped: 63971328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485711872 unmapped: 63971328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485711872 unmapped: 63971328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485711872 unmapped: 63971328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485711872 unmapped: 63971328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485720064 unmapped: 63963136 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485720064 unmapped: 63963136 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485728256 unmapped: 63954944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485728256 unmapped: 63954944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485736448 unmapped: 63946752 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485736448 unmapped: 63946752 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485736448 unmapped: 63946752 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485736448 unmapped: 63946752 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485736448 unmapped: 63946752 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485744640 unmapped: 63938560 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485752832 unmapped: 63930368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485752832 unmapped: 63930368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485752832 unmapped: 63930368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485752832 unmapped: 63930368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485752832 unmapped: 63930368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485752832 unmapped: 63930368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485752832 unmapped: 63930368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485752832 unmapped: 63930368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485761024 unmapped: 63922176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485769216 unmapped: 63913984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485769216 unmapped: 63913984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485769216 unmapped: 63913984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485769216 unmapped: 63913984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485769216 unmapped: 63913984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485769216 unmapped: 63913984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485769216 unmapped: 63913984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485777408 unmapped: 63905792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485777408 unmapped: 63905792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485777408 unmapped: 63905792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485777408 unmapped: 63905792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485777408 unmapped: 63905792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485777408 unmapped: 63905792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485834752 unmapped: 63848448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485834752 unmapped: 63848448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485834752 unmapped: 63848448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485834752 unmapped: 63848448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485834752 unmapped: 63848448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 71K writes, 275K keys, 71K commit groups, 1.0 writes per commit group, ingest: 0.27 GB, 0.04 MB/s#012Cumulative WAL: 71K writes, 26K syncs, 2.69 writes per sync, written: 0.27 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1937 writes, 5766 keys, 1937 commit groups, 1.0 writes per commit group, ingest: 5.33 MB, 0.01 MB/s#012Interval WAL: 1937 writes, 854 syncs, 2.27 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485842944 unmapped: 63840256 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485842944 unmapped: 63840256 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485851136 unmapped: 63832064 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485851136 unmapped: 63832064 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485851136 unmapped: 63832064 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485851136 unmapped: 63832064 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485851136 unmapped: 63832064 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485851136 unmapped: 63832064 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485859328 unmapped: 63823872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485859328 unmapped: 63823872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485859328 unmapped: 63823872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485859328 unmapped: 63823872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485875712 unmapped: 63807488 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485875712 unmapped: 63807488 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485875712 unmapped: 63807488 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485875712 unmapped: 63807488 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485883904 unmapped: 63799296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485883904 unmapped: 63799296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485883904 unmapped: 63799296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485883904 unmapped: 63799296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485883904 unmapped: 63799296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485883904 unmapped: 63799296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485883904 unmapped: 63799296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485883904 unmapped: 63799296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485892096 unmapped: 63791104 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485892096 unmapped: 63791104 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485892096 unmapped: 63791104 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485900288 unmapped: 63782912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485900288 unmapped: 63782912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485900288 unmapped: 63782912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485900288 unmapped: 63782912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485900288 unmapped: 63782912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485900288 unmapped: 63782912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485900288 unmapped: 63782912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485900288 unmapped: 63782912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485908480 unmapped: 63774720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485908480 unmapped: 63774720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485908480 unmapped: 63774720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485908480 unmapped: 63774720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485916672 unmapped: 63766528 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485924864 unmapped: 63758336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485924864 unmapped: 63758336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485924864 unmapped: 63758336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485924864 unmapped: 63758336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485924864 unmapped: 63758336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485924864 unmapped: 63758336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485924864 unmapped: 63758336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485941248 unmapped: 63741952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485941248 unmapped: 63741952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485941248 unmapped: 63741952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485941248 unmapped: 63741952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485941248 unmapped: 63741952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485941248 unmapped: 63741952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485941248 unmapped: 63741952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485941248 unmapped: 63741952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485949440 unmapped: 63733760 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485949440 unmapped: 63733760 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485949440 unmapped: 63733760 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485949440 unmapped: 63733760 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485957632 unmapped: 63725568 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485957632 unmapped: 63725568 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485957632 unmapped: 63725568 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485965824 unmapped: 63717376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485974016 unmapped: 63709184 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485974016 unmapped: 63709184 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485974016 unmapped: 63709184 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485974016 unmapped: 63709184 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485974016 unmapped: 63709184 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485974016 unmapped: 63709184 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485974016 unmapped: 63709184 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485982208 unmapped: 63700992 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485982208 unmapped: 63700992 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485982208 unmapped: 63700992 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485982208 unmapped: 63700992 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485998592 unmapped: 63684608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485998592 unmapped: 63684608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485998592 unmapped: 63684608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485998592 unmapped: 63684608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485998592 unmapped: 63684608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485998592 unmapped: 63684608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485998592 unmapped: 63684608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486006784 unmapped: 63676416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486006784 unmapped: 63676416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486006784 unmapped: 63676416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486006784 unmapped: 63676416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486006784 unmapped: 63676416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486006784 unmapped: 63676416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486014976 unmapped: 63668224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486023168 unmapped: 63660032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486031360 unmapped: 63651840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486031360 unmapped: 63651840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486031360 unmapped: 63651840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486031360 unmapped: 63651840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486031360 unmapped: 63651840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486039552 unmapped: 63643648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486047744 unmapped: 63635456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 288.169189453s of 288.322509766s, submitted: 73
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486055936 unmapped: 63627264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486064128 unmapped: 63619072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486080512 unmapped: 63602688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,2])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486105088 unmapped: 63578112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486121472 unmapped: 63561728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039208 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486146048 unmapped: 63537152 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486146048 unmapped: 63537152 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486170624 unmapped: 63512576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,0,0,2,0,1])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486195200 unmapped: 63488000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486203392 unmapped: 63479808 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 63455232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 63455232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 63455232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 63455232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 63455232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 63455232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 63455232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 63455232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 63455232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486236160 unmapped: 63447040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486236160 unmapped: 63447040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486236160 unmapped: 63447040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486252544 unmapped: 63430656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486252544 unmapped: 63430656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486252544 unmapped: 63430656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486252544 unmapped: 63430656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486252544 unmapped: 63430656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486252544 unmapped: 63430656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486260736 unmapped: 63422464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486260736 unmapped: 63422464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486260736 unmapped: 63422464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486260736 unmapped: 63422464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486260736 unmapped: 63422464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486260736 unmapped: 63422464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486260736 unmapped: 63422464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 63414272 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 63414272 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 63414272 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 63414272 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486277120 unmapped: 63406080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486277120 unmapped: 63406080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486277120 unmapped: 63406080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486277120 unmapped: 63406080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486277120 unmapped: 63406080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486277120 unmapped: 63406080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486277120 unmapped: 63406080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486285312 unmapped: 63397888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486285312 unmapped: 63397888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486285312 unmapped: 63397888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486293504 unmapped: 63389696 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486293504 unmapped: 63389696 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486293504 unmapped: 63389696 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486301696 unmapped: 63381504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486301696 unmapped: 63381504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486301696 unmapped: 63381504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486301696 unmapped: 63381504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486301696 unmapped: 63381504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486301696 unmapped: 63381504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486301696 unmapped: 63381504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486301696 unmapped: 63381504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486309888 unmapped: 63373312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486309888 unmapped: 63373312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486309888 unmapped: 63373312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486309888 unmapped: 63373312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486309888 unmapped: 63373312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486309888 unmapped: 63373312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486309888 unmapped: 63373312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486309888 unmapped: 63373312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486309888 unmapped: 63373312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486318080 unmapped: 63365120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486318080 unmapped: 63365120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486326272 unmapped: 63356928 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486326272 unmapped: 63356928 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486326272 unmapped: 63356928 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486326272 unmapped: 63356928 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486326272 unmapped: 63356928 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486334464 unmapped: 63348736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486334464 unmapped: 63348736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486334464 unmapped: 63348736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486334464 unmapped: 63348736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486334464 unmapped: 63348736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486334464 unmapped: 63348736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486334464 unmapped: 63348736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486342656 unmapped: 63340544 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486350848 unmapped: 63332352 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486350848 unmapped: 63332352 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486350848 unmapped: 63332352 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486367232 unmapped: 63315968 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486367232 unmapped: 63315968 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486367232 unmapped: 63315968 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486375424 unmapped: 63307776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486375424 unmapped: 63307776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486383616 unmapped: 63299584 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486383616 unmapped: 63299584 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486391808 unmapped: 63291392 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486391808 unmapped: 63291392 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486391808 unmapped: 63291392 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486391808 unmapped: 63291392 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486391808 unmapped: 63291392 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486391808 unmapped: 63291392 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486400000 unmapped: 63283200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486400000 unmapped: 63283200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486400000 unmapped: 63283200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486408192 unmapped: 63275008 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486440960 unmapped: 63242240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486440960 unmapped: 63242240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486449152 unmapped: 63234048 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486449152 unmapped: 63234048 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486449152 unmapped: 63234048 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486457344 unmapped: 63225856 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486457344 unmapped: 63225856 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486457344 unmapped: 63225856 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486465536 unmapped: 63217664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486465536 unmapped: 63217664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486465536 unmapped: 63217664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486465536 unmapped: 63217664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486465536 unmapped: 63217664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486465536 unmapped: 63217664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486465536 unmapped: 63217664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486473728 unmapped: 63209472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486481920 unmapped: 63201280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486481920 unmapped: 63201280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486481920 unmapped: 63201280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486498304 unmapped: 63184896 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486498304 unmapped: 63184896 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486506496 unmapped: 63176704 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486506496 unmapped: 63176704 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486506496 unmapped: 63176704 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486506496 unmapped: 63176704 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486506496 unmapped: 63176704 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486506496 unmapped: 63176704 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486514688 unmapped: 63168512 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486514688 unmapped: 63168512 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486514688 unmapped: 63168512 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486514688 unmapped: 63168512 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486522880 unmapped: 63160320 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486522880 unmapped: 63160320 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486531072 unmapped: 63152128 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486531072 unmapped: 63152128 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486555648 unmapped: 63127552 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486555648 unmapped: 63127552 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486604800 unmapped: 63078400 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486604800 unmapped: 63078400 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486604800 unmapped: 63078400 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486645760 unmapped: 63037440 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486645760 unmapped: 63037440 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486645760 unmapped: 63037440 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486645760 unmapped: 63037440 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486653952 unmapped: 63029248 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486686720 unmapped: 62996480 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486686720 unmapped: 62996480 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486703104 unmapped: 62980096 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486703104 unmapped: 62980096 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 62963712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 62963712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 62963712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 62963712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486727680 unmapped: 62955520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486727680 unmapped: 62955520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486727680 unmapped: 62955520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486727680 unmapped: 62955520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486735872 unmapped: 62947328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486735872 unmapped: 62947328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486735872 unmapped: 62947328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486735872 unmapped: 62947328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486735872 unmapped: 62947328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486735872 unmapped: 62947328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486735872 unmapped: 62947328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486744064 unmapped: 62939136 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486752256 unmapped: 62930944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486752256 unmapped: 62930944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486752256 unmapped: 62930944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486752256 unmapped: 62930944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486752256 unmapped: 62930944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486752256 unmapped: 62930944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486752256 unmapped: 62930944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486752256 unmapped: 62930944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486760448 unmapped: 62922752 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486760448 unmapped: 62922752 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486768640 unmapped: 62914560 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486776832 unmapped: 62906368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486776832 unmapped: 62906368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486776832 unmapped: 62906368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486776832 unmapped: 62906368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486776832 unmapped: 62906368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486785024 unmapped: 62898176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486785024 unmapped: 62898176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486785024 unmapped: 62898176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486793216 unmapped: 62889984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486793216 unmapped: 62889984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486793216 unmapped: 62889984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486793216 unmapped: 62889984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486793216 unmapped: 62889984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486801408 unmapped: 62881792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486801408 unmapped: 62881792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486809600 unmapped: 62873600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486809600 unmapped: 62873600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486809600 unmapped: 62873600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486809600 unmapped: 62873600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486809600 unmapped: 62873600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486809600 unmapped: 62873600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486817792 unmapped: 62865408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486817792 unmapped: 62865408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486817792 unmapped: 62865408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486825984 unmapped: 62857216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486825984 unmapped: 62857216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486825984 unmapped: 62857216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486825984 unmapped: 62857216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486825984 unmapped: 62857216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486834176 unmapped: 62849024 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486834176 unmapped: 62849024 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486834176 unmapped: 62849024 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486858752 unmapped: 62824448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486858752 unmapped: 62824448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486858752 unmapped: 62824448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486858752 unmapped: 62824448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486858752 unmapped: 62824448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486858752 unmapped: 62824448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486858752 unmapped: 62824448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486858752 unmapped: 62824448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486866944 unmapped: 62816256 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486866944 unmapped: 62816256 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486866944 unmapped: 62816256 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486875136 unmapped: 62808064 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486875136 unmapped: 62808064 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486883328 unmapped: 62799872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486883328 unmapped: 62799872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486883328 unmapped: 62799872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486883328 unmapped: 62799872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486883328 unmapped: 62799872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486883328 unmapped: 62799872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486891520 unmapped: 62791680 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486891520 unmapped: 62791680 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486891520 unmapped: 62791680 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486891520 unmapped: 62791680 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486899712 unmapped: 62783488 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486899712 unmapped: 62783488 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486899712 unmapped: 62783488 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486907904 unmapped: 62775296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486907904 unmapped: 62775296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486907904 unmapped: 62775296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486907904 unmapped: 62775296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486907904 unmapped: 62775296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486916096 unmapped: 62767104 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486916096 unmapped: 62767104 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486916096 unmapped: 62767104 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486916096 unmapped: 62767104 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486924288 unmapped: 62758912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486924288 unmapped: 62758912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486932480 unmapped: 62750720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486932480 unmapped: 62750720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486932480 unmapped: 62750720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486932480 unmapped: 62750720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486932480 unmapped: 62750720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486932480 unmapped: 62750720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486932480 unmapped: 62750720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486932480 unmapped: 62750720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486948864 unmapped: 62734336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486948864 unmapped: 62734336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486948864 unmapped: 62734336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486948864 unmapped: 62734336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486948864 unmapped: 62734336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486948864 unmapped: 62734336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486957056 unmapped: 62726144 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486957056 unmapped: 62726144 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486957056 unmapped: 62726144 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486957056 unmapped: 62726144 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486965248 unmapped: 62717952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486965248 unmapped: 62717952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486973440 unmapped: 62709760 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486973440 unmapped: 62709760 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486973440 unmapped: 62709760 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486973440 unmapped: 62709760 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486989824 unmapped: 62693376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486989824 unmapped: 62693376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486989824 unmapped: 62693376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486989824 unmapped: 62693376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486989824 unmapped: 62693376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486989824 unmapped: 62693376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486989824 unmapped: 62693376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486989824 unmapped: 62693376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486998016 unmapped: 62685184 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486998016 unmapped: 62685184 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487014400 unmapped: 62668800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487014400 unmapped: 62668800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487022592 unmapped: 62660608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487022592 unmapped: 62660608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487022592 unmapped: 62660608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487022592 unmapped: 62660608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487030784 unmapped: 62652416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487030784 unmapped: 62652416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487030784 unmapped: 62652416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487030784 unmapped: 62652416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487030784 unmapped: 62652416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487030784 unmapped: 62652416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487030784 unmapped: 62652416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487038976 unmapped: 62644224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487038976 unmapped: 62644224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487038976 unmapped: 62644224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487047168 unmapped: 62636032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487055360 unmapped: 62627840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487055360 unmapped: 62627840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487055360 unmapped: 62627840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487055360 unmapped: 62627840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487055360 unmapped: 62627840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487055360 unmapped: 62627840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487055360 unmapped: 62627840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487055360 unmapped: 62627840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487071744 unmapped: 62611456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487079936 unmapped: 62603264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487079936 unmapped: 62603264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487079936 unmapped: 62603264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487079936 unmapped: 62603264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487079936 unmapped: 62603264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487079936 unmapped: 62603264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487079936 unmapped: 62603264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487096320 unmapped: 62586880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487096320 unmapped: 62586880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487096320 unmapped: 62586880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487096320 unmapped: 62586880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487096320 unmapped: 62586880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487096320 unmapped: 62586880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487112704 unmapped: 62570496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487112704 unmapped: 62570496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487112704 unmapped: 62570496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487112704 unmapped: 62570496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487112704 unmapped: 62570496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487112704 unmapped: 62570496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487129088 unmapped: 62554112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487129088 unmapped: 62554112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487129088 unmapped: 62554112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487129088 unmapped: 62554112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487129088 unmapped: 62554112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487137280 unmapped: 62545920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487137280 unmapped: 62545920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487137280 unmapped: 62545920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487137280 unmapped: 62545920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487137280 unmapped: 62545920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487137280 unmapped: 62545920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487137280 unmapped: 62545920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487145472 unmapped: 62537728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487145472 unmapped: 62537728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487145472 unmapped: 62537728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487161856 unmapped: 62521344 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487161856 unmapped: 62521344 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487161856 unmapped: 62521344 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487170048 unmapped: 62513152 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487170048 unmapped: 62513152 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487170048 unmapped: 62513152 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487170048 unmapped: 62513152 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487170048 unmapped: 62513152 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487170048 unmapped: 62513152 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487178240 unmapped: 62504960 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487178240 unmapped: 62504960 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487186432 unmapped: 62496768 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487186432 unmapped: 62496768 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487186432 unmapped: 62496768 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487186432 unmapped: 62496768 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487186432 unmapped: 62496768 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487194624 unmapped: 62488576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487194624 unmapped: 62488576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487194624 unmapped: 62488576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487194624 unmapped: 62488576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487194624 unmapped: 62488576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487194624 unmapped: 62488576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487202816 unmapped: 62480384 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487202816 unmapped: 62480384 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487202816 unmapped: 62480384 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487211008 unmapped: 62472192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487219200 unmapped: 62464000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487219200 unmapped: 62464000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487219200 unmapped: 62464000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487219200 unmapped: 62464000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487219200 unmapped: 62464000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487219200 unmapped: 62464000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487219200 unmapped: 62464000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487235584 unmapped: 62447616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487251968 unmapped: 62431232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487251968 unmapped: 62431232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487251968 unmapped: 62431232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487260160 unmapped: 62423040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487260160 unmapped: 62423040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487260160 unmapped: 62423040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487268352 unmapped: 62414848 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487268352 unmapped: 62414848 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487268352 unmapped: 62414848 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487268352 unmapped: 62414848 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487268352 unmapped: 62414848 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487276544 unmapped: 62406656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487276544 unmapped: 62406656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487276544 unmapped: 62406656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487276544 unmapped: 62406656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487276544 unmapped: 62406656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487284736 unmapped: 62398464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487284736 unmapped: 62398464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487284736 unmapped: 62398464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 71K writes, 276K keys, 71K commit groups, 1.0 writes per commit group, ingest: 0.27 GB, 0.04 MB/s#012Cumulative WAL: 71K writes, 26K syncs, 2.68 writes per sync, written: 0.27 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 455 writes, 777 keys, 455 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s#012Interval WAL: 455 writes, 199 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487301120 unmapped: 62382080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487301120 unmapped: 62382080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487301120 unmapped: 62382080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487301120 unmapped: 62382080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487301120 unmapped: 62382080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487301120 unmapped: 62382080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487301120 unmapped: 62382080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: mgrc ms_handle_reset ms_handle_reset con 0x559436c3e000
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3443433125
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3443433125,v1:192.168.122.100:6801/3443433125]
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: mgrc handle_mgr_configure stats_period=5
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487358464 unmapped: 62324736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 ms_handle_reset con 0x559434792c00 session 0x559437b0a3c0
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 ms_handle_reset con 0x559436c5bc00 session 0x559436ec6f00
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 ms_handle_reset con 0x5594350d7400 session 0x559436ddfc20
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487358464 unmapped: 62324736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487358464 unmapped: 62324736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487358464 unmapped: 62324736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487358464 unmapped: 62324736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487366656 unmapped: 62316544 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487366656 unmapped: 62316544 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487366656 unmapped: 62316544 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487366656 unmapped: 62316544 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487374848 unmapped: 62308352 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487383040 unmapped: 62300160 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487383040 unmapped: 62300160 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487383040 unmapped: 62300160 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487383040 unmapped: 62300160 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487383040 unmapped: 62300160 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487383040 unmapped: 62300160 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487383040 unmapped: 62300160 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487391232 unmapped: 62291968 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487391232 unmapped: 62291968 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487391232 unmapped: 62291968 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487399424 unmapped: 62283776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487399424 unmapped: 62283776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487399424 unmapped: 62283776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487407616 unmapped: 62275584 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487407616 unmapped: 62275584 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487407616 unmapped: 62275584 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487407616 unmapped: 62275584 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487415808 unmapped: 62267392 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487424000 unmapped: 62259200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487424000 unmapped: 62259200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487432192 unmapped: 62251008 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487440384 unmapped: 62242816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487440384 unmapped: 62242816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487440384 unmapped: 62242816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487440384 unmapped: 62242816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487448576 unmapped: 62234624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487456768 unmapped: 62226432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487456768 unmapped: 62226432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487456768 unmapped: 62226432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487456768 unmapped: 62226432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487456768 unmapped: 62226432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487456768 unmapped: 62226432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487456768 unmapped: 62226432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487464960 unmapped: 62218240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487464960 unmapped: 62218240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487464960 unmapped: 62218240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487464960 unmapped: 62218240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487464960 unmapped: 62218240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487464960 unmapped: 62218240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487464960 unmapped: 62218240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487473152 unmapped: 62210048 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487473152 unmapped: 62210048 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487481344 unmapped: 62201856 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487481344 unmapped: 62201856 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487481344 unmapped: 62201856 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487489536 unmapped: 62193664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487489536 unmapped: 62193664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487489536 unmapped: 62193664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487489536 unmapped: 62193664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487489536 unmapped: 62193664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487497728 unmapped: 62185472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:13.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487497728 unmapped: 62185472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487497728 unmapped: 62185472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487505920 unmapped: 62177280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487505920 unmapped: 62177280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487505920 unmapped: 62177280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487505920 unmapped: 62177280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487505920 unmapped: 62177280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487505920 unmapped: 62177280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487505920 unmapped: 62177280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487505920 unmapped: 62177280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487514112 unmapped: 62169088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487522304 unmapped: 62160896 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487522304 unmapped: 62160896 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487522304 unmapped: 62160896 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487546880 unmapped: 62136320 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487546880 unmapped: 62136320 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487546880 unmapped: 62136320 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487546880 unmapped: 62136320 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487546880 unmapped: 62136320 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487555072 unmapped: 62128128 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487555072 unmapped: 62128128 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487555072 unmapped: 62128128 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'config diff' '{prefix=config diff}'
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'config show' '{prefix=config show}'
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487702528 unmapped: 61980672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'counter dump' '{prefix=counter dump}'
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'counter schema' '{prefix=counter schema}'
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487481344 unmapped: 62201856 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487514112 unmapped: 62169088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct  2 09:47:13 np0005466030 ceph-osd[78262]: do_command 'log dump' '{prefix=log dump}'
Oct  2 09:47:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  2 09:47:13 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4214691663' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  2 09:47:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  2 09:47:13 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3678720938' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  2 09:47:13 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:13 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:13 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:13.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:13 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  2 09:47:13 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3968431496' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  2 09:47:14 np0005466030 nova_compute[230518]: 2025-10-02 13:47:14.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:47:14 np0005466030 nova_compute[230518]: 2025-10-02 13:47:14.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:47:14 np0005466030 nova_compute[230518]: 2025-10-02 13:47:14.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:47:14 np0005466030 nova_compute[230518]: 2025-10-02 13:47:14.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  2 09:47:14 np0005466030 nova_compute[230518]: 2025-10-02 13:47:14.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  2 09:47:14 np0005466030 nova_compute[230518]: 2025-10-02 13:47:14.113 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  2 09:47:14 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct  2 09:47:14 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3951671801' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct  2 09:47:15 np0005466030 nova_compute[230518]: 2025-10-02 13:47:15.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:47:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:47:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct  2 09:47:15 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1067755395' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct  2 09:47:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct  2 09:47:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:15.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct  2 09:47:15 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:15 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:47:15 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:15.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:47:15 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct  2 09:47:15 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1739021306' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct  2 09:47:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct  2 09:47:16 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1965547968' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct  2 09:47:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct  2 09:47:16 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3440469174' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct  2 09:47:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct  2 09:47:16 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2969688570' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct  2 09:47:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct  2 09:47:16 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3042551414' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct  2 09:47:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct  2 09:47:16 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2672268410' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct  2 09:47:16 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct  2 09:47:16 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3946441983' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct  2 09:47:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct  2 09:47:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3676004763' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct  2 09:47:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:17.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct  2 09:47:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/868572869' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct  2 09:47:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct  2 09:47:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2457636684' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct  2 09:47:17 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:17 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:17 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:17.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct  2 09:47:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2065031953' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct  2 09:47:17 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct  2 09:47:17 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2951718361' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  2 09:47:17 np0005466030 nova_compute[230518]: 2025-10-02 13:47:17.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:47:18 np0005466030 systemd[1]: Starting Hostname Service...
Oct  2 09:47:18 np0005466030 systemd[1]: Started Hostname Service.
Oct  2 09:47:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct  2 09:47:18 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3340405755' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct  2 09:47:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct  2 09:47:18 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1209891436' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct  2 09:47:18 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct  2 09:47:18 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4036422266' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct  2 09:47:19 np0005466030 nova_compute[230518]: 2025-10-02 13:47:19.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  2 09:47:19 np0005466030 nova_compute[230518]: 2025-10-02 13:47:19.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:47:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:19.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:19 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:19 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct  2 09:47:19 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:19.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct  2 09:47:19 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct  2 09:47:19 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3080644985' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct  2 09:47:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  2 09:47:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct  2 09:47:20 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2646368920' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct  2 09:47:20 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct  2 09:47:20 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/791524135' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  2 09:47:21 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct  2 09:47:21 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2251973729' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct  2 09:47:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:21.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:21 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  2 09:47:21 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  2 09:47:21 np0005466030 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct  2 09:47:21 np0005466030 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct  2 09:47:21 np0005466030 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:21.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  2 09:47:21 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  2 09:47:21 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  2 09:47:22 np0005466030 nova_compute[230518]: 2025-10-02 13:47:22.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  2 09:47:22 np0005466030 nova_compute[230518]: 2025-10-02 13:47:22.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  2 09:47:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct  2 09:47:22 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1550219910' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct  2 09:47:22 np0005466030 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct  2 09:47:22 np0005466030 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/500212777' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct  2 09:47:22 np0005466030 nova_compute[230518]: 2025-10-02 13:47:22.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
